NASA Technical Reports Server (NTRS)
Mackro, J.
1973-01-01
Results are presented of a study of closed-circuit television as the means of providing the task-to-operator feedback necessary for efficient performance of a remote manipulation system. Experiments were performed to determine the remote video configuration that yields the best overall system. Two categories of tests were conducted: those involving remote position (rate) control of the video system alone, and those in which closed-circuit TV was used together with manipulation of the objects themselves.
Seelye, Adriana M; Wild, Katherine V; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A
2012-12-01
Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remotely controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts.
Seelye, Adriana M.; Larimer, Nicole; Maxwell, Shoshana; Kearns, Peter; Kaye, Jeffrey A.
2012-01-01
Objective: Remote telepresence provided by tele-operated robotics represents a new means for obtaining important health information, improving older adults' social and daily functioning and providing peace of mind to family members and caregivers who live remotely. In this study we tested the feasibility of use and acceptance of a remotely controlled robot with video-communication capability in independently living, cognitively intact older adults. Materials and Methods: A mobile remotely controlled robot with video-communication ability was placed in the homes of eight seniors. The attitudes and preferences of these volunteers and those of family or friends who communicated with them remotely via the device were assessed through survey instruments. Results: Overall experiences were consistently positive, with the exception of one user who subsequently progressed to a diagnosis of mild cognitive impairment. Responses from our participants indicated that in general they appreciated the potential of this technology to enhance their physical health and well-being, social connectedness, and ability to live independently at home. Remote users, who were friends or adult children of the participants, were more likely to test the mobility features and had several suggestions for additional useful applications. Conclusions: Results from the present study showed that a small sample of independently living, cognitively intact older adults and their remote collaterals responded positively to a remotely controlled robot with video-communication capabilities. Research is needed to further explore the feasibility and acceptance of this type of technology with a variety of patients and their care contacts. PMID:23082794
NASA Technical Reports Server (NTRS)
White, Preston A., III
1994-01-01
The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.
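As a rough illustration of the approach described above, the sketch below packs a camera control command into a teletext-style data packet of the kind that could ride in the vertical blanking interval. The field layout, command codes, and checksum are invented for illustration; they are not the actual NABTS packet format or the OTV command set.

```python
# Hypothetical sketch: embedding a pan/tilt/zoom/focus command in a
# teletext-style VBI data packet. Field layout and codes are invented,
# not the real NABTS (EIA-516) structure.

def build_packet(camera_id: int, command: str, value: int) -> bytes:
    commands = {"pan": 0x01, "tilt": 0x02, "zoom": 0x03, "focus": 0x04}
    payload = bytes([camera_id, commands[command], value & 0xFF])
    checksum = sum(payload) & 0xFF            # simple additive checksum
    # clock run-in bytes (0x55 0x55) and a framing byte precede the payload
    return bytes([0x55, 0x55, 0xE7]) + payload + bytes([checksum])

pkt = build_packet(camera_id=12, command="pan", value=90)
```

A real deployment would follow the NABTS packet structure (clock run-in, framing code, prefix with Hamming protection) rather than this simplified layout.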
Display aids for remote control of untethered undersea vehicles
NASA Technical Reports Server (NTRS)
Verplank, W. L.
1978-01-01
A predictor display superimposed on slow-scan video or sonar data is proposed as a method to allow better remote manual control of an untethered submersible. Simulation experiments show good control under circumstances which otherwise make control practically impossible.
A real-time remote video streaming platform for ultrasound imaging.
Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel
2016-08-01
Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience, yet few skilled sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system are evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.
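For a sense of why streaming parameters and compression matter in such a platform, here is a back-of-envelope sketch of the raw (uncompressed) bitrate of a near-HD video stream. The resolution, color depth, and frame rate below are assumptions for illustration, not values from the paper.

```python
# Back-of-envelope arithmetic: raw bitrate of a near-HD video stream,
# showing why compression is needed before wireless transmission.
# 1280x720, 24-bit color, 30 fps are assumed figures, not the paper's.

def raw_bitrate_mbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed bitrate in Mbit/s."""
    return width * height * bits_per_pixel * fps / 1e6

mbps = raw_bitrate_mbps(1280, 720, 24, 30)   # hundreds of Mbit/s uncompressed
```

Even a modest wireless link (tens of Mbit/s) therefore requires two orders of magnitude of compression, which is where codec choice and streaming parameters come in.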
Armellino, Donna; Cifu, Kelly; Wallace, Maureen; Johnson, Sherly; DiCapua, John; Dowling, Oonagh; Jacobs, Mitchel; Browning, Susan
2018-05-01
A pilot initiative was performed to assess the use of remote video auditing in monitoring compliance with manual-cleaning protocols for endoscopic retrograde cholangiopancreatography (ERCP) endoscopes. Compliance with manual-cleaning steps following the initiation of feedback was measured. A video feed of the ERCP reprocessing room was provided to remote auditors who scored items on an ERCP endoscope manual-cleaning checklist. Compliance feedback was provided in the form of reports and reeducation. Outcomes were reported as checklist compliance. The use of remote video auditing to document manual processing is a feasible approach; feedback and reeducation increased manual-cleaning compliance from 53.1% (95% confidence interval, 34.7-71.6) to 98.9% (95% confidence interval, 98.1-99.6). Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Bringing "Scientific Expeditions" Into the Schools
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self-controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)
Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web
NASA Technical Reports Server (NTRS)
Watson, Val; Chancellor, Marisa K. (Technical Monitor)
1996-01-01
Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self-controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Moving Picture Experts Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. 
Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.
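The bandwidth advantage quoted in the abstract above can be summarized with a one-line calculation. The two peak rates are the measured figures reported there; the ratio is simply derived from them.

```python
# Bandwidth comparison from the abstract above: streaming rendered
# pixels (video format) versus sending raw data plus viewing scripts.
# The two peak rates are the reported measurements.

video_stream_kbps = 500    # measured peak for a small video picture
script_stream_kbps = 1     # measured peak for raw data + viewing scripts

savings_factor = video_stream_kbps / script_stream_kbps   # 500x reduction
```

The design choice this reflects: pixels are expensive to ship but cheap to regenerate locally, so moving the rendering to the viewer's workstation trades network bandwidth for local compute.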
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the Fall of 1993, NASA Ames deployed a modified Phantom S2 remotely operated underwater vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant for space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle, while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. 
The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
Airport Remote Tower Sensor Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Gawdiak, Yuri; Leidichj, Christopher; Papasin, Richard; Tran, Peter B.; Bass, Kevin
2006-01-01
Networks of video cameras, meteorological sensors, and ancillary electronic equipment are under development in collaboration among NASA Ames Research Center, the Federal Aviation Administration (FAA), and the National Oceanic and Atmospheric Administration (NOAA). These networks are to be established at and near airports to provide real-time information on local weather conditions that affect aircraft approaches and landings. The prototype network is an airport-approach-zone camera system (AAZCS), which has been deployed at San Francisco International Airport (SFO) and San Carlos Airport (SQL). The AAZCS includes remotely controlled color video cameras located on top of the SFO and SQL air-traffic control towers. The cameras are controlled by the NOAA Center Weather Service Unit located at the Oakland Air Route Traffic Control Center and are accessible via a secure Web site. The AAZCS cameras can be zoomed and can be panned and tilted to cover a field of view 220° wide. The NOAA observer can see the sky condition as it is changing, thereby making possible a real-time evaluation of the conditions along the approach zones of SFO and SQL. The next-generation network, denoted a remote tower sensor system (RTSS), will soon be deployed at the Half Moon Bay Airport, and a version of it will eventually be deployed at Los Angeles International Airport. In addition to remote control of video cameras via secure Web links, the RTSS offers real-time weather observations, remote sensing, portability, and a capability for deployment at remote and uninhabited sites. The RTSS can be used at airports that lack control towers, as well as at major airport hubs, to provide synthetic augmentation of vision for both local and remote operations under what would otherwise be conditions of low or even zero visibility.
Woolf, Celia; Caute, Anna; Haigh, Zula; Galliers, Julia; Wilson, Stephanie; Kessie, Awurabena; Hirani, Shashi; Hegarty, Barbara; Marshall, Jane
2016-04-01
To test the feasibility of a randomised controlled trial comparing face to face and remotely delivered word finding therapy for people with aphasia. A quasi-randomised controlled feasibility study comparing remote therapy delivered from a University lab, remote therapy delivered from a clinical site, face to face therapy and an attention control condition. A University lab and NHS outpatient service. Twenty-one people with aphasia following left hemisphere stroke. Eight sessions of word finding therapy, delivered either face to face or remotely, were compared to an attention control condition comprising eight sessions of remotely delivered supported conversation. The remote conditions used mainstream video conferencing technology. Feasibility was assessed by recruitment and attrition rates, participant observations and interviews, and treatment fidelity checking. Effects of therapy on word retrieval were assessed by tests of picture naming and naming in conversation. Twenty-one participants were recruited over 17 months, with one lost at baseline. Compliance and satisfaction with the intervention were good. Treatment fidelity was high for both remote and face to face delivery (1251/1421 therapist behaviours were compliant with the protocol). Participants who received therapy improved on picture naming significantly more than controls (mean numerical gains: 20.2 (remote from University); 41 (remote from clinical site); 30.8 (face to face); 5.8 (attention control); P <.001). There were no significant differences between groups in the assessment of conversation. Word finding therapy can be delivered via mainstream internet video conferencing. Therapy improved picture naming, but not naming in conversation. © The Author(s) 2015.
Telepathology. Long-distance diagnosis.
Weinstein, R S; Bloom, K J; Rozek, L S
1989-04-01
Telepathology is defined as the practice of pathology at a distance, by visualizing an image on a video monitor rather than viewing a specimen directly through a microscope. Components of a telepathology system include the following: (1) a workstation equipped with a high-resolution video camera attached to a remote-controlled light microscope; (2) a pathologist workstation incorporating controls for manipulating the robotic microscope as well as a high-resolution video monitor; and (3) a telecommunications link. Progress has been made in designing and constructing telepathology workstations and fully motorized, computer-controlled light microscopes suitable for telepathology. In addition, components such as video signal digital encoders and decoders that produce remarkably stable, high-color fidelity, and high-resolution images have been incorporated into the workstations. Resolution requirements for the video microscopy component of telepathology have been formally examined in receiver operator characteristic (ROC) curve analyses. Test-of-concept demonstrations have been completed with the use of geostationary satellites as the broadband communication linkages for 750-line resolution video. Potential benefits of telepathology include providing a means of conveniently delivering pathology services in real-time to remote sites or underserviced areas, time-sharing of pathologists' services by multiple institutions, and increasing accessibility to specialty pathologists.
Perception of synchronization errors in haptic and visual communications
NASA Astrophysics Data System (ADS)
Kameyama, Seiji; Ishibashi, Yutaka
2006-10-01
This paper deals with a system which conveys the haptic sensation experienced by a user to a remote user. In the system, the user controls a haptic interface device with another remote haptic interface device while watching video. Haptic media and video of a real object which the user is touching are transmitted to the other user. By subjective assessment, we investigate the allowable range and imperceptible range of synchronization error between haptic media and video. We employ four real objects and ask each subject whether the synchronization error is perceived or not for each object in the assessment. Assessment results show that the synchronization error is more easily perceived when the haptic media are ahead of the video than when they are behind it.
Economical Video Monitoring of Traffic
NASA Technical Reports Server (NTRS)
Houser, B. C.; Paine, G.; Rubenstein, L. D.; Parham, O. Bruce, Jr.; Graves, W.; Bradley, C.
1986-01-01
Data compression allows video signals to be transmitted economically on telephone circuits. Telephone lines transmit television signals to remote traffic-control center. Lines also carry command signals from center to TV camera and compressor at highway site. Video system with television cameras positioned at critical points on highways allows traffic controllers to determine visually, almost immediately, exact cause of traffic-flow disruption, e.g., accidents, breakdowns, or spills. Controllers can then dispatch appropriate emergency services and alert motorists to minimize traffic backups.
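One compression idea well suited to mostly static highway scenes is conditional replenishment: transmit only the pixels that changed appreciably since the previous frame. The sketch below illustrates the principle on a toy frame; it is not the actual codec used in the system described above.

```python
# Minimal sketch of conditional-replenishment compression: only pixels
# that change beyond a threshold between frames are sent. Illustrative
# only; not the codec of the traffic-monitoring system.

def changed_pixels(prev, curr, threshold=10):
    """Return (index, new_value) pairs for pixels that changed enough."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
            if abs(c - p) > threshold]

prev_frame = [100, 100, 100, 100]
curr_frame = [100, 180, 100, 100]       # one pixel changed (e.g. a vehicle)
updates = changed_pixels(prev_frame, curr_frame)
```

On a quiet highway scene most pixels are unchanged frame to frame, so the update list, and hence the required telephone-line bandwidth, stays small.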
Advances in the Remote Glow Discharge Experiment
NASA Astrophysics Data System (ADS)
Dominguez, Arturo; Zwicker, A.; Rusaits, L.; McNulty, M.; Sosa, Carl
2014-10-01
The Remote Glow Discharge Experiment (RGDX) is a DC discharge plasma with variable pressure, end-plate voltage and externally applied axial magnetic field. While the experiment is located at PPPL, a webcam displays the live video online. The parameters (voltage, magnetic field and pressure) can be controlled remotely in real-time by opening a URL which shows the streaming video, as well as a set of LabVIEW controls. The RGDX is designed as an outreach tool that uses the attractive nature of a plasma in order to reach a wide audience and extend the presence of plasma physics and fusion around the world. In March 2014, the RGDX was made publicly available and, as of early July, it has had approximately 3500 unique visits from 107 countries and almost all 50 US states. We present recent upgrades, including the ability to remotely control the distance between the electrodes. These changes give users the capability of measuring Paschen's Law remotely and provide a comprehensive introduction to plasma physics to those that do not have access to the necessary equipment.
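Paschen's Law, which the electrode-distance upgrade lets remote users measure, gives the breakdown voltage as a function of the pressure-gap product. A minimal sketch follows; the coefficients A and B and the secondary-emission yield gamma are textbook-style values for air, not parameters of the RGDX itself.

```python
import math

# Paschen's law sketch: breakdown voltage V(p*d) for a gas gap.
# A (1/(cm*Torr)), B (V/(cm*Torr)) and gamma are assumed air-like
# textbook values, not measured RGDX parameters.

def paschen_voltage(p_torr, d_cm, A=15.0, B=365.0, gamma=0.01):
    """Breakdown voltage (V) at pressure p (Torr) and gap d (cm)."""
    pd = p_torr * d_cm
    return B * pd / (math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma)))
```

Sweeping the gap d at fixed pressure, as the upgraded RGDX allows, traces out the characteristic U-shaped curve with a minimum breakdown voltage at an intermediate p*d.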
NASA Technical Reports Server (NTRS)
Wilcox, Brian H.
1994-01-01
System for remote control of robotic land vehicle requires only small radio-communication bandwidth. Twin video cameras on vehicle create stereoscopic images. Operator views cross-polarized images on two cathode-ray tubes through correspondingly polarized spectacles. By use of cursor on frozen image, remote operator designates path. Vehicle proceeds to follow path, by use of limited degree of autonomous control to cope with unexpected conditions. System concept, called "computer-aided remote driving" (CARD), potentially useful in exploration of other planets, military surveillance, firefighting, and clean-up of hazardous materials.
A low cost, high performance remotely controlled backhoe/excavator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzo, J.
1995-12-31
This paper addresses a state-of-the-art, low-cost, remotely controlled backhoe/excavator system for remediation use at hazardous waste sites. The all-weather, all-terrain Remote Dig-It is based on a simple, proven construction platform and incorporates state-of-the-art sensors, control, telemetry and other subsystems derived from advanced underwater remotely operated vehicle systems. The system can be towed to a site without the use of a trailer, manually operated by an on-board operator, or operated via a fiber-optic or optional RF communications link by a remotely positioned operator. A proportional control system is piggybacked onto the standard manual control system. The control system improves manual operation, allows rapid manual/remote mode selection and provides fine manual or remote control of all functions. The system incorporates up to 4 separate video links, acoustic obstacle proximity sensors, stereo audio pickups and optional differential GPS navigation. Video system options include electronic panning and tilting within a distortion-corrected wide-angle field of view. The backhoe/excavator subsystem has a quick-disconnect interface feature which allows its use as a manipulator with a wide variety of end effectors and tools. The Remote Dig-It was developed to respond to the need for a low-cost, effective remediation system for use at sites containing hazardous materials. The prototype system was independently evaluated for this purpose by the Army at the Jefferson Proving Ground, where it surpassed all performance goals. At the time of this writing, the Remote Dig-It system is currently the only backhoe/excavator which met the Army's goals for remediation systems for use at hazardous waste sites, and it costs a fraction of any known competing offerings.
Shipboard Calibration Network Extension Utilizing COTS Products
2014-09-01
The system is configured to emulate the MCS (Machinery Control Systems) console; a ServSwitch Wizard IP Plus keyboard, video, and mouse (KVM) switch is used to allow remote access over the local area network (LAN).
Analysis and Selection of a Remote Docking Simulation Visual Display System
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Fagg, M. F.
1984-01-01
The development of a remote docking simulation visual display system is examined. Video system and operator performance are discussed as well as operator command and control requirements and a design analysis of the reconfigurable work station.
Remote control of an MR imaging study via tele-collaboration tools
NASA Astrophysics Data System (ADS)
Sullivan, John M., Jr.; Mullen, Julia S.; Benz, Udo A.; Schmidt, Karl F.; Murugavel, Murali; Chen, Wei; Ghadyani, Hamid
2005-04-01
In contrast to traditional 'video conferencing', the Access Grid (AG), developed by Argonne National Laboratory, is a collaboration of audio, video and shared application tools which provide the 'persistent presence' of each participant. The shared application tools include the ability to share viewing and control of presentations, browsers, images and movies. When used in conjunction with Virtual Network Computing (VNC) software, an investigator can interact with colleagues at a remote site and control remote systems via local keyboard and mouse commands. This combination allows for effective viewing and discussion of information, i.e. data, images, and results. It is clear that such an approach, when applied to the medical sciences, will provide a means by which a team of experts can not only access, but interact with and control medical devices for the purpose of experimentation, diagnosis, surgery and therapy. We present the development of an application node at our 4.7 Tesla MR magnet facility, and a demonstration of remote investigator control of the magnet. A local magnet operator performs manual tasks such as loading the test subject into the magnet and administering the stimulus associated with the functional MRI study. The remote investigator has complete control of the magnet console. S/he can adjust the gradient coil settings, the pulse sequence, image capture frequency, etc. A geographically distributed audience views and interacts with the remote investigator and local MR operator. This AG demonstration of MR magnet control illuminates the potential of untethered medical experiments, procedures and training.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos
2012-06-01
When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks (WSNs), an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding of how to support high-quality visual communications in such a demanding context.
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. In system including general-purpose controller, operator controls remote television camera by speaking commands.
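The controller's dispatch idea, comparing incoming command bytes against a stored code table and mapping matches to switch closures, can be sketched as follows. The command codes and switch assignments below are invented for illustration; the original used an 8251 USART and a ROM lookup rather than a Python dictionary.

```python
# Sketch of byte-coded command dispatch: an incoming byte is compared
# against a stored code table (ROM in the original) and, if matched,
# closes one of up to 48 switches. Codes and switch numbers are invented.

CODE_TABLE = {
    0x10: ("pan_left", 0),    # command byte -> (function name, switch index)
    0x11: ("pan_right", 1),
    0x20: ("tilt_up", 2),
    0x21: ("tilt_down", 3),
}

def dispatch(command_byte: int, switches: list) -> str:
    """Close the switch mapped to command_byte; ignore unknown codes."""
    if command_byte not in CODE_TABLE:
        return "ignored"
    name, idx = CODE_TABLE[command_byte]
    switches[idx] = True          # close the corresponding switch
    return name

switches = [False] * 48           # the controller drives up to 48 switches
action = dispatch(0x20, switches)
```

Silently ignoring unknown codes mirrors the safe behavior one would want on a shared serial line, where bytes intended for other devices may arrive.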
FAST at MACH 20: clinical ultrasound aboard the International Space Station.
Sargsyan, Ashot E; Hamilton, Douglas R; Jones, Jeffrey A; Melton, Shannon; Whitson, Peggy A; Kirkpatrick, Andrew W; Martin, David; Dulchavsky, Scott A
2005-01-01
Focused assessment with sonography for trauma (FAST) examination has been proved accurate for diagnosing trauma when performed by nonradiologist physicians. Recent reports have suggested that nonphysicians also may be able to perform the FAST examination reliably. A multipurpose ultrasound system is installed on the International Space Station as a component of the Human Research Facility. Nonphysician crew members aboard the International Space Station receive modest training in hardware operation, sonographic techniques, and remotely guided scanning. This report documents the first FAST examination conducted in space, as part of the sustained effort to maintain the highest possible level of available medical care during long-duration space flight. An International Space Station crew member with minimal sonography training was remotely guided through a FAST examination by an ultrasound imaging expert from Mission Control Center using private real-time two-way audio and a private space-to-ground video downlink (7.5 frames/second). There was a 2-second satellite delay for both video and audio. To facilitate the real-time telemedical ultrasound examination, identical reference cards showing topologic reference points and hardware controls were available to both the crew member and the ground-based expert. A FAST examination, including four standard abdominal windows, was completed in approximately 5.5 minutes. Following commands from the Mission Control Center-based expert, the crew member acquired all target images without difficulty. The anatomic content and fidelity of the ultrasound video were excellent and would allow clinical decision making. It is possible to conduct a remotely guided FAST examination with excellent clinical results and speed, even with a significantly reduced video frame rate and a 2-second communication latency. Wider application of trauma ultrasound for remote medicine on Earth appears possible and warranted.
Highly Portable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1 to 1.5 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Vierling, L.A.; Fersdahl, M.; Chen, X.; Li, Z.; Zimmerman, P.
2006-01-01
We describe a new remote sensing system called the Short Wave Aerostat-Mounted Imager (SWAMI). The SWAMI is designed to acquire co-located video imagery and hyperspectral data to study basic remote sensing questions and to link landscape level trace gas fluxes with spatially and temporally appropriate spectral observations. The SWAMI can fly at altitudes up to 2 km above ground level to bridge the spatial gap between radiometric measurements collected near the surface and those acquired by other aircraft or satellites. The SWAMI platform consists of a dual channel hyperspectral spectroradiometer, video camera, GPS, thermal infrared sensor, and several meteorological and control sensors. All SWAMI functions (e.g. data acquisition and sensor pointing) can be controlled from the ground via wireless transmission. Sample data from the sampling platform are presented, along with several potential scientific applications of SWAMI data.
Single-Fiber Optical Link For Video And Control
NASA Technical Reports Server (NTRS)
Galloway, F. Houston
1993-01-01
Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode- or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.
Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu
2017-01-01
Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of foreground (FG) for each frame is also provided. A series of experiments was conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics were employed to assess the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scene or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
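As a point of reference for what such an evaluation exercises, a minimal background-subtraction baseline can be sketched as a classic running-average model (this is a generic textbook scheme, not one of the algorithms evaluated in the paper, and the synthetic "IR" sequence below is purely illustrative):

```python
import numpy as np

def running_average_bs(frames, alpha=0.05, threshold=25):
    """Running-average background subtraction: keep a background model B,
    flag pixels far from B as foreground, then update B slowly toward the frame."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        fg_mask = np.abs(f - background) > threshold        # foreground where |f - B| > T
        background = (1 - alpha) * background + alpha * f   # slow background update
        masks.append(fg_mask)
    return masks

# Synthetic sequence: a static 32x32 background with a bright moving blob,
# loosely mimicking a hot object in an MWIR scene.
rng = np.random.default_rng(0)
base = rng.integers(90, 110, size=(32, 32))
frames = []
for t in range(10):
    frame = base.copy()
    frame[10:14, t:t + 4] = 255        # moving hot object
    frames.append(frame)

masks = running_average_bs(frames)
print(masks[-1].sum())                 # pixels flagged as foreground in the last frame
```

Real evaluations like the one above then score such masks pixel-wise against ground-truth foreground, which is exactly what the per-frame ground truth in this dataset enables.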
Remote presence proctoring by using a wireless remote-control videoconferencing system.
Smith, C Daniel; Skandalakis, John E
2005-06-01
Remote presence in an operating room to allow an experienced surgeon to proctor a surgeon has been promised through robotics and telesurgery solutions. Although several such systems have been developed and commercialized, little progress has been made using telesurgery for anything more than live demonstrations of surgery. This pilot project explored the use of a new videoconferencing capability to determine if it offers advantages over existing systems. The video conferencing system used is a PC-based system with a flat screen monitor and an attached camera that is then mounted on a remotely controlled platform. This device is controlled from a remotely placed PC-based videoconferencing system computer outfitted with a joystick. Using the public Internet and a wireless router at the client site, a surgeon at the control station can manipulate the videoconferencing system. Controls include navigating the unit around the room and moving the flat screen/camera portion like a head looking up/down and right/left. This system (InTouch Medical, Santa Barbara, CA) was used to proctor medical students during an anatomy class cadaver dissection. The ability of the remote surgeon to effectively monitor the students' dissections and direct their activities was assessed subjectively by students and surgeon. This device was very effective at providing a controllable and interactive presence in the anatomy lab. Students felt they were interacting with a person rather than a video screen and quickly forgot that the surgeon was not in the room. The ability to move the device within the environment rather than just observe the environment from multiple fixed camera angles gave the surgeon a similar feel of true presence. A remote-controlled videoconferencing system provides a more real experience for both student and proctor. Future development of such a device could greatly facilitate progress in implementation of remote presence proctoring.
Demonstration of the Low-Cost Virtual Collaborative Environment (VCE)
NASA Technical Reports Server (NTRS)
Bowers, David; Montes, Leticia; Ramos, Angel; Joyce, Brendan; Lumia, Ron
1997-01-01
This paper demonstrates the feasibility of a low-cost approach to remotely controlling equipment. Our demonstration system consists of a PC, the PUMA 560 robot with Barrett hand, and commercially available controller and teleconferencing software. The system provides a graphical user interface which allows a user to program equipment tasks and preview motions, i.e., simulate the results. Once satisfied that the actions are both safe and accomplish the task, the remote user sends the data over the Internet to the local site for execution on the real equipment. A video link provides visual feedback to the remote site. This technology lends itself readily to NASA's upcoming Mars expeditions by providing remote simulation and control of equipment.
Remote video assessment for missile launch facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, G.G.; Stewart, W.A.
1995-07-01
The widely dispersed, unmanned launch facilities (LFs) for land-based ICBMs (intercontinental ballistic missiles) currently do not have visual assessment capability for existing intrusion alarms. The security response force currently must assess each alarm on-site. Remote assessment will enhance manpower, safety, and security efforts. Sandia National Laboratories was tasked by the USAF Electronic Systems Center to research, recommend, and demonstrate a cost-effective remote video assessment capability at missile LFs. The project's charter was to provide: system concepts; market survey analysis; technology search recommendations; and operational hardware demonstrations for remote video assessment from a missile LF to a remote security center via a cost-effective transmission medium and without using visible, on-site lighting. The technical challenges of this project were to: analyze various video transmission media, with emphasis on using the existing missile system copper line, which can be as long as 30 miles; achieve an extremely low-cost system because of the many sites requiring system installation; integrate the video assessment system with the current LF alarm system; and provide video assessment at the remote sites with non-visible lighting.
Remote Science Operation Center research
NASA Technical Reports Server (NTRS)
Banks, P. M.
1986-01-01
Progress in the following areas is discussed: the design, planning and operation of a remote science payload operations control center; design and planning of a data link via satellite; and the design and prototyping of an advanced workstation environment for multi-media (3-D computer aided design/computer aided engineering, voice, video, text) communications and operations.
Using NetMeeting for remote configuration of the Otto Bock C-Leg: technical considerations.
Lemaire, E D; Fawcett, J A
2002-08-01
Telehealth has the potential to be a valuable tool for technical and clinical support of computer-controlled prosthetic devices. This pilot study examined the use of Internet-based, desktop video conferencing for remote configuration of the Otto Bock C-Leg. Laboratory tests involved connecting two computers running Microsoft NetMeeting over a local area network (IP protocol). At 56 kb/s, DSL/cable, and 10 Mb/s LAN speeds, a prosthetist remotely configured a user's C-Leg by using Application Sharing, Live Video, and Live Audio. A similar test between sites in Ottawa and Toronto, Canada was limited by the notebook computer's 28 kb/s modem. At the 28 kb/s Internet-connection speed, NetMeeting's application sharing feature was not able to update the remote Sliders window fast enough to display peak toe loads and peak knee angles. These results support the use of NetMeeting as an accessible and cost-effective tool for remote C-Leg configuration, provided that sufficient Internet data-transfer speed is available.
Automated videography for residential communications
NASA Astrophysics Data System (ADS)
Kurtz, Andrew F.; Neustaedter, Carman; Blose, Andrew C.
2010-02-01
The current widespread use of webcams for personal video communication over the Internet suggests that opportunities exist to develop video communications systems optimized for domestic use. We discuss both prior and existing technologies, and the results of user studies that indicate potential needs and expectations for people relative to personal video communications. In particular, users anticipate an easily used, high image quality video system, which enables multitasking communications during the course of real-world activities and provides appropriate privacy controls. To address these needs, we propose a potential approach premised on automated capture of user activity. We then describe a method that adapts cinematography principles, with a dual-camera videography system, to automatically control image capture relative to user activity, using semantic or activity-based cues to determine user position and motion. In particular, we discuss an approach to automatically manage shot framing, shot selection, and shot transitions, with respect to one or more local users engaged in real-time, unscripted events, while transmitting the resulting video to a remote viewer. The goal is to tightly frame subjects (to provide more detail), while minimizing subject loss and repeated abrupt shot framing changes in the images as perceived by a remote viewer. We also discuss some aspects of the system and related technologies that we have experimented with thus far. In summary, the method enables users to participate in interactive video-mediated communications while engaged in other activities.
Virtual Ultrasound Guidance for Inexperienced Operators
NASA Technical Reports Server (NTRS)
Caine, Timothy; Martin, David
2012-01-01
Medical ultrasound or echocardiographic studies are highly operator-dependent and generally require lengthy training and internship to perfect. To obtain quality echocardiographic images in remote environments, such as on-orbit, remote guidance of studies has been employed. This technique involves minimal training for the user, coupled with remote guidance from an expert. When real-time communication or expert guidance is not available, a more autonomous system of guiding an inexperienced operator through an ultrasound study is needed. One example would be missions beyond low Earth orbit in which the time delay inherent with communication will make remote guidance impractical. The Virtual Ultrasound Guidance system is a combination of hardware and software. The hardware portion includes, but is not limited to, video glasses that allow hands-free, full-screen viewing. The glasses also allow the operator a substantial field of view below the glasses to view and operate the ultrasound system. The software is a comprehensive video program designed to guide an inexperienced operator through a detailed ultrasound or echocardiographic study without extensive training or guidance from the ground. The program contains a detailed description using video and audio to demonstrate equipment controls, ergonomics of scanning, study protocol, and scanning guidance, including recovery from sub-optimal images. The components used in the initial validation of the system include an Apple iPod Classic third-generation as the video source, and Myvue video glasses. Initially, the program prompts the operator to power-up the ultrasound and position the patient. The operator would put on the video glasses and attach them to the video source. After turning on both devices and the ultrasound system, the audio-video guidance would then instruct on patient positioning and scanning techniques. 
A detailed scanning protocol follows with descriptions and reference video of each view along with advice on technique. The program also instructs the operator regarding the types of images to store and how to overcome pitfalls in scanning. Images can be forwarded to the ground or other site when convenient. Following study completion, the video glasses, video source, and ultrasound system are powered down and stored. Virtually any equipment that can play back video can be used to play back the program. This includes a DVD player, personal computer, and some MP3 players.
Distributed observing facility for remote access to multiple telescopes
NASA Astrophysics Data System (ADS)
Callegari, Massimo; Panciatici, Antonio; Pasian, Fabio; Pucillo, Mauro; Santin, Paolo; Aro, Simo; Linde, Peter; Duran, Maria A.; Rodriguez, Jose A.; Genova, Francoise; Ochsenbein, Francois; Ponz, J. D.; Talavera, Antonio
2000-06-01
The REMOT (Remote Experiment Monitoring and conTrol) project was financed in 1996 by the European Community in order to investigate the possibility of generalizing remote access to scientific instruments. After the feasibility of this idea was demonstrated, the DYNACORE (DYNAmically COnfigurable Remote Experiment monitoring and control) project was initiated as a REMOT follow-up. Its purpose is to develop software technology to support scientists in two different domains: astronomy and plasma physics. The resulting system allows (1) simultaneous multiple user access to different experimental facilities, (2) dynamic adaptability to different kinds of real instruments, (3) exploitation of the communication infrastructure's features, (4) ease of use through intuitive graphical interfaces, and (5) additional inter-user communication using off-the-shelf products such as video-conferencing tools, chat programs and shared blackboards.
Drummond, David; Arnaud, Cécile; Guedj, Romain; Duguet, Alexandre; de Suremain, Nathalie; Petit, Arnaud
2017-02-01
To determine whether real-time video communication between the first responder and a remote intensivist via Google Glass improves the management of a simulated in-hospital pediatric cardiopulmonary arrest before the arrival of the ICU team. Randomized controlled study. Children's hospital at a tertiary care academic medical center. Forty-two first-year pediatric residents. Pediatric residents were evaluated during two consecutive simulated pediatric cardiopulmonary arrests with a high-fidelity manikin. During the second evaluation, the residents in the Google Glass group were allowed to seek help from a remote intensivist at any time by activating real-time video communication. The residents in the control group were asked to provide usual care. The main outcome measures were the proportion of time for which the manikin received no ventilation (no-blow fraction) or no compression (no-flow fraction). In the first evaluation, overall no-blow and no-flow fractions were 74% and 95%, respectively. During the second evaluation, no-blow and no-flow fractions were similar between the two groups. Insufflations were more effective (p = 0.04), and the technique (p = 0.02) and rate (p < 0.001) of chest compression were more appropriate in the Google Glass group than in the control group. Real-time video communication between the first responder and a remote intensivist through Google Glass did not decrease no-blow and no-flow fractions during the first 5 minutes of a simulated pediatric cardiopulmonary arrest but improved the quality of the insufflations and chest compressions provided.
Supervisory autonomous local-remote control system design: Near-term and far-term applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul
1993-01-01
The JPL Supervisory Telerobotics Laboratory (STELER) has developed a unique local-remote robot control architecture which enables management of intermittent bus latencies and communication delays such as those expected for ground-remote operation of Space Station robotic systems via the TDRSS communication platform. At the local site, the operator updates the work site world model using stereo video feedback and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. The operator can then employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the object under any degree of time-delay. The remote site performs the closed loop force/torque control, task monitoring, and reflex action. This paper describes the STELER local-remote robot control system, and further describes the near-term planned Space Station applications, along with potential far-term applications such as telescience, autonomous docking, and Lunar/Mars rovers.
2009-11-01
…times were shorter, collisions were fewer, and more targets were photographed. Effects of video game experience and spatial ability were also examined. Keywords: spatial ability, video games, user interface, remote control, robot (TR 1230). The AW-VTT is under development by RDECOM-STTC, and ARI is using the AW-VTT to research challenges in the use of distributed, game-based simulations for training.
Researching on the process of remote sensing video imagery
NASA Astrophysics Data System (ADS)
Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan
Unmanned air vehicle remotely sensed imagery acquired at low altitude has the advantages of higher resolution, easy shooting, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of operational limitations, the video images are unstable, the targets move fast, and the shooting background is complex, making such video difficult to process. In other fields, especially computer vision, research on video images is more extensive, which is very helpful for processing low-altitude remotely sensed imagery. Based on this, this paper analyzes and summarizes a large body of video image processing work in different fields, including research purposes, data sources, and the pros and cons of the technology. Meanwhile, this paper explores the technical methods most suitable for low-altitude remote sensing video image processing.
Marchell, Richard; Locatis, Craig; Burges, Gene; Maisiak, Richard; Liu, Wei-Li; Ackerman, Michael
2017-03-01
There is little teledermatology research directly comparing remote methods, even less research with two in-person dermatologist agreement providing a baseline for comparing remote methods, and no research using high definition video as a live interactive method. To compare in-person consultations with store-and-forward and live interactive methods, the latter having two levels of image quality. A controlled study was conducted where patients were examined in-person, by high definition video, and by store-and-forward methods. The order patients experienced methods and residents assigned methods rotated, although an attending always saw patients in-person. The type of high definition video employed, lower resolution compressed or higher resolution uncompressed, was alternated between clinics. Primary and differential diagnoses, biopsy recommendations, and diagnostic and biopsy confidence ratings were recorded. Concordance and confidence were significantly better for in-person versus remote methods and biopsy recommendations were lower. Store-and-forward and higher resolution uncompressed video results were similar and better than those for lower resolution compressed video. Dermatology residents took store-and-forward photos and their quality was likely superior to those normally taken in practice. There were variations in expertise between the attending and second and third year residents. The superiority of in-person consultations suggests the tendencies to order more biopsies or still see patients in-person are often justified in teledermatology and that high resolution uncompressed video can close the resolution gap between store-and-forward and live interactive methods.
Integrated remotely sensed datasets for disaster management
NASA Astrophysics Data System (ADS)
McCarthy, Timothy; Farrell, Ronan; Curtis, Andrew; Fotheringham, A. Stewart
2008-10-01
Video imagery can be acquired from aerial, terrestrial and marine-based platforms and has been exploited for a range of remote sensing applications over the past two decades. Examples include coastal surveys using aerial video, route-corridor infrastructure surveys using vehicle-mounted video cameras, aerial surveys over forestry and agriculture, underwater habitat mapping and disaster management. Many of these video systems are based on interlaced television standards such as North America's NTSC and the European SECAM and PAL systems, recorded in various video formats. This technology has recently been employed as a front-line remote sensing technology for post-disaster damage assessment. This paper traces the development of spatial video as a remote sensing tool from the early 1980s to the present day. The background to a new spatial-video research initiative based at the National University of Ireland, Maynooth (NUIM) is described. New improvements are proposed, addressing low-cost encoders, easy-to-use software decoders, timing issues and interoperability. These developments will enable specialists and non-specialists to collect, process and integrate these datasets with minimal support. This integrated approach will enable decision makers to access relevant remotely sensed datasets quickly and thus carry out rapid damage assessment during and after a disaster.
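The core of such spatial-video integration is associating each video frame with a position. A minimal sketch, assuming 1 Hz GPS fixes linearly interpolated onto a 25 fps frame clock (the coordinates and rates below are illustrative, not from the NUIM system):

```python
from bisect import bisect_left

def geotag_frames(frame_times, gps_fixes):
    """Interpolate GPS fixes (time, lat, lon) onto video frame timestamps,
    producing one (lat, lon) per frame; endpoints are clamped to the track."""
    times = [t for t, _, _ in gps_fixes]
    tagged = []
    for ft in frame_times:
        i = bisect_left(times, ft)
        if i == 0:
            tagged.append(gps_fixes[0][1:])          # before first fix: clamp
        elif i == len(times):
            tagged.append(gps_fixes[-1][1:])         # after last fix: clamp
        else:
            t0, lat0, lon0 = gps_fixes[i - 1]
            t1, lat1, lon1 = gps_fixes[i]
            w = (ft - t0) / (t1 - t0)                # linear blend between fixes
            tagged.append((lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0)))
    return tagged

# Two seconds of 25 fps video over a short 1 Hz GPS track (illustrative values).
gps = [(0.0, 53.380, -6.590), (1.0, 53.381, -6.589), (2.0, 53.382, -6.588)]
frame_times = [i / 25.0 for i in range(50)]
positions = geotag_frames(frame_times, gps)
```

The encoder/decoder work the paper proposes is essentially about carrying this timing relationship robustly inside the video stream so that decoders can recover it without external support.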
VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.
ERIC Educational Resources Information Center
Ekman, Paul; And Others
The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…
ERIC Educational Resources Information Center
Popple, Ben; Wall, Carla; Flink, Lilli; Powell, Kelly; Discepolo, Keri; Keck, Douglas; Mademtzi, Marilena; Volkmar, Fred; Shic, Frederick
2016-01-01
Children with autism have heightened risk of developing oral health problems. Interventions targeting at-home oral hygiene habits may be the most effective means of improving oral hygiene outcomes in this population. This randomized control trial examined the effectiveness of a 3-week video-modeling brushing intervention delivered to patients over…
COTS technologies for telemedicine applications.
Triunfo, Riccardo; Tumbarello, Roberto; Sulis, Alessandro; Zanetti, Gianluigi; Lianas, Luca; Meloni, Vittorio; Frexia, Francesca
2010-01-01
To demonstrate a simple low-cost system for tele-echocardiology, focused on paediatric cardiology applications. The system was realized using open-source software and COTS technologies. It is based on the transmission of two simultaneous video streams, obtained by direct digitization of the output of an ultrasound machine and by a netcam showing the examination that is taking place. These streams are then embedded into a web page so they are accessible, together with basic video controls, via a standard web browser. The system can also record video streams on a server for further use. The system was tested on a small group of neonatal cases with suspected cardiopathies for a preliminary assessment of its features and diagnostic capabilities. Both the clinical and technological results were encouraging and are leading the way for further experimentation. The presented system can transfer clinical images and videos in an efficient way and in real time. It can be used in the same hospital to support internal consultancy requests, in remote areas using Internet connections and for didactic purposes using low cost COTS appliances and simple interfaces for end users. The solution proposed can be extended to control different medical appliances in those remote hospitals.
Movable Cameras And Monitors For Viewing Telemanipulator
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Venema, Steven C.
1993-01-01
Three methods proposed to assist operator viewing telemanipulator on video monitor in control station when video image generated by movable video camera in remote workspace of telemanipulator. Monitors rotated or shifted and/or images in them transformed to adjust coordinate systems of scenes visible to operator according to motions of cameras and/or operator's preferences. Reduces operator's workload and probability of error by obviating need for mental transformations of coordinates during operation. Methods applied in outer space, undersea, in nuclear industry, in surgery, in entertainment, and in manufacturing.
Data Reduction and Control Software for Meteor Observing Stations Based on CCD Video Systems
NASA Technical Reports Server (NTRS)
Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.
2011-01-01
The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.
IMIS: An intelligent microscope imaging system
NASA Technical Reports Server (NTRS)
Caputo, Michael; Hunter, Norwood; Taylor, Gerald
1994-01-01
Until recently, microscope users in space relied on traditional microscopy techniques that required manual operation of the microscope and recording of observations in the form of written notes, drawings, or photographs. This method was time consuming and required the return of film and drawings from space for analysis; no real-time data analysis was possible. Advances in digital and video technologies, along with recent developments in artificial intelligence, will allow future space microscopists a choice of three additional modes of microscopy: remote coaching, remote control, and automation. Remote coaching requires manual operation of the microscope with instructions given by two-way audio/video transmission during critical phases of the experiment. In the remote control mode, the Principal Investigator controls the microscope from the ground. The automated mode employs artificial intelligence to control microscope functions and can also be operated in any of the other three modes. The purpose of this presentation is to discuss the advantages and disadvantages of the four modes of microscopy and how IMIS, a proposed intelligent microscope imaging system, can be used as a model for developing and testing the concepts, operating procedures, and equipment design specifications required to provide a comprehensive microscopy/imaging capability onboard Space Station Freedom.
MIT-NASA/KSC space life science experiments - A telescience testbed
NASA Technical Reports Server (NTRS)
Oman, Charles M.; Lichtenberg, Byron K.; Fiser, Richard L.; Vordermark, Deborah S.
1990-01-01
Experiments performed at MIT to better define Space Station information system telescience requirements for effective remote coaching of astronauts by principal investigators (PIs) on the ground are described. The experiments were conducted via satellite video, data, and voice links to surrogate crewmembers working in a laboratory at NASA's Kennedy Space Center. Teams of two PIs and two crewmembers performed two different space life sciences experiments. During 19 three-hour interactive sessions, a variety of test conditions were explored. Since bit rate limits are necessarily imposed on Space Station video experiments, surveillance video was varied down to 50 Kb/s and the effectiveness of PI-controlled frame rate, resolution, grey scale, and color decimation was investigated. It is concluded that remote coaching by voice works and that dedicated crew-PI voice loops would be of great value on the Space Station.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
NASA Astrophysics Data System (ADS)
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
2017-01-01
Combine harvesters usually work in sparsely populated areas with harsh environments. To achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab, and cutting table. Video data are compressed with the JPEG image compression standard, and the monitoring images are transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first explains the need for such a system, then briefly introduces the hardware and software implementation, and then describes in detail the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the system was installed and commissioned on a combine harvester and tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
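A capture-compress-transmit pipeline of this kind needs a framing layer, since JPEG frames sent over TCP arrive as one undifferentiated byte stream. The sketch below is illustrative Python rather than the authors' embedded C, and the 8-byte header layout (frame id plus payload length, big-endian) is our assumption, not the paper's protocol:

```python
import struct

def pack_frame(jpeg_bytes: bytes, frame_id: int) -> bytes:
    """Length-prefix a JPEG frame so the receiver can split the TCP stream.

    Hypothetical header: 4-byte frame id + 4-byte payload length, big-endian.
    """
    return struct.pack(">II", frame_id, len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Recover (frame_id, payload) tuples from a concatenated byte stream.

    A truncated frame at the tail is left unparsed until more data arrives.
    """
    frames, offset = [], 0
    while offset + 8 <= len(stream):
        frame_id, length = struct.unpack(">II", stream[offset:offset + 8])
        payload = stream[offset + 8:offset + 8 + length]
        if len(payload) < length:  # incomplete tail: wait for more data
            break
        frames.append((frame_id, payload))
        offset += 8 + length
    return frames
```

At 30 fps, any per-frame framing overhead is negligible next to the JPEG payloads themselves, which is why a fixed binary header is a common choice on embedded links.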
Use of an intuitive telemanipulator system for remote trauma surgery: an experimental study.
Bowersox, J C; Cordts, P R; LaPorta, A J
1998-06-01
Death from battlefield trauma occurs rapidly. Potentially salvageable casualties generally exsanguinate from truncal hemorrhage before operative intervention is possible. An intuitive telemanipulator system that would allow distant surgeons to remotely treat injured patients could improve the outcome of severe injuries. We evaluated a prototype four-degree-of-freedom telesurgery system that provides a surgeon with a stereoscopic video display of a remote operative field. Using dexterous robotic manipulators, surgical instruments at the remote site can be precisely controlled, enabling operative procedures to be performed remotely. Surgeons (n = 3) used the telesurgery system to perform organ excision, hemorrhage control, suturing, and knot tying on anesthetized swine. The ability to complete tasks, times required, technical quality, and subjective impressions were recorded. Surgeons using the telesurgery system were able to close gastrotomies remotely, although the times required were 2.7 times as long as with conventional techniques (1,235 ± 165 versus 451 ± 83 seconds, p < 0.002). Cholecystectomies, hemorrhage control from liver lacerations, and enterotomy closures were successfully completed in all attempts. Force feedback and stereoscopic video display were important for achieving intuitive performance with the telesurgery system, although tasks were completed adequately in the absence of these sensory cues. We demonstrated the feasibility of performing standard surgical procedures remotely, with the operating surgeon linked to the distant field only by electronic cabling. Complex manipulations were possible, although the times required were much longer. The capabilities of the system used would not support resuscitative surgery. Telesurgery is unlikely to play a role in early trauma management, but may be a unique research tool for acquiring basic knowledge of operative surgery.
49 CFR 174.67 - Tank car unloading.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (2) Monitored by a signaling system (e.g., video system, sensing equipment, or mechanical equipment... or at a remote location within the facility, such as a control room. The signaling system must— (i...
Locatis, Craig; Burges, Gene; Maisiak, Richard; Liu, Wei-Li; Ackerman, Michael
2017-01-01
Background: There is little teledermatology research directly comparing remote methods, even less research in which agreement between two in-person dermatologists provides a baseline for comparing remote methods, and no research using high-definition video as a live interactive method. Objective: To compare in-person consultations with store-and-forward and live interactive methods, the latter having two levels of image quality. Methods: A controlled study was conducted in which patients were examined in person, by high-definition video, and by store-and-forward methods. The order in which patients experienced the methods, and the residents assigned to each method, rotated, although an attending always saw patients in person. The type of high-definition video employed, lower-resolution compressed or higher-resolution uncompressed, was alternated between clinics. Primary and differential diagnoses, biopsy recommendations, and diagnostic and biopsy confidence ratings were recorded. Results: Concordance and confidence were significantly better for in-person versus remote methods, and biopsy recommendations were lower. Store-and-forward and higher-resolution uncompressed video results were similar and better than those for lower-resolution compressed video. Limitations: Dermatology residents took the store-and-forward photos, and their quality was likely superior to those normally taken in practice. There were variations in expertise between the attending and the second- and third-year residents. Conclusion: The superiority of in-person consultations suggests that the tendencies to order more biopsies or still see patients in person are often justified in teledermatology, and that high-resolution uncompressed video can close the resolution gap between store-and-forward and live interactive methods. PMID:27705083
Popple, Ben; Wall, Carla; Flink, Lilli; Powell, Kelly; Discepolo, Keri; Keck, Douglas; Mademtzi, Marilena; Volkmar, Fred; Shic, Frederick
2016-08-01
Children with autism have a heightened risk of developing oral health problems. Interventions targeting at-home oral hygiene habits may be the most effective means of improving oral hygiene outcomes in this population. This randomized controlled trial examined the effectiveness of a 3-week video-modeling brushing intervention delivered to patients over the internet. Eighteen children with autism were assigned to an Intervention or Control video condition. Links to videos were delivered via email twice daily. Blinded clinical examiners provided plaque index ratings at baseline, midpoint, and endpoint. Results show oral hygiene improvements in both groups, with larger effect sizes in the Intervention condition. The findings provide preliminary support for the use of internet-based interventions to improve oral hygiene for children with autism.
Detection Thresholds for Rotation and Translation Gains in 360° Video-Based Telepresence Systems.
Zhang, Jingxin; Langbehn, Eike; Krupke, Dennis; Katzakis, Nicholas; Steinicke, Frank
2018-04-01
Telepresence systems have the potential to overcome the limits and distance constraints of the real world by enabling people to remotely visit and interact with each other. However, current telepresence systems usually lack natural ways of supporting interaction and exploration of remote environments (REs). In particular, single webcams for capturing the RE provide only a limited illusion of spatial presence, and movement control of mobile platforms in today's telepresence systems is often restricted to simple interaction devices. One of the main challenges of telepresence systems is to allow users to explore an RE in an immersive, intuitive and natural way, e.g., by real walking in the user's local environment (LE), and thus controlling motions of the robot platform in the RE. However, the LE in which the user's motions are tracked usually provides a much smaller interaction space than the RE. In this context, redirected walking (RDW) is a very suitable approach to solve this problem. However, so far there is no previous work exploring if and how RDW can be used in video-based 360° telepresence systems. In this article, we describe two psychophysical experiments in which we quantified how much humans can be unknowingly redirected on virtual paths in the RE that differ from the physical paths they actually walk in the LE. Experiment 1 introduces a discrimination task between local and remote translations, and in Experiment 2 we analyzed the discrimination between local and remote rotations. In Experiment 1, participants performed straightforward translations in the LE that were mapped to straightforward translations in the RE shown as 360° videos, which were manipulated by different gains. Then, participants had to estimate whether the remotely perceived translation was faster or slower than the actual physically performed translation.
Similarly, in Experiment 2 participants performed rotations in the LE that were mapped to the virtual rotations in a 360° video-based RE to which we applied different gains. Again, participants had to estimate whether the remotely perceived rotation was smaller or larger than the actual physically performed rotation. Our results show that participants are not able to reliably discriminate the difference between physical motion in the LE and the virtual motion from the 360° video RE when virtual translations are down-scaled by 5.8% and up-scaled by 9.7%, and virtual rotations are about 12.3% less or 9.2% more than the corresponding physical rotations in the LE.
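The reported thresholds can be restated as gain ranges inside which remapping goes unnoticed. A minimal Python sketch of applying and checking such gains follows; the constants are taken from the abstract, while the function names and the simple multiplicative model are our own:

```python
# Detection thresholds reported in the study: gains inside these ranges
# were not reliably discriminated from unmanipulated motion.
TRANSLATION_GAIN_RANGE = (1 - 0.058, 1 + 0.097)  # 0.942 .. 1.097
ROTATION_GAIN_RANGE = (1 - 0.123, 1 + 0.092)     # 0.877 .. 1.092

def virtual_motion(physical: float, gain: float) -> float:
    """Map a physical translation (m) or rotation (deg) in the LE
    to the corresponding virtual motion shown in the 360° video RE."""
    return physical * gain

def is_unnoticeable(gain: float, kind: str) -> bool:
    """Return True if a gain falls inside the reported detection thresholds."""
    lo, hi = (TRANSLATION_GAIN_RANGE if kind == "translation"
              else ROTATION_GAIN_RANGE)
    return lo <= gain <= hi
```

A redirected-walking controller for such a telepresence system would pick the largest gain still inside these ranges to stretch or compress the robot's path relative to the user's tracked space.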
Unmanned ground vehicles for integrated force protection
NASA Astrophysics Data System (ADS)
Carroll, Daniel M.; Mikell, Kenneth; Denewiler, Thomas
2004-09-01
The combination of Command and Control (C2) systems with Unmanned Ground Vehicles (UGVs) provides Integrated Force Protection from the Robotic Operation Command Center. Autonomous UGVs are directed as Force Projection units. UGV payloads and fixed sensors provide situational awareness while unattended munitions provide a less-than-lethal response capability. Remote resources serve as automated interfaces to legacy physical devices such as manned response vehicles, barrier gates, fence openings, garage doors, and remote power on/off capability for unmanned systems. The Robotic Operations Command Center executes the Multiple Resource Host Architecture (MRHA) to simultaneously control heterogeneous unmanned systems. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. A control hierarchy of missions and duty rosters support autonomous operations. This paper provides an overview of the key technology enablers for Integrated Force Protection with details on a force-on-force scenario to test and demonstrate concept of operations using Unmanned Ground Vehicles. Special attention is given to development and applications for the Remote Detection Challenge and Response (REDCAR) initiative for Integrated Base Defense.
Remote stereoscopic video play platform for naked eyes based on the Android system
NASA Astrophysics Data System (ADS)
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
2014-11-01
As quality of life has improved significantly, traditional 2D video technology can no longer satisfy the desire for better video quality, which has driven the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server transmits video in different formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We use and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of an ordinary player and can play normal 2D video; it serves as the basic structure for redevelopment, with RTSP implemented in this structure for communication. To achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part, which is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom, and nine-grid. The design and development employ a number of key technologies from Android application development, including wireless transmission, pixel restructuring, and JNI calls. After updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.
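The pixel-restructuring step, converting a decoded left-right frame into the column-interleaved layout that a naked-eye (parallax-barrier or lenticular) display expects, can be sketched as follows. This is illustrative Python operating on rows of pixel values; the actual player performs the rearrangement in native code via JNI, and the specific interleaving pattern shown is an assumption, since the abstract does not specify it:

```python
def interleave_columns(frame, width):
    """Convert a side-by-side (left-right) stereo frame to column-interleaved.

    `frame` is a list of rows, each holding `width` pixels: the left half
    from the left view, the right half from the right view. Even output
    columns take left-view pixels and odd columns right-view pixels, the
    layout a parallax-barrier display typically expects.
    """
    half = width // 2
    out = []
    for row in frame:
        left, right = row[:half], row[half:]
        mixed = []
        for i in range(half):
            mixed.append(left[i])
            mixed.append(right[i])
        out.append(mixed)
    return out
```

Top-bottom input would interleave rows instead of columns, and the nine-grid format would tile sub-images, but the per-pixel shuffling idea is the same, which is why the player pushes it down into native code for speed.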
System for training and evaluation of security personnel in use of firearms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, H.F.
This patent describes an interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario, with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee.
System for training and evaluation of security personnel in use of firearms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, H.F.
An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario, with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has drawn an infrared laser handgun from his holster, fired his laser handgun, taken cover, advanced or retreated from the adversary on the screen, and when the adversary has fired his gun at the trainee. 8 figs.
System for training and evaluation of security personnel in use of firearms
Hall, Howard F.
1990-01-01
An interactive video display system comprising a laser disc player with a remote large-screen projector to view life-size video scenarios and a control computer. A video disc has at least one basic scenario and one or more branches of the basic scenario with one or more subbranches from any one or more of the branches and further subbranches, if desired, to any level of programming desired. The control computer is programmed for interactive control of the branching, and control of other effects that enhance the scenario, in response to detection of when the trainee has (1) drawn an infrared laser handgun from his holster, (2) fired his laser handgun, (3) taken cover, (4) advanced or retreated from the adversary on the screen, and (5) when the adversary has fired his gun at the trainee.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
Intelligent viewing control for robotic and automation systems
NASA Astrophysics Data System (ADS)
Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.
1994-10-01
We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide the capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
NASA Technical Reports Server (NTRS)
Cohen, Tamar E.; Lees, David S.; Deans, Matthew C.; Lim, Darlene S. S.; Lee, Yeon Jin Grace
2018-01-01
Exploration Ground Data Systems (xGDS) supports rapid scientific decision making by synchronizing video in context with map, instrument data visualization, geo-located notes and any other collected data. xGDS is an open source web-based software suite developed at NASA Ames Research Center to support remote science operations in analog missions and prototype solutions for remote planetary exploration. (See Appendix B) Typical video systems are designed to play or stream video only, independent of other data collected in the context of the video. Providing customizable displays for monitoring live video and data as well as replaying recorded video and data helps end users build up a rich situational awareness. xGDS was designed to support remote field exploration with unreliable networks. Commercial digital recording systems operate under the assumption that there is a stable and reliable network between the source of the video and the recording system. In many field deployments and space exploration scenarios, this is not the case - there are both anticipated and unexpected network losses. xGDS' Video Module handles these interruptions, storing the available video, organizing and characterizing the dropouts, and presenting the video for streaming or replay to the end user including visualization of the dropouts. Scientific instruments often require custom or expensive software to analyze and visualize collected data. This limits the speed at which the data can be visualized and limits access to the data to those users with the software. xGDS' Instrument Module integrates with instruments that collect and broadcast data in a single snapshot or that continually collect and broadcast a stream of data. While seeing a visualization of collected instrument data is informative, showing the context for the collected data, other data collected nearby along with events indicating current status helps remote science teams build a better understanding of the environment. 
Further, sharing geo-located, tagged notes recorded by the scientists and others on the team spurs deeper analysis of the data.
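The dropout bookkeeping described above amounts to interval arithmetic over the recorded segments. The following is a minimal Python illustration of the idea, not xGDS's actual implementation:

```python
def find_dropouts(segments, session_start, session_end):
    """Compute network-loss gaps in a recorded video session.

    `segments` is a sorted list of (start, end) times in seconds for the
    video that actually reached the recorder. The returned gaps are the
    intervals a replay interface must present to the viewer as dropouts.
    """
    gaps, cursor = [], session_start
    for start, end in segments:
        if start > cursor:          # nothing recorded between cursor and start
            gaps.append((cursor, start))
        cursor = max(cursor, end)   # advance past this segment
    if cursor < session_end:        # loss at the tail of the session
        gaps.append((cursor, session_end))
    return gaps
```

Characterizing dropouts explicitly, rather than silently splicing segments together, lets a replay timeline stay aligned with the other time-stamped data (notes, instrument readings) collected during the gap.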
Video streaming technologies using ActiveX and LabVIEW
NASA Astrophysics Data System (ADS)
Panoiu, M.; Rat, C. L.; Panoiu, C.
2015-06-01
The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data, being incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, possibilities for video streaming, and the methods it does offer are usually not high performance, but it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW and to capture the data provided by it for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.
Implementation of a stereofluoroscopic system
NASA Technical Reports Server (NTRS)
Rivers, D. B.
1976-01-01
Clinical applications of a 3-D video imaging technique developed by NASA for observation and control of remote manipulators are discussed. Incorporation of this technique in a stereo fluoroscopic system provides reduced radiation dosage and greater vision and mobility of the user.
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to remote control of a mobile robot via Google Glass, a compact, multi-function wearable device. The device provides a new human-machine interface (HMI) for controlling a robot without a conventional computer monitor, because the Google Glass micro-projector can display live video of the robot's environment. We first develop a protocol to establish a Wi-Fi connection between Google Glass and the robot, and then implement five robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
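Such a system needs a thin gesture-to-command layer between the touchpad and the Wi-Fi link. The sketch below illustrates one way to structure it in Python; the gesture names, command strings, and newline-terminated wire format are assumptions for illustration, not the paper's actual protocol:

```python
# Hypothetical mapping of Glass touchpad gestures to the five robot
# behaviors named in the paper. Gesture labels are illustrative.
GESTURE_TO_COMMAND = {
    "tap": "PAUSE",
    "swipe_forward": "MOVE_FORWARD",
    "swipe_backward": "MOVE_BACKWARD",
    "swipe_up": "TURN_LEFT",
    "swipe_down": "TURN_RIGHT",
}

def encode_command(gesture: str) -> bytes:
    """Translate a touchpad gesture into a newline-terminated ASCII
    command suitable for a TCP link to the robot.

    Unknown gestures map to PAUSE as a safe default, so an unrecognized
    input can never keep the robot moving.
    """
    return (GESTURE_TO_COMMAND.get(gesture, "PAUSE") + "\n").encode("ascii")
```

Defaulting to a stop/pause command on unrecognized input is a common safety choice for teleoperated platforms, where a dropped or garbled gesture should never translate into continued motion.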
NASA Technical Reports Server (NTRS)
Stoker, Carol
1994-01-01
This paper will describe a series of field experiments to develop and demonstrate the use of Telepresence and Virtual Reality systems for controlling rover vehicles on planetary surfaces. In 1993, NASA Ames deployed a Telepresence-Controlled Remotely Operated underwater Vehicle (TROV) into an ice-covered sea environment in Antarctica. The goal of the mission was to perform scientific exploration of an unknown environment using a remote vehicle with telepresence and virtual reality as a user interface. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center, for over two months. Remote control used a bidirectional Internet link to the vehicle control computer. The operator viewed live stereo video from the TROV along with a computer-generated graphic representation of the underwater terrain showing the vehicle state and other related information. The actual vehicle could be driven either from within the virtual environment or through a telepresence interface. In March 1994, a second field experiment was performed in which the remote control system developed for the Antarctic TROV mission was used to control the Russian Marsokhod Rover, an advanced planetary surface rover intended for launch in 1998. Marsokhod consists of a 6-wheel chassis and is capable of traversing several kilometers of terrain each day. The rover can be controlled remotely, but is also capable of performing autonomous traverses. The rover was outfitted with a manipulator arm capable of deploying a small instrument, collecting soil samples, etc. The Marsokhod rover was deployed at Amboy Crater in the Mojave desert, a Mars analog site, and controlled remotely from Los Angeles in two operating modes: (1) a Mars rover mission simulation with long time delay and (2) a Lunar rover mission simulation with live action video.
A team of planetary geologists participated in the mission simulation. The scientific goal of the mission was to determine what could be learned about the geologic context of the site using the imaging and mobility capabilities provided by the Marsokhod system in these two modes of operation. I will discuss the lessons learned from these experiments in terms of the strategy for performing Mars surface exploration using rovers. This research is supported by the Solar System Exploration Exobiology, Geology, and Advanced Technology programs.
Kim, Changsun; Cha, Hyunmin; Kang, Bo Seung; Choi, Hyuk Joong; Lim, Tae Ho; Oh, Jaehoon
2016-06-01
Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone with control of the image quality given a limited internet connection speed. For this study, 100 cases of echocardiography videos (dynamic organ)-50 with an ejection fraction (EF) of ≥50 % and 50 with EF <50 %-and 100 cases of suspected pediatric appendicitis (static organ)-50 with signs of acute appendicitis and 50 with no findings of appendicitis-were consecutively selected. Twelve reviewers reviewed the original videos using the liquid crystal display (LCD) monitor of an ultrasound machine and using a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949-0.986) and 0.963 (0.945-0.982) (P = 0.548) for echocardiography and 0.972 (0.954-0.989) and 0.966 (0.947-0.984) (P = 0.175) for abdominal ultrasonography. We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography.
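The per-reviewer areas under the ROC curve reported above can be computed from ordinal confidence ratings via the rank (Mann-Whitney) formulation; a minimal sketch with hypothetical five-point Likert scores, not the study's data:

```python
def auc_from_ratings(positive_scores, negative_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher rating than a
    randomly chosen negative case, counting ties as one half."""
    wins = ties = 0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(positive_scores) * len(negative_scores))

# Hypothetical Likert ratings (1-5) for cases with and without appendicitis:
pos = [5, 4, 4, 3, 5]
neg = [2, 1, 3, 2, 1]
print(auc_from_ratings(pos, neg))  # → 0.98
```

Averaging this quantity over reviewers gives the pooled figures compared between the LCD monitor and the smartphone.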
Head-coupled remote stereoscopic camera system for telepresence applications
NASA Astrophysics Data System (ADS)
Bolas, Mark T.; Fisher, Scott S.
1990-09-01
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
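The head-coupled control loop can be sketched as a mapping from tracked head orientation to pan/tilt/roll commands for the camera platform, clamped to mechanical limits; the limit values below are assumptions, not the VIEW hardware's specifications.

```python
def head_to_platform(head_pan, head_tilt, head_roll,
                     pan_limit=170.0, tilt_limit=90.0, roll_limit=45.0):
    """Map head orientation (degrees, from the user's head tracker) to
    camera-platform commands, clamped to hypothetical travel limits of
    the pan/tilt/roll stage."""
    clamp = lambda v, lim: max(-lim, min(lim, v))
    return (clamp(head_pan, pan_limit),
            clamp(head_tilt, tilt_limit),
            clamp(head_roll, roll_limit))
```

In a real system this mapping would run at frame rate so camera motion stays coordinated with the transmitted head position.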
Parker, Alton; Rubinfeld, Ilan; Azuh, Ogochukwu; Blyden, Dionne; Falvo, Anthony; Horst, Mathilda; Velanovich, Vic; Patton, Pat
2010-03-01
Technology currently exists for the application of remote guidance in the laparoscopic operating suite. However, these solutions are costly and require extensive preparation and reconfiguration of current hardware. We propose a solution from existing technology, to send video of laparoscopic cholecystectomy to the Blackberry Pearl device (RIM, Waterloo, ON, Canada) for remote guidance purposes. This technology is time- and cost-efficient, as well as reliable. After identification of the critical maneuver during a laparoscopic cholecystectomy as the division of the cystic duct, we captured a segment of video before its transection. Video was captured using the laparoscopic camera input sent via DVI2USB Solo Frame Grabber (Epiphan, Ottawa, Canada) to a video recording application on a laptop. Seven- to 40-second video clips were recorded. The video clip was then converted to an .mp4 file, uploaded to our server, and a link was sent to the consultant via e-mail. The consultant accessed the file via Blackberry for viewing. After reviewing the video, the consultant was able to confidently comment on the operation. Video clips of approximately 7 to 40 seconds from 10 laparoscopic cholecystectomies were recorded and transferred to the consultant using our method. All 10 video clips were reviewed and deemed adequate for decision making. Remote guidance for laparoscopic cholecystectomy with existing technology can be accomplished with relatively low cost and minimal setup. Additional evaluation of our methods will aim to establish reliability, validity, and accuracy. Using our method, other forms of remote guidance may be feasible, such as other laparoscopic procedures, diagnostic ultrasonography, and remote intensive care unit monitoring. In addition, this method of remote guidance may be extended to centers with smaller budgets, allowing ubiquitous use of neighboring consultants and improved safety for our patients. Copyright (c) 2010 Elsevier Inc. All rights reserved.
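The capture-convert-upload workflow could be scripted along the following lines; the ffmpeg parameters, file names, and server URL are illustrative assumptions, not the authors' exact settings.

```python
def mp4_convert_cmd(raw_clip, out_clip="clip.mp4"):
    """Build an ffmpeg command converting the captured clip to .mp4
    (H.264, downscaled for a small handheld screen; all parameters are
    hypothetical choices, not those used in the study)."""
    return ["ffmpeg", "-i", raw_clip,
            "-vf", "scale=480:-2",        # downscale to 480 px wide
            "-c:v", "libx264", "-preset", "fast",
            out_clip]

def review_link_email(server_url, out_clip):
    """Compose the e-mail body containing the link the consultant opens."""
    return f"Video ready for review: {server_url}/{out_clip}"
```

The command list would be passed to `subprocess.run` after capture, and the resulting file copied to the server before the link is mailed.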
14. NBS REMOTE MANIPULATOR SIMULATOR (RMS) CONTROL ROOM. THE RMS ...
14. NBS REMOTE MANIPULATOR SIMULATOR (RMS) CONTROL ROOM. THE RMS CONTROL PANEL IS IDENTICAL TO THE SHUTTLE ORBITER AFT FLIGHT DECK WITH ALL RMS SWITCHES AND CONTROL KNOBS FOR INVOKING ANY POSSIBLE FLIGHT OPERATIONAL MODE. THIS INCLUDES ALL COMPUTER AIDED OPERATIONAL MODES, AS WELL AS FULL MANUAL MODE. THE MONITORS IN THE AFT FLIGHT DECK WINDOWS AND THE GLASSES THE OPERATOR WEARS PROVIDE A 3-D VIDEO PICTURE TO AID THE OPERATOR WITH DEPTH PERCEPTION WHILE OPERATING THE ARM. THIS IS REQUIRED BECAUSE THE RMS OPERATOR CANNOT VIEW RMS MOVEMENTS IN THE WATER WHILE AT THE CONTROL PANEL. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production.
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Li, Chengqing; Lü, Jinhu; Wang, Qianxue
This paper proposes a chaotic secure video remote communication scheme that can perform on real WAN networks, and implements it on a smartphone hardware platform. First, a joint encryption and compression scheme is designed by embedding a chaotic encryption scheme into the MJPG-Streamer source codes. Then, multiuser smartphone communications between the sender and the receiver are implemented via WAN remote transmission. Finally, the transmitted video data are received with the given IP address and port in an Android smartphone. It should be noted that this is the first time that chaotic video encryption schemes have been implemented on such a hardware platform. The experimental results demonstrate that the technical challenges on hardware implementation of secure video communication are successfully solved, reaching a balance amongst sufficient security level, real-time processing of massive video data, and utilization of available resources in the hardware environment. The proposed scheme can serve as a good application example of chaotic secure communications for smartphone and other mobile facilities in the future.
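A toy version of the core idea, encrypting a frame payload by XOR with a keystream drawn from the logistic map, can be sketched as follows; this is a minimal illustration of chaotic stream encryption, not the authors' cipher or the MJPG-Streamer integration.

```python
def logistic_keystream(x0, mu, n):
    """Byte keystream from the logistic map x -> mu*x*(1-x); the seed x0
    and parameter mu play the role of a shared chaotic key."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, x0=0.3141, mu=3.99):
    """Encrypt or decrypt a frame payload by XOR with the chaotic
    keystream; applying the same operation twice restores the plaintext."""
    ks = logistic_keystream(x0, mu, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because XOR is its own inverse, the receiver regenerates the identical keystream from the shared (x0, mu) and applies `xor_cipher` again to recover the frame.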
Becker, R L; Specht, C S; Jones, R; Rueda-Pedraza, M E; O'Leary, T J
1993-08-01
We investigated the use of remote video microscopy (telepathology) to assist in the diagnosis of 52 neurosurgical frozen section cases. The TelMed system (Discovery Medical Systems, Overland Park, KS), in which the referring pathologist selects appropriate fields for transmission to the consultant, was used for the study. There was a high degree of concordance between the diagnosis rendered on the basis of transmitted video images and that rendered on the basis of direct evaluation of frozen sections; however, in seven cases there was substantial disagreement. Remote evaluation was associated with a more rapid consultation from the standpoint of the consultant, who spent approximately 2 minutes less per case when using remote microscopy; this was achieved at the expense of considerably greater effort on the part of the referring pathologist, who spent approximately 16 minutes per case selecting an average of 4.5 images for transmission to the consultant. The use of remote video microscopy for pathology consultation is associated with a complex series of tradeoffs involving cost, information loss, and timeliness of consultation.
Xiao, Y; MacKenzie, C; Orasanu, J; Spencer, R; Rahman, A; Gunawardane, V
1999-01-01
To determine what information sources are used during a remote diagnosis task. Experienced trauma care providers viewed segments of videotaped initial trauma patient resuscitation and airway management. Experiment 1 collected responses from anesthesiologists to probing questions during and after the presentation of recorded video materials. Experiment 2 collected the responses from three types of care providers (anesthesiologists, nurses, and surgeons). Written and verbal responses were scored according to detection of critical events in video materials and categorized according to their content. Experiment 3 collected visual scanning data using an eyetracker during the viewing of recorded video materials from the three types of care providers. Eye-gaze data were analyzed in terms of focus on various parts of the videotaped materials. Care providers were found to be unable to detect several critical events. The three groups of subjects studied (anesthesiologists, nurses, and surgeons) focused on different aspects of videotaped materials. When the remote events and activities are multidisciplinary and rapidly changing, experts linked with audio-video-data connections may encounter difficulties in comprehending remote activities, and their information usage may be biased. Special training is needed for the remote decision-maker to appreciate tasks outside his or her speciality and beyond the boundaries of traditional divisions of labor.
Cardiac ultrasonography over 4G wireless networks using a tele-operated robot
Panayides, Andreas S.; Jossif, Antonis P.; Christoforou, Eftychios G.; Vieyres, Pierre; Novales, Cyril; Voskarides, Sotos; Pattichis, Constantinos S.
2016-01-01
This Letter proposes an end-to-end mobile tele-echography platform using a portable robot for remote cardiac ultrasonography. Performance evaluation investigates the capacity of long-term evolution (LTE) wireless networks to facilitate responsive robot tele-manipulation and real-time ultrasound video streaming that qualifies for clinical practice. Within this context, a thorough video coding standards comparison for cardiac ultrasound applications is performed, using a data set of ten ultrasound videos. Both objective and subjective (clinical) video quality assessment demonstrate that H.264/AVC and high efficiency video coding standards can achieve diagnostically-lossless video quality at bitrates well within the LTE supported data rates. Most importantly, reduced latencies experienced throughout the live tele-echography sessions allow the medical expert to remotely operate the robot in a responsive manner, using the wirelessly communicated cardiac ultrasound video to reach a diagnosis. Based on preliminary results documented in this Letter, the proposed robotised tele-echography platform can provide for reliable, remote diagnosis, achieving comparable quality of experience levels with in-hospital ultrasound examinations. PMID:27733929
Hand held phase-shifting diffraction moire interferometer
Deason, Vance A.; Ward, Michael B.
1994-01-01
An interferometer in which a coherent beam of light is generated within a remote case and transmitted to a hand held unit tethered to said remote case, said hand held unit having optical elements for directing a pair of mutually coherent collimated laser beams at a diffraction grating. Data from the secondary or diffracted beams are then transmitted to a separate video and data acquisition system for recording and analysis for load induced deformation or for identification purposes. Means are also provided for shifting the phase of one incident beam relative to the other incident beam and being controlled from within said remote case.
Teaching surgical skills using video internet communication in a resource-limited setting.
Autry, Amy M; Knight, Sharon; Lester, Felicia; Dubowitz, Gerald; Byamugisha, Josaphat; Nsubuga, Yosam; Muyingo, Mark; Korn, Abner
2013-07-01
To study the feasibility and acceptability of using video Internet communication to teach and evaluate surgical skills in a low-resource setting. This case-controlled study used video Internet communication for surgical skills teaching and evaluation. We randomized intern physicians rotating in the Obstetrics and Gynecology Department at Mulago Hospital at Makerere University in Kampala, Uganda, to the control arm (usual practice) or intervention arm (three video teaching sessions with University of California, San Francisco faculty). We made preintervention and postintervention videos of all interns tying knots using a small video camera and uploaded the files to a file hosting service that offers cloud storage. A blinded faculty member graded all of the videos. Both groups completed a survey at the end of the study. We randomized 18 interns with complete data for eight in the intervention group and seven in the control group. We found score improvement of 50% or more in six of eight (75%) interns in the intervention group compared with one of seven (14%) in the control group (P=.04). Scores declined in five of the seven (71%) controls but in none in the intervention group. Both intervention and control groups used attendings, colleagues, and the Internet as sources for learning about knot-tying. The control group was less likely to practice knot-tying than the intervention group. The trainees and the instructors felt this method of training was enjoyable and helpful. Remote teaching in low-resource settings, where faculty time is limited and access to visiting faculty is sporadic, is feasible, effective, and well-accepted by both learner and teacher. II.
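With 6 of 8 intervention interns improving versus 1 of 7 controls, the reported P=.04 is consistent with a two-sided Fisher exact test on the 2x2 table, which can be computed directly from hypergeometric probabilities:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def p(k):  # probability of k in the top-left cell
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)
    p_obs = p(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs + 1e-12)

# Improved vs. not improved: 6/8 intervention interns, 1/7 controls
print(round(fisher_exact_two_sided(6, 2, 1, 6), 2))  # → 0.04
```

This reproduces the significance level quoted in the abstract under the usual two-sided convention.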
Movement Right from the Start: Physical Activity for Young Students
ERIC Educational Resources Information Center
Morgan, Deborah H.; Morgan, Don W.
2012-01-01
In today's technology-driven society, children often sit for hours in front of a screen (e.g., computer, TV, video game), exercising only their fingers as they manipulate the keyboard, remote control, or game controller. This sedentary lifestyle contributes to the growing problem of childhood obesity. Data from the U.S. Centers for Disease Control…
Design and Evaluation of an Integrated Online Motion Control Training Package
ERIC Educational Resources Information Center
Buiu, C.
2009-01-01
The aim of this paper is to present an integrated Internet-based package for teaching the fundamentals of motion control by using a wide range of resources: theory, videos, simulators, games, quizzes, and a remote lab. The package is aimed at automation technicians, pupils at vocational schools and students taking an introductory course in…
SATCOM Supply Versus Demand and the Impact on Remotely Piloted Aircraft ISR
2016-03-01
produced a laser transmitter called OPALS, which successfully transmitted both text and video from the International Space Station to a ground control...station. In one test, a video which took 12 hours to upload via traditional radio frequency was downloaded in a mere seven seconds using OPALS.52 ESA...enhanced Ku-band Intelsat Epic satellites to be launched in 2016 will provide 200 Mbps downlink data rate, while OPALS and EDRS provide 1.8 Gbps downlink
Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard
2009-08-01
To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% reporting no concerns and 42% little concern. The top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of the monitoring video, with residual levels of concern. OR nurses may express staff privacy concern in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies of video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a sense of loss of autonomy.
Deployable reconnaissance from a VTOL UAS in urban environments
NASA Astrophysics Data System (ADS)
Barnett, Shane; Bird, John; Culhane, Andrew; Sharkasi, Adam; Reinholtz, Charles
2007-04-01
Reconnaissance collection in unknown or hostile environments can be a dangerous and life threatening task. To reduce this risk, the Unmanned Systems Group at Virginia Tech has produced a fully autonomous reconnaissance system able to provide live video reconnaissance from outside and inside unknown structures. This system consists of an autonomous helicopter which launches a small reconnaissance pod inside a building and an operator control unit (OCU) on a ground station. The helicopter is a modified Bergen Industrial Twin using a Rotomotion flight controller and can fly missions of up to one half hour. The mission planning OCU can control the helicopter remotely through teleoperation or fully autonomously by GPS waypoints. A forward facing camera and template matching aid in navigation by identifying the target building. Once the target structure is identified, vision algorithms will center the UAS adjacent to open windows or doorways. Tunable parameters in the vision algorithm account for varying launch distances and opening sizes. Launch of the reconnaissance pod may be initiated remotely through a human in the loop or autonomously. Compressed air propels the half pound stationary pod or the larger mobile pod into the open portals. Once inside the building, the reconnaissance pod will then transmit live video back to the helicopter. The helicopter acts as a repeater node for increased video range and simplification of communication back to the ground station.
Laniel, Sebastien; Letourneau, Dominic; Labbe, Mathieu; Grondin, Francois; Polgar, Janice; Michaud, Francois
2017-07-01
A telepresence mobile robot is a remote-controlled, wheeled device with wireless internet connectivity for bidirectional audio, video and data transmission. In health care, a telepresence robot could be used to have a clinician or a caregiver assist seniors in their homes without having to travel to these locations. Many mobile telepresence robotic platforms have recently been introduced on the market, bringing mobility to telecommunication and vital sign monitoring at reasonable costs. What is missing for making them effective remote telepresence systems for home care assistance are capabilities specifically needed to assist the remote operator in controlling the robot and perceiving the environment through the robot's sensors or, in other words, minimizing cognitive load and maximizing situation awareness. This paper describes our approach adding navigation, artificial audition and vital sign monitoring capabilities to a commercially available telepresence mobile robot. This requires the use of a robot control architecture to integrate the autonomous and teleoperation capabilities of the platform.
NASA Lewis' Telescience Support Center Supports Orbiting Microgravity Experiments
NASA Technical Reports Server (NTRS)
Hawersaat, Bob W.
1998-01-01
The Telescience Support Center (TSC) at the NASA Lewis Research Center was developed to enable Lewis-based science teams and principal investigators to monitor and control experimental and operational payloads onboard the International Space Station. The TSC is a remote operations hub that can interface with other remote facilities, such as universities and industrial laboratories. As a pathfinder for International Space Station telescience operations, the TSC has incrementally developed an operational capability by supporting space shuttle missions. The TSC has evolved into an environment where experimenters and scientists can control and monitor the health and status of their experiments in near real time. Remote operations (or telescience) allow local scientists and their experiment teams to minimize their travel and maintain a local complement of expertise for hardware and software troubleshooting and data analysis. The TSC was designed, developed, and is operated by Lewis' Engineering and Technical Services Directorate and its support contractors, Analex Corporation and White's Information System, Inc. It is managed by Lewis' Microgravity Science Division. The TSC provides operational support in conjunction with the NASA Marshall Space Flight Center and NASA Johnson Space Center. It enables its customers to command, receive, and view telemetry; monitor the science video from their on-orbit experiments; and communicate over mission-support voice loops. Data can be received and routed to experimenter-supplied ground support equipment and/or to the TSC data system for display. Video teleconferencing capability and other video sources, such as NASA TV, are also available. The TSC has a full complement of standard services to aid experimenters in telemetry operations.
New information technology tools for a medical command system for mass decontamination.
Fuse, Akira; Okumura, Tetsu; Hagiwara, Jun; Tanabe, Tomohide; Fukuda, Reo; Masuno, Tomohiko; Mimura, Seiji; Yamamoto, Kaname; Yokota, Hiroyuki
2013-06-01
In a mass decontamination during a nuclear, biological, or chemical (NBC) response, the capability to command, control, and communicate is crucial for the proper flow of casualties at the scene and their subsequent evacuation to definitive medical facilities. Information Technology (IT) tools can be used to strengthen medical control, command, and communication during such a response. Novel IT tools comprise a vehicle-based, remote video camera and communication network systems. During an on-site verification event, an image from a remote video camera system attached to the personal protective garment of a medical responder working in the warm zone was transmitted to the on-site Medical Commander for aid in decision making. Similarly, a communication network system was used for personnel at the following points: (1) the on-site Medical Headquarters; (2) the decontamination hot zone; (3) an on-site coordination office; and (4) a remote medical headquarters of a local government office. A specially equipped, dedicated vehicle was used for the on-site medical headquarters, and facilitated the coordination with other agencies. The use of these IT tools proved effective in assisting with the medical command and control of medical resources and patient transport decisions during a mass-decontamination exercise, but improvements are required to overcome transmission delays and camera direction settings, as well as network limitations in certain areas.
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high-quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second, and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user-friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW-driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
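One simple way to estimate an eye-rotation angle from a digitized frame, thresholding the dark pupil and taking the angle of its centroid offset from the image center, can be sketched as below; this is a toy stand-in, not the original LabVIEW algorithm.

```python
from math import atan2, degrees

def pupil_rotation_angle(frame, threshold=50):
    """Estimate gaze rotation from one digitized grayscale frame (a list
    of pixel rows, 0-255): threshold the dark pupil, compute its
    centroid, and report the angle of the centroid's offset from the
    image center in degrees."""
    dark = [(y, x) for y, row in enumerate(frame)
            for x, v in enumerate(row) if v < threshold]
    cy = sum(p[0] for p in dark) / len(dark)
    cx = sum(p[1] for p in dark) / len(dark)
    h, w = len(frame), len(frame[0])
    return degrees(atan2(cy - (h - 1) / 2, cx - (w - 1) / 2))
```

Running this per frame at one play-back frame per second would yield the time series of eye positions the experiments require.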
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, D.W.; Johnston, W.E.; Hall, D.E.
1990-03-01
We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.
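A socket transport for image frames like the one described might use length-prefixed messages so the receiver can delimit each frame on the byte stream; a minimal sketch, not the original RPC code:

```python
import socket
import struct

def send_frame(sock, frame_bytes):
    """Length-prefix a video frame (network byte order) so the receiver
    knows exactly how many bytes belong to it."""
    sock.sendall(struct.pack("!I", len(frame_bytes)) + frame_bytes)

def recv_frame(sock):
    """Read exactly one length-prefixed frame from the stream socket."""
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack("!I", header)
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    """Loop until n bytes arrive; recv() may return short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf
```

Streaming frames this way over a TCP connection is the byte-level plumbing beneath both the RPC and raw-socket variants mentioned above.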
Professional Development for Rural and Remote Teachers Using Video Conferencing
ERIC Educational Resources Information Center
Maher, Damian; Prescott, Anne
2017-01-01
Teachers in rural and remote schools face many challenges including those relating to distance, isolation and lack of professional development opportunities. This article examines a project where mathematics and science teachers were provided with professional development opportunities via video conferencing to help them use syllabus documents to…
Preliminary experience with a stereoscopic video system in a remotely piloted aircraft application
NASA Technical Reports Server (NTRS)
Rezek, T. W.
1983-01-01
Remote piloting video display development at the Dryden Flight Research Facility of NASA's Ames Research Center is summarized, and the reasons for considering stereo television are presented. Pertinent equipment is described. Limited flight experience is also discussed, along with recommendations for further study.
Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI.
Stawicki, Piotr; Gembler, Felix; Volosyak, Ivan
2016-01-01
Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on the steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface (with live video feedback and stimulation boxes on the same screen) for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
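The reported ITR follows the standard Wolpaw formula; with the study's four classes and 93.03% mean accuracy, an assumed selection time of 6.5 s gives a rate close to the reported mean (the selection time is an illustrative assumption, not a figure from the paper):

```python
from math import log2

def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate: bits carried by one selection
    among n_classes at the given accuracy, scaled to a per-minute rate."""
    p = accuracy
    bits = log2(n_classes)
    if 0 < p < 1:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / seconds_per_selection

# Four-class SSVEP interface at the study's mean accuracy of 93.03%,
# with a hypothetical 6.5 s per selection:
print(round(itr_bits_per_min(4, 0.9303, 6.5), 2))  # → 14.07
```

The formula makes explicit how ITR trades off the number of targets, classification accuracy, and selection speed.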
Hand held phase-shifting diffraction Moire interferometer
Deason, V.A.; Ward, M.B.
1994-09-20
An interferometer is described in which a coherent beam of light is generated within a remote case and transmitted to a hand held unit tethered to said remote case, said hand held unit having optical elements for directing a pair of mutually coherent collimated laser beams at a diffraction grating. Data from the secondary or diffracted beams are then transmitted to a separate video and data acquisition system for recording and analysis for load induced deformation or for identification purposes. Means are also provided for shifting the phase of one incident beam relative to the other incident beam and being controlled from within said remote case. 4 figs.
Real-time WebRTC-based design for a telepresence wheelchair.
Van Kha Ly Ha; Rifai Chai; Nguyen, Hung T
2017-07-01
This paper presents a novel approach to the telepresence wheelchair system which is capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient way for assisted-living of people with disabilities. The proposed system has been designed and developed by deploying the JavaScript with Hyper Text Markup Language 5 (HTML5) and Web Real-time Communication (WebRTC) in which the adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, and the wheelchair was controlled from a distance using the Internet browser to compare with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high-quality with approximately 30 frames per second (fps) and round trip time less than 20 milliseconds (ms). These performance results confirm that the WebRTC approach is a potential method for developing a telepresence wheelchair system.
Development of a stereofluoroscopy system
NASA Technical Reports Server (NTRS)
Rivers, D. B.
1979-01-01
A technique of 3-D video imaging, was developed for use on manned missions for observation and control of remote manipulators. An improved medical diagnostic fluoroscope with a stereo, real-time output was also developed. An explanation of how this system works, and recommendations for future work in this area are presented.
Video PATSEARCH: A Mixed-Media System.
ERIC Educational Resources Information Center
Schulman, Jacque-Lynne
1982-01-01
Describes a videodisc-based information display system in which a computer terminal is used to search the online PATSEARCH database from a remote host with local microcomputer control to select and display drawings from the retrieved records. System features and system components are discussed and criteria for system evaluation are presented.…
VIS: Technology for Multicultural Teacher Education.
ERIC Educational Resources Information Center
Bruning, Merribeth J.
1992-01-01
Video Information Systems (VIS) is fiber optics network that connects campus classrooms to VIS central library. Remotely controlled by instructors, VIS incorporates use of number of audiovisual materials and can be used in cross-cultural training in which visual aids assist in showing cultural differences. VIS assists in education of future…
Remote Video Monitor of Vehicles in Cooperative Information Platform
NASA Astrophysics Data System (ADS)
Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan
Detection of vehicles plays an important role in the area of the modern intelligent traffic management. And the pattern recognition is a hot issue in the area of computer vision. An auto- recognition system in cooperative information platform is studied. In the cooperative platform, 3G wireless network, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitor and M-DMB networks are integrated. The remote video information can be taken from the terminals and sent to the cooperative platform, then detected by the auto-recognition system. The images are pretreated and segmented, including feature extraction, template matching and pattern recognition. The system identifies different models and gets vehicular traffic statistics. Finally, the implementation of the system is introduced.
Telerobot local-remote control architecture for space flight program applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John
1993-01-01
The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.
Remote Video Supervision in Adapted Physical Education
ERIC Educational Resources Information Center
Kelly, Luke; Bishop, Jason
2013-01-01
Supervision for beginning adapted physical education (APE) teachers and inservice general physical education teachers who are learning to work with students with disabilities poses a number of challenges. The purpose of this article is to describe a project aimed at developing a remote video system that could be used by a university supervisor to…
Formal Verification of a Power Controller Using the Real-Time Model Checker UPPAAL
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Larsen, Kim Guldstrand; Skou, Arne
1999-01-01
A real-time system for power-down control in audio/video components is modeled and verified using the real-time model checker UPPAAL. The system is supposed to reside in an audio/video component and control (read from and write to) links to neighbor audio/video components such as TV, VCR and remote-control. In particular, the system is responsible for the powering up and down of the component in between the arrival of data, and in order to do so in a safe way without loss of data, it is essential that no link interrupts are lost. Hence, a component system is a multitasking system with hard real-time requirements, and we present techniques for modeling time consumption in such a multitasked, prioritized system. The work has been carried out in a collaboration between Aalborg University and the audio/video company B&O. By modeling the system, 3 design errors were identified and corrected, and the following verification confirmed the validity of the design but also revealed the necessity for an upper limit of the interrupt frequency. The resulting design has been implemented and it is going to be incorporated as part of a new product line.
Improved Techniques for Video Compression and Communication
ERIC Educational Resources Information Center
Chen, Haoming
2016-01-01
Video compression and communication has been an important field over the past decades and critical for many applications, e.g., video on demand, video-conferencing, and remote education. In many applications, providing low-delay and error-resilient video transmission and increasing the coding efficiency are two major challenges. Low-delay and…
Shenai, Mahesh B; Tubbs, R Shane; Guthrie, Barton L; Cohen-Gadol, Aaron A
2014-08-01
The shortage of surgeons compels the development of novel technologies that geographically extend the capabilities of individual surgeons and enhance surgical skills. The authors have developed "Virtual Interactive Presence" (VIP), a platform that allows remote participants to simultaneously view each other's visual field, creating a shared field of view for real-time surgical telecollaboration. The authors demonstrate the capability of VIP to facilitate long-distance telecollaboration during cadaveric dissection. Virtual Interactive Presence consists of local and remote workstations with integrated video capture devices and video displays. Each workstation mutually connects via commercial teleconferencing devices, allowing worldwide point-to-point communication. Software composites the local and remote video feeds, displaying a hybrid perspective to each participant. For demonstration, local and remote VIP stations were situated in Indianapolis, Indiana, and Birmingham, Alabama, respectively. A suboccipital craniotomy and microsurgical dissection of the pineal region was performed in a cadaveric specimen using VIP. Task and system performance were subjectively evaluated, while additional video analysis was used for objective assessment of delay and resolution. Participants at both stations were able to visually and verbally interact while identifying anatomical structures, guiding surgical maneuvers, and discussing overall surgical strategy. Video analysis of 3 separate video clips yielded a mean compositing delay of 760 ± 606 msec (when compared with the audio signal). Image resolution was adequate to visualize complex intracranial anatomy and provide interactive guidance. Virtual Interactive Presence is a feasible paradigm for real-time, long-distance surgical telecollaboration. Delay, resolution, scaling, and registration are parameters that require further optimization, but are within the realm of current technology. 
The paradigm potentially enables remotely located experts to mentor less experienced personnel located at the surgical site with applications in surgical training programs, remote proctoring for proficiency, and expert support for rural settings and across different counties.
Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook
2014-01-01
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data. PMID:25225874
Mehmood, Irfan; Sajjad, Muhammad; Baik, Sung Wook
2014-09-15
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote monitoring health-services. However, during the WCE process, the large amount of captured video data demands a significant deal of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing task, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the beginning of computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles are replacing joystick buttons with novel interfaces such as the remote controllers with motion sensing technology on the Nintendo Wii [1] Especially video-based human computer interaction (HCI) technique has been applied to games, and the representative game is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI technique has great benefit to release players from the intractable game controller. Moreover, in order to communicate between humans and computers, video-based HCI is very crucial since it is intuitive, easy to get, and inexpensive. On the one hand, extracting semantic low-level features from video human motion data is still a major challenge. The level of accuracy is really dependent on each subject's characteristic and environmental noises. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g, 'Tiger Woods' in EA Sports, 'Angelina Jolie' in Bear-Wolf movie) and analyzing motions for specific performance (e.g, 'golf swing' and 'walking'). 3D motion-capture system ('VICON') generates a matrix for each motion clip. Here, a column is corresponding to a human's sub-body part and row represents time frames of data capture. Thus, we can extract sub-body part's motion only by selecting specific columns. Different from low-level feature values of video human motion, 3D human motion-capture data matrix are not pixel values, but is closer to human level of semantics.
Development and testing for physical security robots
NASA Astrophysics Data System (ADS)
Carroll, Daniel M.; Nguyen, Chinh; Everett, H. R.; Frederick, Brian
2005-05-01
The Mobile Detection Assessment Response System (MDARS) provides physical security for Department of Defense bases and depots using autonomous unmanned ground vehicles (UGVs) to patrol the site while operating payloads for intruder detection and assessment, barrier assessment, and product assessment. MDARS is in the System Development and Demonstration acquisition phase and is currently undergoing developmental testing including an Early User Appraisal (EUA) at the Hawthorne Army Depot, Nevada-the world's largest army depot. The Multiple Resource Host Architecture (MRHA) allows the human guard force to command and control several MDARS platforms simultaneously. The MRHA graphically displays video, map, and status for each resource using wireless digital communications for integrated data, video, and audio. Events are prioritized and the user is prompted with audio alerts and text instructions for alarms and warnings. The MRHA also interfaces to remote resources to automate legacy physical devices such as fence gate controls, garage doors, and remote power on/off capability for the MDARS patrol units. This paper provides an overview and history of the MDARS program and control station software with details on the installation and operation at Hawthorne Army Depot, including discussions on scenarios for EUA excursions. Special attention is given to the MDARS technical development strategy for spiral evolutions.
[Communication subsystem design of tele-screening system for diabetic retinopathy].
Chen, Jian; Pan, Lin; Zheng, Shaohua; Yu, Lun
2013-12-01
A design scheme of a tele-screening system for diabetic retinopathy (DR) has been proposed, especially the communication subsystem. The scheme uses serial communication module consisting of ARM 7 microcontroller and relays to connect remote computer and fundus camera, and also uses C++ programming language based on MFC to design the communication software consisting of therapy and diagnostic information module, video/audio surveillance module and fundus camera control module. The scheme possesses universal property in some remote medical treatment systems which are similar to the system.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
... FEDERAL COMMUNICATIONS COMMISSION [MB Docket No. 12-230; DA 12-1347] Media Bureau Seeks Comment on TiVo's Request for Clarification and Waiver of the Commission's Audiovisual Output Requirement AGENCY... provides for audiovisual communications including service discovery, video transport, and remote control...
NASA Astrophysics Data System (ADS)
Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.
2005-04-01
Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependant on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.
Photo-acoustic and video-acoustic methods for sensing distant sound sources
NASA Astrophysics Data System (ADS)
Slater, Dan; Kozacik, Stephen; Kelmelis, Eric
2017-05-01
Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects. Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed. RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing readout device that recovers the distant audio. These two elements are passively coupled over long distances at the speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel and acoustic beam forming are all possible using RAS techniques and when combined with high-definition video imagery it can help to provide a more cinema like immersive viewing experience. A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The acoustic influence on the optical signal is generally weak and often with a strong bias term. The optical signal is further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straight forward ability to work with multiple AOMs (useful for acoustic beam forming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. 
However, doing so requires overcoming significant limitations typically including much lower sample rates, reduced sensitivity and dynamic range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM currently is a powerful tool for the visual enhancement of atmospheric turbulence distorted telescopic views. In order to explore the potential of acoustic signal recovery from video imagery we modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
Telepresence in neurosurgery: the integrated remote neurosurgical system.
Kassell, N F; Downs, J H; Graves, B S
1997-01-01
This paper describes the Integrated Remote Neurosurgical System (IRNS), a remotely-operated neurosurgical microscope with high-speed communications and a surgeon-accessible user interface. The IRNS will allow high quality bidirectional mentoring in the neurosurgical suite. The research goals of this effort are twofold: to develop a clinical system allowing a remote neurosurgeon to lend expertise to the OR-based neurosurgical team and to provide an integrated training environment. The IRNS incorporates a generic microscope/transport model, Called SuMIT (Surgical Manipulator Interface Translator). Our system is currently under test using the Zeiss MKM surgical transport. A SuMIT interface is also being constructed for the Robotics Research 1607. The IRNS Remote Planning and Navigation Workstation incorporates surgical planning capabilities, real-time, 30 fps video from the microscope and overhead video camera. The remote workstation includes a force reflecting handcontroller which gives the remote surgeon an intuitive way to position the microscope head. Bidirectional audio, video whiteboarding, and image archiving are also supported by the remote workstation. A simulation mode permits pre-surgical simulation, post-surgical critique, and training for surgeons without access to an actual microscope transport system. The components of the IRNS are integrated using ATM switching to provide low latency data transfer. The research, along with the more sophisticated systems that will follow, will serve as a foundation and test-bed for extending the surgeon's skills without regard to time zone or geographic boundaries.
Nintendo related injuries and other problems: review.
Jalink, Maarten B; Heineman, Erik; Pierie, Jean-Pierre E N; ten Cate Hoedemaker, Henk O
2014-12-16
To identify all reported cases of injury and other problems caused by using a Nintendo video gaming system. Review. Search of PubMed and Embase in June 2014 for reports on injuries and other problems caused by using a Nintendo gaming system. Most of the 38 articles identified were case reports or case series. Injuries and problems ranged from neurological and psychological to surgical. Traditional controllers with buttons were associated with tendinitis of the extensor of the thumb. The joystick on the Nintendo 64 controller was linked to palmar ulceration. The motion sensitive Wii remote was associated with musculoskeletal problems and various traumas. Most problems are mild and prevalence is low. The described injuries were related to the way the games are controlled, which varies according to the video game console. © Jalink et al 2014.
Improvement of Hungarian Joint Terminal Attack Program
2013-06-13
LST Laser Spot Tracker NVG Night Vision Goggle ROMAD Radio Operator Maintainer and Driver ROVER Remotely Operated Video Enhanced Receiver TACP...visual target designation. The other component consists of a laser spot tracker (LST), which identifies targets by tracking laser energy reflecting...capability for every type of night time missions, laser spot tracker for laser spot search missions, remotely operated video enhanced receiver
Is This Real Life? Is This Just Fantasy?: Realism and Representations in Learning with Technology
NASA Astrophysics Data System (ADS)
Sauter, Megan Patrice
Students often engage in hands-on activities during science learning; however, financial and practical constraints often limit the availability of these activities. Recent advances in technology have led to increases in the use of simulations and remote labs, which attempt to recreate hands-on science learning via computer. Remote labs and simulations are interesting from a cognitive perspective because they allow for different relations between representations and their referents. Remote labs are unique in that they provide a yoked representation, meaning that the representation of the lab on the computer screen is actually linked to that which it represents: a real scientific device. Simulations merely represent the lab and are not connected to any real scientific devices. However, the type of visual representations used in the lab may modify the effects of the lab technology. The purpose of this dissertation is to examine the relation between representation and technology and its effects of students' psychological experiences using online science labs. Undergraduates participated in two studies that investigated the relation between technology and representation. In the first study, participants performed either a remote lab or a simulation incorporating one of two visual representations, either a static image or a video of the equipment. Although participants in both lab conditions learned, participants in the remote lab condition had more authentic experiences. However, effects were moderated by the realism of the visual representation. Participants who saw a video were more invested and felt the experience was more authentic. In a second study, participants performed a remote lab and either saw the same video as in the first study, an animation, or the video and an animation. Most participants had an authentic experience because both representations evoked strong feelings of presence. 
However, participants who saw the video were more likely to believe the remote technology was real. Overall, the findings suggest that participants' experiences with technology were shaped by representation. Students had more authentic experiences using the remote lab than the simulation. However, incorporating visual representations that enhance presence made these experiences even more authentic and meaningful than afforded by the technology alone.
Armellino, Donna; Hussain, Erfan; Schilling, Mary Ellen; Senicola, William; Eichorn, Ann; Dlugacz, Yosef; Farber, Bruce F
2012-01-01
Hand hygiene is a key measure in preventing infections. We evaluated healthcare worker (HCW) hand hygiene with the use of remote video auditing with and without feedback. The study was conducted in an 17-bed intensive care unit from June 2008 through June 2010. We placed cameras with views of every sink and hand sanitizer dispenser to record hand hygiene of HCWs. Sensors in doorways identified when an individual(s) entered/exited. When video auditors observed a HCW performing hand hygiene upon entering/exiting, they assigned a pass; if not, a fail was assigned. Hand hygiene was measured during a 16-week period of remote video auditing without feedback and a 91-week period with feedback of data. Performance feedback was continuously displayed on electronic boards mounted within the hallways, and summary reports were delivered to supervisors by electronic mail. During the 16-week prefeedback period, hand hygiene rates were less than 10% (3933/60 542) and in the 16-week postfeedback period it was 81.6% (59 627/73 080). The increase was maintained through 75 weeks at 87.9% (262 826/298 860). The data suggest that remote video auditing combined with feedback produced a significant and sustained improvement in hand hygiene.
Video monitoring system for car seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A video monitoring system for use with a child car seat has video camera(s) mounted in the car seat. The video images are wirelessly transmitted to a remote receiver/display encased in a portable housing that can be removably mounted in the vehicle in which the car seat is installed.
Using remote underwater video to estimate freshwater fish species richness.
Ebner, B C; Morgan, D L
2013-05-01
Species richness records from replicated deployments of baited remote underwater video stations (BRUVS) and unbaited remote underwater video stations (UBRUVS) in shallow (<1 m) and deep (>1 m) water were compared with those obtained from using fyke nets, gillnets and beach seines. Maximum species richness (14 species) was achieved through a combination of conventional netting and camera-based techniques. Chanos chanos was the only species not recorded on camera, whereas Lutjanus argentimaculatus, Selenotoca multifasciata and Gerres filamentosus were recorded on camera in all three waterholes but were not detected by netting. BRUVSs and UBRUVSs provided versatile techniques that were effective at a range of depths and microhabitats. It is concluded that cameras warrant application in aquatic areas of high conservation value with high visibility. Non-extractive video methods are particularly desirable where threatened species are a focus of monitoring or might be encountered as by-catch in net meshes. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
Effect of a novel video game on stroke knowledge of 9- to 10-year-old, low-income children.
Williams, Olajide; Hecht, Mindy F; DeSorbo, Alexandra L; Huq, Saima; Noble, James M
2014-03-01
Improving actionable stroke knowledge of a witness or bystander, which in some cases are children, may improve response to an acute stroke event. We used a quasiexperimental pre-test post-test design to evaluate actionable stroke knowledge of 210 children aged 9 to 10 years in response to a single, 15-minute exposure to a stroke education video game conducted in the school computer laboratory. After immediate post-test, we provided remote password-protected online video game access and encouraged children to play at their leisure from home. An unannounced delayed post-test occurred 7 weeks later. Two hundred ten children completed pretest, 205 completed immediate post-test, whereas 198 completed delayed post-test. One hundred fifty-six (74%) children had Internet access at home, and 41 (26%), mostly girls, played the video game remotely. There was significant improvement in stroke symptom composite scores, calling 911, and all individual stroke knowledge items, including a distractor across the testing sequence (P<0.05). Children who played the video game remotely demonstrated significant improvement in knowledge of 1 symptom (sudden imbalance) compared with children who did not (P<0.05), although overall composite scores showed no difference. Stroke education video games may represent novel means for improving and sustaining actionable stroke knowledge of children.
The utilization of video-conference shared medical appointments in rural diabetes care.
Tokuda, Lisa; Lorenzo, Lenora; Theriault, Andre; Taveira, Tracey H; Marquis, Lynn; Head, Helene; Edelman, David; Kirsh, Susan R; Aron, David C; Wu, Wen-Chih
2016-09-01
To explore whether video-conference shared medical appointments (video-SMA), in which group education and medication titration are provided remotely through video-conferencing technology, would improve diabetes outcomes in remote rural settings. We conducted a pilot study in which a team consisting of a clinical pharmacist and a nurse practitioner from Honolulu VA hospital remotely delivered video-SMA in diabetes to Guam. Patients with diabetes and HbA1c ≥7% were enrolled into the study during 2013-2014. Six groups of 4-6 subjects attended 4 weekly sessions, followed by 2 bi-monthly booster video-SMA sessions over 5 months. Patients with HbA1c ≥7% who had primary care visits during the study period but were not referred or recruited for video-SMA were selected as usual-care comparators. We compared changes from baseline in HbA1c, blood pressure, and lipid levels between the video-SMA and usual-care groups using mixed-effect modeling. We also analyzed emergency department (ED) visits and hospitalizations. Focus groups were conducted to understand patients' perceptions. Thirty-one patients received video-SMA, and charts of 69 subjects were abstracted as usual-care. After 5 months, there was a significant decline in HbA1c in video-SMA vs. usual-care (9.1±1.9 to 8.3±1.8 vs. 8.6±1.4 to 8.7±1.6, P=0.03). No significant change in blood pressure or lipid levels was found between the groups. Patients in the video-SMA group had significantly lower rates of ED visits (3.2% vs. 17.4%, P=0.01) than usual-care but similar hospitalization rates. Focus groups suggested patient satisfaction with video-SMA and an increase in self-efficacy in diabetes self-care. Video-SMA is feasible, well received, and has the potential to improve diabetes outcomes in a rural setting. Published by Elsevier Ireland Ltd.
Wrap-Around Out-the-Window Sensor Fusion System
NASA Technical Reports Server (NTRS)
Fox, Jeffrey; Boe, Eric A.; Delgado, Francisco; Secor, James B.; Clark, Michael R.; Ehlinger, Kevin D.; Abernathy, Michael F.
2009-01-01
The Advanced Cockpit Evaluation System (ACES) includes communication, computing, and display subsystems, mounted in a van, that synthesize out-the-window views to approximate the outside world as it would be seen from the cockpit of a crewed spacecraft or aircraft, or from the operator station of a remotely controlled ground vehicle or UAV (unmanned aerial vehicle). The system includes five flat-panel display units arranged approximately in a semicircle around an operator, like cockpit windows. The scene displayed on each panel represents the view through the corresponding cockpit window. Each display unit is driven by a personal computer equipped with a video-capture card that accepts live input from any of a variety of sensors (typically, visible and/or infrared video cameras). Software running in the computers blends the live video images with synthetic images that could be generated, for example, from heads-up-display outputs, waypoints, corridors, or satellite photographs of the same geographic region. Data from a Global Positioning System receiver and an inertial navigation system aboard the remote vehicle are used by the ACES software to keep the synthetic and live views in registration. If the live image were to fail, the synthetic scenes could still be displayed to maintain situational awareness.
Heart rate measurement based on face video sequence
NASA Astrophysics Data System (ADS)
Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian
2015-03-01
This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
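As a hedged illustration of the remote-PPG idea underlying both methods (not the paper's BSST or CSPT algorithms, whose details are not given here), the sketch below recovers a pulse rate from the dominant spectral peak of a simulated mean green-channel trace of a face region; the 30 fps frame rate and 72 bpm pulse are assumed values.

```python
import numpy as np

# Assumed camera frame rate and a simulated 72 bpm pulse (both illustrative)
fps = 30.0
t = np.arange(0, 20, 1 / fps)          # 20 s of frames (600 samples)
pulse_hz = 72 / 60.0

# Simulated mean green-channel intensity of the face region per frame:
# a weak pulsatile component buried in sensor noise
rng = np.random.default_rng(0)
green = (0.5 + 0.01 * np.sin(2 * np.pi * pulse_hz * t)
         + 0.005 * rng.standard_normal(t.size))

# Find the dominant spectral peak inside the physiological band 0.7-4 Hz
spectrum = np.abs(np.fft.rfft(green - green.mean()))
freqs = np.fft.rfftfreq(green.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(round(bpm))  # recovers the simulated 72 bpm
```

With real video, the per-frame green value would come from averaging a detected face region, and motion artifacts would make band-limiting and source separation far more important than in this clean simulation.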
Wiimote Experiments: Circular Motion
ERIC Educational Resources Information Center
Kouh, Minjoon; Holz, Danielle; Kawam, Alae; Lamont, Mary
2013-01-01
The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a…
Wiimote Experiments: Circular Motion
NASA Astrophysics Data System (ADS)
Kouh, Minjoon; Holz, Danielle; Kawam, Alae; Lamont, Mary
2013-03-01
The advent of new sensor technologies can provide new ways of exploring fundamental physics. In this paper, we show how a Wiimote, which is a handheld remote controller for the Nintendo Wii video game system with an accelerometer, can be used to study the dynamics of circular motion with a very simple setup such as an old record player or a bicycle wheel.
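The measurement such a setup makes can be checked against the textbook relation a = ω²r: an accelerometer fixed at radius r on a turntable spinning at angular speed ω reads a constant centripetal acceleration. A short worked example (the 33 1/3 rpm speed and 10 cm radius are illustrative values, not from the paper):

```python
import math

def centripetal_acceleration(rpm, radius_m):
    """a = omega^2 * r, with omega converted from rev/min to rad/s."""
    omega = 2 * math.pi * rpm / 60.0
    return omega ** 2 * radius_m

# A Wiimote fixed 10 cm from the centre of a record player at 33 1/3 rpm
a = centripetal_acceleration(100.0 / 3.0, 0.10)
print(round(a, 2))  # about 1.22 m/s^2, i.e. roughly 0.12 g
```

Plotting the accelerometer trace against this prediction for several radii or turntable speeds is the kind of comparison the experiment invites.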
GOOSE CAM: The Development of a Practical Underwater Exploration Platform
ERIC Educational Resources Information Center
Miller, William R.; Mitchell, Colleen; Miller, Jeffrey D.
2009-01-01
We challenged an Aquatic Biology class to find a way to access, observe, and record aquatic habitats and organisms without causing disruption. Using off-the-shelf components, the class was guided in the design and assembly of a remote-controlled, video-broadcasting, data-collecting, floating vehicle based on a molded goose decoy. GOOSE-CAM or…
Use of remote video auditing to validate Ebola level II personal protective equipment competency.
Allar, Peter J; Frank-Cooper, Madalyn
2015-06-01
Faced with an Ebola-related mandate to regularly train frontline hospital staff with the donning and doffing of personal protective equipment, a community hospital's emergency department implemented remote video auditing (RVA) to assist in the training and remediation of its nursing staff. RVA was found to be useful in assessing performance and facilitating remediation. Copyright 2015, SLACK Incorporated.
Technologies and Techniques for Supporting Facilitated Video
ERIC Educational Resources Information Center
Linnell, Natalie
2011-01-01
Worldwide, demand for education of all kinds is increasing beyond the capacity to provide it. One approach that shows potential for addressing this demand is facilitated video. In facilitated video, an educator is recorded teaching, and that video is sent to a remote site where it is shown to students by a facilitator who creates interaction…
Remote driving with reduced bandwidth communication
NASA Technical Reports Server (NTRS)
Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.
1993-01-01
Oak Ridge National Laboratory has developed a real-time video transmission system for low-bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation to the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high-mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video-quality adjustments among frame rate, image detail, and foveation rate. A typical configuration used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
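To make the pyramid idea concrete, here is a minimal NumPy sketch of a Laplacian decomposition: blur and downsample, keep the residual detail at each level, and reconstruct by summing back up. It illustrates the principle only; the filters, subband selection, and quantisation of the ORNL system are not specified here, and the simple block-average filter is an assumption.

```python
import numpy as np

def downsample(img):
    """Crude 2x blur-and-decimate: average each 2x2 block."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour 2x expansion, cropped back to `shape`."""
    big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return big[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Each entry holds the detail lost by one downsample step;
    the last entry is the coarse residual image."""
    pyramid = []
    for _ in range(levels):
        small = downsample(img)
        pyramid.append(img - upsample(small, img.shape))
        img = small
    pyramid.append(img)
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition: upsample the coarse image and add back detail."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = upsample(img, detail.shape) + detail
    return img

frame = np.random.default_rng(1).random((64, 64))
pyr = laplacian_pyramid(frame)
print(np.allclose(reconstruct(pyr), frame))  # True: lossless before quantisation
```

Bandwidth savings come from the fact that the detail subbands are mostly near zero and compress well, and that a system like the one described can transmit only selected subbands, difference them between frames, and refresh periodically.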
Remotely accessible laboratory for MEMS testing
NASA Astrophysics Data System (ADS)
Sivakumar, Ganapathy; Mulsow, Matthew; Melinger, Aaron; Lacouture, Shelby; Dallas, Tim E.
2010-02-01
We report on the construction of a remotely accessible and interactive laboratory for testing microdevices (also known as microelectromechanical systems, or MEMS). Enabling expanded utilization of microdevices for research, commercial, and educational purposes is very important for driving the creation of future MEMS devices and applications. Unfortunately, the relatively high costs associated with MEMS devices and testing infrastructure make widespread access to the world of MEMS difficult. The creation of a virtual lab to control and actuate MEMS devices over the internet helps spread knowledge to a larger audience. A host laboratory has been established that contains a digital microscope, microdevices, controllers, and computers that can be logged into through the internet. The overall layout of the tele-operated MEMS laboratory can be divided into two major parts: the server side and the client side. The server side, located at Texas Tech University, hosts a Linux server machine that interfaces the MEMS lab with the outside world via the internet. Controls from the clients are transferred to the lab side through the server interface. The server interacts with the electronics required to drive the MEMS devices using a range of National Instruments hardware and LabView Virtual Instruments. An optical microscope (100×) with a CCD video camera is used to capture images of the operating MEMS. The server broadcasts the live video stream over the internet to the clients through the website. When a control button is pressed on the website, the MEMS device responds and the video stream shows the movement in close to real time.
Remote environmental sensor array system
NASA Astrophysics Data System (ADS)
Hall, Geoffrey G.
This thesis examines the creation of an environmental monitoring system for inhospitable environments, named the Remote Environmental Sensor Array System, or RESA System for short. The thesis covers the development of RESA from its inception through the design and modeling of the hardware and software required to make it functional. Finally, the actual manufacture and laboratory testing of the finished RESA product are discussed and documented. The RESA System is designed as a cost-effective way to bring sensors and video systems to the underwater environment. It contains a water-quality probe with sensors for dissolved oxygen, pH, temperature, specific conductivity, oxidation-reduction potential and chlorophyll a. In addition, an omnidirectional hydrophone is included to detect underwater acoustic signals. It has a colour high-definition camera and a low-light black-and-white camera, which in turn are coupled to a laser scaling system. Both high-intensity discharge and halogen lighting systems are included to illuminate the video images. The video and laser scaling systems are manoeuvred using pan and tilt units controlled from an underwater computer box. Finally, a sediment profile imager is included to enable profile images of sediment layers to be acquired. A control and manipulation system to control the instruments and move the data across networks is integrated into the underwater system, while a power distribution node provides the correct voltages to power the instruments. Laboratory testing was completed to ensure that the different instruments associated with the RESA performed as designed. This included physical testing of the motorized instruments, calibration of the instruments, benchmark performance testing and system failure exercises.
Remote Visualization and Remote Collaboration On Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).
Mobile Care (Moca) for Remote Diagnosis and Screening
Celi, Leo Anthony; Sarmenta, Luis; Rotberg, Jhonathan; Marcelo, Alvin; Clifford, Gari
2010-01-01
Moca is a cell phone-facilitated clinical information system to improve diagnostic, screening and therapeutic capabilities in remote resource-poor settings. The software allows transmission of any medical file, whether a photo, x-ray, audio or video file, through a cell phone to (1) a central server for archiving and incorporation into an electronic medical record (to facilitate longitudinal care, quality control, and data mining), and (2) a remote specialist for real-time decision support (to leverage expertise). The open source software is designed as an end-to-end clinical information system that seamlessly connects health care workers to medical professionals. It is integrated with OpenMRS, an existing open source medical records system commonly used in developing countries. PMID:21822397
Istepanian, R S H; Philip, N
2005-01-01
In this paper we describe some of the optimisation issues relevant to the requirements of high throughput of medical data and video streaming traffic in 3G wireless environments. In particular, we present a challenging 3G mobile health care application that requires demanding 3G medical data throughput. We also describe the 3G QoS requirements of the mObile Tele-Echography ultra-Light rObot system (OTELO), which is designed to provide seamless 3G connectivity for real-time ultrasound medical video streams and diagnosis from a remote site (robot and patient station), manipulated by an expert side (specialists) that controls the robotic scanning operation and presents real-time feedback diagnosis using 3G wireless communication links.
A teleoperated system for remote site characterization
NASA Technical Reports Server (NTRS)
Sandness, Gerald A.; Richardson, Bradley S.; Pence, Jon
1994-01-01
The detection and characterization of buried objects and materials is an important step in the restoration of burial sites containing chemical and radioactive waste materials at Department of Energy (DOE) and Department of Defense (DOD) facilities. By performing these tasks with remotely controlled sensors, it is possible to obtain improved data quality and consistency as well as enhanced safety for on-site workers. Therefore, the DOE Office of Technology Development and the US Army Environmental Center have jointly supported the development of the Remote Characterization System (RCS). One of the main components of the RCS is a small remotely driven survey vehicle that can transport various combinations of geophysical and radiological sensors. Currently implemented sensors include ground-penetrating radar, magnetometers, an electromagnetic induction sensor, and a sodium iodide radiation detector. The survey vehicle was constructed predominantly of non-metallic materials to minimize its effect on the operation of its geophysical sensors. The system operator controls the vehicle from a remote, truck-mounted, base station. Video images are transmitted to the base station by a radio link to give the operator necessary visual information. Vehicle control commands, tracking information, and sensor data are transmitted between the survey vehicle and the base station by means of a radio ethernet link. Precise vehicle tracking coordinates are provided by a differential Global Positioning System (GPS).
Nintendo related injuries and other problems: review
Heineman, Erik; Pierie, Jean-Pierre E N; ten Cate Hoedemaker, Henk O
2014-01-01
Objective To identify all reported cases of injury and other problems caused by using a Nintendo video gaming system. Design Review. Data sources and review methods Search of PubMed and Embase in June 2014 for reports on injuries and other problems caused by using a Nintendo gaming system. Results Most of the 38 articles identified were case reports or case series. Injuries and problems ranged from neurological and psychological to surgical. Traditional controllers with buttons were associated with tendinitis of the extensor of the thumb. The joystick on the Nintendo 64 controller was linked to palmar ulceration. The motion sensitive Wii remote was associated with musculoskeletal problems and various traumas. Conclusions Most problems are mild and prevalence is low. The described injuries were related to the way the games are controlled, which varies according to the video game console. PMID:25515525
iPhone otoscopes: Currently available, but reliable for tele-otoscopy in the hands of parents?
Shah, Manan Udayan; Sohal, Maheep; Valdez, Tulio A; Grindle, Christopher R
2018-03-01
Tele-otoscopy has been validated for tympanostomy surveillance and remote diagnosis when images are recorded by trained professionals. The CellScope iPhone Otoscope is a device that may be used for tele-otoscopy; it enables parents to record their children's ear examinations and send the films for remote physician diagnosis. This study aims to determine the ability to diagnose, and the reliability of the diagnosis, when utilizing video exams obtained by a parent versus video exams obtained by an otolaryngologist. Parents of children aged 17 years or younger attempted recordings of the tympanic membrane of their children with the CellScope after a video tutorial; a physician subsequently used the device to record the same ear. Recordings occurred prior to standard pediatric otolaryngology office evaluation. Later, a remote pediatric otolaryngologist attempted diagnosis solely based on the videos, blinded to whether the examination was filmed by a parent or physician. Interrater reliability between video diagnosis and the original diagnosis on pneumatic otoscopy was measured, and objective tympanic membrane landmarks visualized on the films were recorded. Eighty ears were enrolled and recorded. There was low interrater agreement (k = 0.42) between diagnosis based on parent videos as compared with pneumatic otoscopy. There was high agreement (k = 0.71) between diagnosis based on physician videos and pneumatic otoscopy. Physician videos and parent videos had only slight agreement on objective landmarks identified (k = 0.087). iPhone otoscopy provides reliable tele-otoscopy images when used by trained professionals, but currently, images obtained by parents are not suitable for use in diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Fatehi, Farhad; Martin-Khan, Melinda; Gray, Leonard C; Russell, Anthony W
2014-02-14
An estimated 366 million people are living with diabetes worldwide and it is predicted that its prevalence will increase to 552 million by 2030. Management of this disease and its complications is a challenge for many countries. Optimal glycaemic control is necessary to minimize complications, but less than 70% of diabetic patients achieve target levels of blood glucose, partly due to poor access to qualified health care providers. Telemedicine has the potential to improve access to health care, especially for rural and remote residents. Video teleconsultation, a real-time (or synchronous) mode of telemedicine, is gaining more popularity around the world through recent improvements in digital telecommunications. If video consultation is to be offered as an alternative to face-to-face consultation in diabetes assessment and management, then it is important to demonstrate that this can be achieved without loss of clinical fidelity. This paper describes the protocol of a randomised controlled trial for assessing the reliability of remote video consultation for people with diabetes. A total of 160 people with diabetes will be randomised into either a Telemedicine or a Reference group. Participants in the Reference group will receive two sequential face-to-face consultations whereas in the Telemedicine group one consultation will be conducted face-to-face and the other via videoconference. The primary outcome measure will be a change in the patient's medication. Secondary outcome measures will be findings in physical examination, detecting complications, and patient satisfaction. A difference of less than 20% in the aggregated level of agreement between the two study groups will be used to identify if videoconference is non-inferior to the traditional mode of clinical care (face-to-face).
Despite rapid growth in application of telemedicine in a variety of medical specialties, little is known about the reliability of videoconferencing for remote consultation of people with diabetes. Results of this proposed study will provide evidence of the reliability of specialist consultation offered by videoconference for people with diabetes. Australian New Zealand Clinical Trials Registry ACTRN12612000315819.
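The protocol's decision rule, that videoconference is judged non-inferior when the aggregated level of agreement between the two groups differs by less than 20%, can be expressed as a simple check. Interpreting the margin as a difference in agreement proportions is an assumption, and the values below are hypothetical, not trial results:

```python
def non_inferior(reference_agreement, telemedicine_agreement, margin=0.20):
    """Declare non-inferiority when the telemedicine arm's aggregated
    agreement is within `margin` (here, 20 percentage points) of the
    reference (face-to-face) arm."""
    return (reference_agreement - telemedicine_agreement) < margin

# Hypothetical aggregated agreement proportions for the two arms
print(non_inferior(0.85, 0.78))  # True: a 7-point difference is within the margin
print(non_inferior(0.85, 0.60))  # False: a 25-point difference exceeds it
```

A full analysis would attach a confidence interval to the difference rather than compare point estimates, as non-inferiority trials conventionally do.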
Buvik, Astrid; Bugge, Einar; Knutsen, Gunnar; Småbrekke, Arvid; Wilsgaard, Tom
2016-09-08
Decentralised services using outreach clinics or modern technology are methods to reduce both patient transports and costs to the healthcare system. Telemedicine consultations via videoconference are one such modality. Before new technologies are implemented, it is important to investigate both the quality of care given and the economic impact of the new technology. The aim of this clinical trial was to study the quality of planned remote orthopaedic consultations conducted by videoconference. We performed a randomised controlled trial (RCT) with two parallel groups: video-assisted remote consultations at a regional medical centre (RMC) as an intervention versus standard consultation in the orthopaedic outpatient clinic at the University Hospital of North Norway (UNN) as a control. The participants were patients referred to or scheduled for a consultation at the orthopaedic outpatient clinic. The orthopaedic surgeons evaluated each consultation they performed by completing a questionnaire. The primary outcome measurement was the difference in the sum score calculated from this questionnaire, evaluated for non-inferiority of the intervention group. The study design was based on the intention-to-treat principle. Ancillary analyses regarding complications, the number of consultations per patient, operations, patients who were referred again and the duration of consultations were performed. Four hundred patients were randomised via a web-based system. Of these, 199 (98 %) underwent remote consultation and 190 (95 %) underwent standard consultation. The primary outcome, the sum score of the specialist evaluation, was significantly lower (i.e. 'better') at UNN compared to RMC (1.72 versus 1.82, p = 0.0030). The 90 % confidence interval (CI) for the difference in score (0.05, 0.17) was within the non-inferiority margin. The orthopaedic surgeons involved evaluated 98 % of the video-assisted consultations as 'good' or 'very good'. 
In the ancillary analyses, there was no significant difference between the two groups. This study supports the argument that it is safe to offer video-assisted consultations for selected orthopaedic patients. We did not find any serious events related to the mode of consultation. Further assessments of the economic aspects and patient satisfaction are needed before we can recommend its wider application. ClinicalTrials.gov identifier: NCT00616837.
Remote gaming on resource-constrained devices
NASA Astrophysics Data System (ADS)
Reza, Waazim; Kalva, Hari; Kaufman, Richard
2010-08-01
Games have become important applications on mobile devices. A mobile gaming approach known as remote gaming is being developed to support games on low cost mobile devices. In the remote gaming approach, the responsibility of rendering a game and advancing the game play is put on remote servers instead of the resource constrained mobile devices. The games rendered on the servers are encoded as video and streamed to mobile devices. Mobile devices gather user input and stream the commands back to the servers to advance game play. With this solution, mobile devices with video playback and network connectivity can become game consoles. In this paper we present the design and development of such a system and evaluate the performance and design considerations to maximize the end user gaming experience.
Fishery research in the Great Lakes using a low-cost remotely operated vehicle
Kennedy, Gregory W.; Brown, Charles L.; Argyle, Ray L.
1988-01-01
We used a MiniROVER MK II remotely operated vehicle (ROV) to collect ground-truth information on fish and their habitat in the Great Lakes that has traditionally been collected by divers, with static cameras, or from submersibles. The ROV, powered by 4 thrusters and controlled by a pilot at the surface, was portable and efficient to operate throughout the Great Lakes in 1987, and collected a total of 30 h of video data recorded for later analysis. We collected 50% more substrate information per unit of effort with the ROV than with static cameras. Fish behavior ranged from no avoidance reaction in ambient light to erratic responses in the vehicle lights. The ROV's field of view depended on the time of day, light levels, and density of zooplankton. Quantification of the data collected with the ROV (either physical samples or video image data) will serve to enhance the use of the ROV as a tool for fishery research on the Great Lakes.
Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator
NASA Technical Reports Server (NTRS)
Ranadive, V.; Sheridan, T. B.
1981-01-01
The product of Frame Rate (F) in frames per second, Resolution (R) in total pixels, and Grayscale (G) in bits equals the transmission bit rate in bits per second. Thus, for a fixed channel capacity there are tradeoffs among F, R and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in the master/slave mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels and 16 levels (4 bits) of grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution and grayscale could be independently reduced without preventing the operator from accomplishing his or her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well-trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/s.
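The stated relationship, frame rate × resolution × grayscale bits = transmission bit rate, can be verified with a short calculation using the study's full-quality settings; the degraded combination shown is an illustrative example near the reported ~50 kbit/s threshold, not one of the study's actual test conditions.

```python
def bit_rate(frames_per_s, total_pixels, gray_bits):
    """Raw video transmission bit rate: F * R * G, in bits per second."""
    return frames_per_s * total_pixels * gray_bits

# Full-quality picture from the study: 28 fps, 128 x 128 pixels, 4-bit grayscale
full = bit_rate(28, 128 * 128, 4)
print(full)      # 1835008 bits/s, i.e. about 1.8 Mbit/s

# An illustrative degraded combination near the ~50 kbit/s threshold:
# 4 fps, 64 x 64 pixels, 3-bit grayscale
degraded = bit_rate(4, 64 * 64, 3)
print(degraded)  # 49152 bits/s
```

The roughly 37:1 ratio between the two configurations shows how much room the three axes jointly give an operator before the performance thresholds are reached.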
OzBot and haptics: remote surveillance to physical presence
NASA Astrophysics Data System (ADS)
Mullins, James; Fielding, Mick; Nahavandi, Saeid
2009-05-01
This paper reports on robotic and haptic technologies and capabilities developed for the law enforcement and defence community within Australia by the Centre for Intelligent Systems Research (CISR). The OzBot series of small and medium surveillance robots have been designed in Australia and evaluated by law enforcement and defence personnel to determine suitability and ruggedness in a variety of environments. Using custom-developed digital electronics and featuring expandable data buses including RS485, I2C, RS232, video and Ethernet, the robots can be directly connected to many off-the-shelf payloads such as gas sensors, x-ray sources and camera systems including thermal and night vision. Differentiating the OzBot platform from its peers is its ability to be integrated directly with haptic technology, or the 'haptic bubble', developed by CISR. Haptic interfaces allow an operator to physically 'feel' remote environments through position-force control and experience realistic force feedback. By adding the capability to remotely grasp an object and feel its weight, texture and other physical properties in real time from the remote ground control unit, an operator's situational awareness is greatly improved through haptic augmentation in an environment where remote-system feedback is often limited.
Airborne Navigation Remote Map Reader Evaluation.
1986-03-01
James C. Byrd, Integrated Controls/Displays Branch, Avionics Systems Division, Directorate of Avionics Engineering, March 1986, Final Report. [Contents residue: 3.1 Resolution; 3.2 Accuracy; 3.3 Symbology; 3.4 Video Standard; 3.5 Simulator Control Box; 3.6 Software; 3.7 Display Performance; 3.8 Reliability.] ...can be selected depending on the detail required and will automatically be presented at his present position. The French RMR uses a Flying Spot Scanner.
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed in the Mechanical Engineering Department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free-flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotic system. The head-tracking camera system moves stereo cameras mounted on a three-degree-of-freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence and teleoperations for space.
NASA Technical Reports Server (NTRS)
1994-01-01
This video contains two segments: one a 0:01:50 spot and the other a 0:08:21 feature. Dante 2, an eight-legged walking machine, is shown during field trials as it explores the inner depths of an active volcano at Mount Spurr, Alaska. A NASA-sponsored team at Carnegie Mellon University built Dante to withstand earth's harshest conditions, to deliver a science payload to the interior of a volcano, and to report on its journey to the floor of a volcano. Remotely controlled from 80 miles away, the robot explored the inner depths of the volcano, and information from onboard video cameras and sensors was relayed via satellite to scientists in Anchorage. There, using a computer-generated image, controllers tracked the robot's movement. Ultimately the robot team hopes to apply the technology to future planetary missions.
Smartphone based automatic organ validation in ultrasound video.
Vaish, Pallavi; Bharath, R; Rajalakshmi, P
2017-07-01
Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, the ultrasound videos scanned by untrained persons often do not contain the information a physician requires. Compared with standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow us to perform demanding medical image processing on the smartphone itself. In this paper we have developed an application (app) for a smartphone that automatically detects the valid frames (with clear organ visibility) in an ultrasound video, ignores the invalid frames (with no organ visibility), and produces a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
NASA Astrophysics Data System (ADS)
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
2014-05-01
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a 'fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability.
The third tier of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.
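The three-tier policy described above can be sketched as a small layer-to-route assignment. This is a schematic sketch under stated assumptions: the layer names, layer counts, and route lists are illustrative, not values from the paper; the only property taken from the text is that lower layers get the most reliable routes and higher tiers send more layers.

```python
# Sketch of the three-tier H.264/SVC delivery policy: pick which
# scalable layers to send per tier, then assign them to routes
# sorted in descending order of reliability.
from typing import List, Tuple

def plan_delivery(tier: int, layers: List[str],
                  routes: List[str]) -> List[Tuple[str, str]]:
    """Return (layer, route) pairs; routes[0] is the most reliable."""
    if tier == 1:
        # Tier 1: situational overview, base layer only.
        selected = layers[:1]
    elif tier == 2:
        # Tier 2: 'fidelity control' -- add SNR layers but keep the
        # frame rate low by omitting temporal enhancement layers.
        selected = [l for l in layers if not l.startswith("temporal")]
    else:
        # Tier 3: all available layers for the highest quality.
        selected = layers
    # Base and lower layers are paired with the most reliable routes.
    return list(zip(selected, routes))

layers = ["base", "snr-1", "snr-2", "temporal-1"]
routes = ["route-A", "route-B", "route-C", "route-D"]  # most reliable first
print(plan_delivery(2, layers, routes))
```

The zip pairing encodes the descending-reliability rule: the base layer always travels on the most reliable route, and each added enhancement layer takes the next-best route.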
NASA Astrophysics Data System (ADS)
Kottmann, R.; Ratmeyer, V.; Pop Ristov, A.; Boetius, A.
2012-04-01
More and more seagoing scientific expeditions use video-controlled research platforms such as Remotely Operated Vehicles (ROVs), Autonomous Underwater Vehicles (AUVs), and towed camera systems. These produce many hours of video material containing detailed and scientifically highly valuable footage of the biological, chemical, geological, and physical aspects of the oceans. Many of the videos contain unique observations of unknown life-forms which are rare and cannot be sampled and studied otherwise. To make such video material accessible online and to create a collaborative annotation environment, the "Video Annotation and processing platform" (V-App) was developed. A first, solely web-based installation for ROV videos has been set up at the German Center for Marine Environmental Sciences (available at http://videolib.marum.de). It allows users to search and watch videos with a standard web browser based on the HTML5 standard. Moreover, V-App implements social web technologies allowing a distributed, world-wide scientific community to collaboratively annotate videos anywhere at any time. Fully implemented features include: • User login system for fine-grained permission and access control • Video watching • Video search using keywords, geographic position, depth and time range, and any combination thereof • Video annotation organised in themes (tracks) such as biology and geology, among others, in standard or full-screen mode • Annotation keyword management: administrative users can add, delete, and update single keywords for annotation or upload sets of keywords from Excel sheets • Download of products for scientific use. This unique web application system helps make costly ROV videos available online (costs are estimated at 5,000-10,000 Euros per hour depending on the combination of ship and ROV). Moreover, with this system each expert annotation adds instantly available and valuable knowledge to otherwise uncharted material.
Healthcare Supported by Data Mule Networks in Remote Communities of the Amazon Region
Coutinho, Mauro Margalho; Efrat, Alon; Richa, Andrea
2014-01-01
This paper investigates the feasibility of using boats as data mule nodes, carrying medical ultrasound videos from remote and isolated communities in the Amazon region in Brazil, to the main city of that area. The videos will be used by physicians to perform remote analysis and follow-up routine of prenatal examinations of pregnant women. Two open source simulators (the ONE and NS-2) were used to evaluate the results obtained utilizing a CoDPON (continuous displacement plan oriented network). The simulations took into account the connection times between the network nodes (boats) and the number of nodes on each boat route. PMID:27433519
Cable and Line Inspection Mechanism
NASA Technical Reports Server (NTRS)
Ross, Terence J. (Inventor)
2003-01-01
An automated cable and line inspection mechanism visually scans the entire surface of a cable as the mechanism travels along the cable's length. The mechanism includes a drive system, a video camera, a mirror assembly for providing the camera with a 360 degree view of the cable, and a laser micrometer for measuring the cable's diameter. The drive system includes an electric motor and a plurality of drive wheels and tension wheels for engaging the cable or line to be inspected, and driving the mechanism along the cable. The mirror assembly includes mirrors that are positioned to project multiple images of the cable on the camera lens, each of which is of a different portion of the cable. A data transceiver and a video transmitter are preferably employed for transmission of video images, data and commands between the mechanism and a remote control station.
Zhao, Ting; Pi, Hong-Ying; Ku, Hong-An; Pan, Li; Gong, Zhu-Yun
2018-02-08
To investigate the establishment, application and evaluation of a fall prevention and control information system for the elderly in the community. Relying on internet technology and informatization, the comprehensive fall prevention and control strategy for the elderly was moved from offline to online. A fall prevention and control information system combining risk assessment, remote education and feedback was established. One hundred and twenty-six elderly people (over 60 years old) in the community were screened in this study, and 84 high-risk elders were enrolled in the remote continuous comprehensive intervention. Intervention measures included distributing a propaganda album and producing mission slides and video played with remote interpretation. Fall-related status before and after the intervention was then analyzed and the effectiveness of the system evaluated. After remote intervention, the fall incidence in the high-risk group decreased from 21.43% to 4.76% (P < 0.01). Body balance and gait stability improved clearly (P < 0.01). The rate of taking proper prevention and control behavior significantly improved (P < 0.01). Participants reported greater confidence that they would not fall when performing complex behaviors (P < 0.01). The safety of the home environment was significantly enhanced (P < 0.01). The fall prevention and control information system for the elderly community was innovative and convenient. The system could comprehensively assess fall-related status and accurately screen out the high-risk group, and could implement remote continuous comprehensive intervention so that the incidence of falls decreased. In conclusion, the system is stable and effective and can be further popularized and applied as a successful pilot.
Presence in Video-Mediated Interactions: Case Studies at CSIRO
NASA Astrophysics Data System (ADS)
Alem, Leila
Although telepresence and a sense of connectedness with others are frequently mentioned in media space studies, as far as we know, none of these studies report attempts at assessing this critical aspect of user experience. While some attempts have been made to measure presence in virtual reality or augmented reality (a comprehensive review of existing measures is available in Baren and Ijsselsteijn [2004]), very little work has been reported on measuring presence in video-mediated collaboration systems. Traditional studies of video-mediated collaboration have mostly focused their evaluation on measures of task performance and user satisfaction. Videoconferencing systems can be seen as a type of media space; they rely on technologies of audio, video, and computing put together to create an environment extending the embodied mind. This chapter reports on a set of video-mediated collaboration studies conducted at CSIRO in which different aspects of presence are investigated. The first study reports the sense of physical presence a specialist doctor experiences when engaged in a remote consultation of a patient using the virtual critical care unit (Alem et al., 2006). The Viccu system is an "always-on" system connecting two hospitals (Li et al., 2006). The presence measure focuses on the extent to which users of videoconferencing systems feel physically present in the remote location. The second study reports the sense of social presence users experience when playing a game of charades with remote partners using a video conference link (Kougianous et al., 2006). In this study the presence measure focuses on the extent to which users feel connected with their remote partners. The third study reports the sense of copresence users experience when collaboratively building a Lego toy (Melo and Alem, 2007). The sense of copresence is the extent to which users feel present with their remote partner.
In this final study the sense of copresence is investigated by looking at the words used by users when referring to the physical objects they are manipulating during their interaction as well as when referring to locations in the collaborative workspace. We believe that such efforts provide a solid stepping stone for evaluating and analyzing future media spaces.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
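The two processing steps described above (background subtraction to isolate the laser spot, then stereometric ranging from horizontal disparity) can be sketched as follows. This is an illustrative sketch, not the patented implementation: the scanline values, focal length, baseline, and change threshold are all assumptions, and real images are 2-D rather than single scanlines.

```python
# Sketch: isolate the laser spot by differencing a pre-illumination
# frame against the illuminated frame, then range via Z = f * B / d.

def laser_spot_column(before, after, threshold=50):
    """Column of the pixel that changed most between the frame taken
    before laser illumination and the frame containing the spot."""
    change, col = max(
        (abs(a - b), c) for c, (a, b) in enumerate(zip(after, before))
    )
    return col if change > threshold else None

def stereo_range(focal_px, baseline_m, left_col, right_col):
    """Range in metres from horizontal disparity (in pixels)."""
    disparity = left_col - right_col
    return focal_px * baseline_m / disparity

# One scanline per camera: background, then the same scene with the spot.
left_before  = [10, 12, 11, 10, 12, 11, 10, 10]
left_after   = [10, 12, 11, 10, 250, 11, 10, 10]   # spot at column 4
right_before = [11, 10, 12, 11, 10, 12, 10, 11]
right_after  = [11, 10, 250, 11, 10, 12, 10, 11]   # spot at column 2

lc = laser_spot_column(left_before, left_after)
rc = laser_spot_column(right_before, right_after)
print(f"range = {stereo_range(800, 0.1, lc, rc):.1f} m")
```

The differencing step is what makes the ranging robust: only the laser spot survives in both processed images, so the disparity computation never has to match arbitrary scene texture.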
Secure Internet video conferencing for assessing acute medical problems in a nursing facility.
Weiner, M.; Schadow, G.; Lindbergh, D.; Warvel, J.; Abernathy, G.; Dexter, P.; McDonald, C. J.
2001-01-01
Although video-based teleconferencing is becoming more widespread in the medical profession, especially for scheduled consultations, applications for rapid assessment of acute medical problems are rare. Use of such a video system in a nursing facility may be especially beneficial, because physicians are often not immediately available to evaluate patients. We have assembled and tested a portable, wireless conferencing system to prepare for a randomized trial of the system's influence on resource utilization and satisfaction. The system includes a rolling cart with video conferencing hardware and software, a remotely controllable digital camera, light, wireless network, and battery. A semi-automated paging system informs physicians of patients' study status and indications for conferencing. Data transmission occurs wirelessly in the nursing home and then through Internet cables to the physician's home. This provides sufficient bandwidth to support quality motion images. IPsec secures communications. Despite human and technical challenges, this system is affordable and functional. PMID:11825286
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general, scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
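The core HR-extraction step common to video plethysmography pipelines can be sketched as a spectral-peak search. This is a generic sketch, not the paper's algorithm: the synthetic 72 bpm green-channel trace, the sampling rate, and the 0.7-4 Hz pulse band are assumptions, and frame timing is taken as uniform (the jitter correction mentioned above would happen before this step).

```python
# Sketch: estimate heart rate from a camera's mean green-channel
# signal by locating the spectral peak in the plausible pulse band.
import numpy as np

FS = 30.0                       # camera frame rate, frames/s
N = 900                         # 30 s of video
t = np.arange(N) / FS

# Synthetic green-channel trace: 1.2 Hz pulse (72 bpm) plus noise.
rng = np.random.default_rng(1)
green = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(N)

def estimate_hr(signal, fs, lo=0.7, hi=4.0):
    """Heart rate (bpm) from the spectral peak in the pulse band."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)     # 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

print(f"estimated HR: {estimate_hr(green, FS):.1f} bpm")
```

Restricting the search to the 0.7-4 Hz band is what keeps illumination flicker and slow motion artifacts from being mistaken for the pulse peak.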
Video in the Outback: An Evaluation of the Loan Video Programme in Western Australia.
ERIC Educational Resources Information Center
Hosie, Peter
This study was conducted to examine the reactions of children, parents, and teachers to the Loan Video Programme in Western Australia, which supplies videocassette recordings of the ABC (Australian Broadcasting Commission) school broadcasts to primary Distance Education Centre and School of the Air students in remote locations. Findings reported…
Headlines: Planet Earth: Improving Climate Literacy with Short Format News Videos
NASA Astrophysics Data System (ADS)
Tenenbaum, L. F.; Kulikov, A.; Jackson, R.
2012-12-01
One of the challenges of communicating climate science is the sense that climate change is remote and unconnected to daily life: something that is happening to someone else or in the future. To help face this challenge, NASA's Global Climate Change website http://climate.nasa.gov has launched a new video series, "Headlines: Planet Earth," which focuses on current climate news events. This rapid-response video series uses 3D video visualization technology combined with real-time satellite data and images to throw a spotlight on real-world events. The "Headlines: Planet Earth" news video products will be deployed frequently, ensuring timeliness. NASA's Global Climate Change website makes extensive use of interactive media, immersive visualizations, ground-based and remote images, narrated and time-lapse videos, time-series animations, and real-time scientific data, plus maps and user-friendly graphics that make the scientific content both accessible and engaging to the public. The site has also won two consecutive Webby Awards for Best Science Website. Connecting climate science to current real-world events will contribute to improving climate literacy by making climate science relevant to everyday life.
Use of telemedicine in the remote programming of cochlear implants.
Ramos, Angel; Rodriguez, Carina; Martinez-Beneyto, Paz; Perez, Daniel; Gault, Alexandre; Falcon, Juan Carlos; Boyle, Patrick
2009-05-01
Remote cochlear implant (CI) programming is a viable, safe, user-friendly and cost-effective procedure, equivalent to standard programming in terms of efficacy and user's perception, which can complement the standard procedures. The potential benefits of this technique are outlined. We assessed the technical viability, risks and difficulties of remote CI programming; and evaluated the benefits for the user comparing the standard on-site CI programming versus the remote CI programming. The Remote Programming System (RPS) basically consists of completing the habitual programming protocol in a regular CI centre, assisted by local staff, although guided by a remote expert, who programs the CI device using a remote programming station that takes control of the local station through the Internet. A randomized prospective study has been designed with the appropriate controls comparing RPS to the standard on-site CI programming. Study subjects were implanted adults with a HiRes 90K(R) CI with post-lingual onset of profound deafness and 4-12 weeks of device use. Subjects underwent two daily CI programming sessions either remote or standard, on 4 programming days separated by 3 month intervals. A total of 12 remote and 12 standard sessions were completed. To compare both CI programming modes we analysed: program parameters, subjects' auditory progress, subjects' perceptions of the CI programming sessions, and technical aspects, risks and difficulties of remote CI programming. Control of the local station from the remote station was carried out successfully and remote programming sessions were achieved completely and without incidents. Remote and standard program parameters were compared and no significant differences were found between the groups. The performance evaluated in subjects who had been using either standard or remote programs for 3 months showed no significant difference. Subjects were satisfied with both the remote and standard sessions. 
Safety was proven by checking emergency stops in different conditions. A very small delay was noticed that did not affect the ease of the fitting. The oral and video communication between the local and the remote equipment was established without difficulties and was of high quality.
Video Data Compression Study for Remote Sensors
1976-02-01
U.S. Department of Commerce, National Technical Information Service, AD-A023 845: Video Data Compression Study for Remote Sensors, Ohio University. [Front-matter residue; recoverable citation: T. S. Huang and J. W. Woods, "Picture Bandwidth Compression by Linear Transformation and Block...".]
Development of tools and techniques for monitoring underwater artifacts
NASA Astrophysics Data System (ADS)
Lazar, Iulian; Ghilezan, Alin; Hnatiuc, Mihaela
2016-12-01
The different assessments provide information on the best methods to approach an artifact. The presence and extent of potential threats to the archaeology must also be determined. In this paper we present an underwater robot, built in the laboratory, able to identify an artifact and bring it to the surface. It is an underwater remotely operated vehicle (ROV) which can be controlled remotely from the shore, a boat or a control station, and communication is possible through an Ethernet cable with a maximum length of 100 m. The robot is equipped with an IP camera which sends real-time images that can be accessed anywhere from within the network. The camera also has a microSD card to store the video. The methods developed for data communication between the robot and the user are presented. A communication protocol between the client and server was developed to control the ROV.
Virtual collaborative environments: programming and controlling robotic devices remotely
NASA Astrophysics Data System (ADS)
Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.
1995-12-01
This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware; computer controlled resources owned and operated by different (in a geographic sense as well as legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototypic parts during product design validation. An existing prototype architecture and system has been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.
Distributed computing testbed for a remote experimental environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butner, D.N.; Casper, T.A.; Howard, B.C.
1995-09-18
Collaboration is increasing as physics research becomes concentrated on a few large, expensive facilities, particularly in magnetic fusion energy research, with national and international participation. These facilities are designed for steady-state operation and interactive, real-time experimentation. We are developing tools to provide for the establishment of geographically distant centers for interactive operations; such centers would allow scientists to participate in experiments from their home institutions. A testbed is being developed for a Remote Experimental Environment (REE), a "Collaboratory." The testbed will be used to evaluate the ability of a remotely located group of scientists to conduct research on the DIII-D Tokamak at General Atomics. The REE will serve as a testing environment for advanced control and collaboration concepts applicable to future experiments. Process-to-process communications over high-speed wide area networks provide real-time synchronization and exchange of data among multiple computer networks, while the ability to conduct research is enhanced by adding audio/video communication capabilities. The Open Software Foundation's Distributed Computing Environment is being used to test concepts in distributed control, security, naming, remote procedure calls and distributed file access using the Distributed File Services. We are exploring the technology and sociology of remotely participating in the operation of a large-scale experimental facility.
Remote Video Auditing in the Surgical Setting.
Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David
2017-02-01
Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e207712 - iss042e209132). Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e203119 - iss042e203971). Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
The use of open data from social media for the creation of 3D georeferenced modeling
NASA Astrophysics Data System (ADS)
Themistocleous, Kyriacos
2016-08-01
There is a great deal of open-source video on the internet posted by users on social media sites. With the release of low-cost unmanned aerial vehicles (UAVs), many hobbyists are uploading videos from different locations, especially in remote areas. Using such openly available data, this study applied structure from motion (SfM) as a range-imaging technique to estimate three-dimensional landscape features from two-dimensional image sequences extracted from video, with image distortion correction and geo-referencing. This type of documentation may be necessary for cultural heritage sites that are inaccessible or difficult to document, where UAV video is available. The resulting 3D models can be viewed in Google Earth and used to create orthoimages, drawings, and digital terrain models for cultural heritage and archaeological purposes in remote or inaccessible areas.
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a free-moving stereo HMD. Results showed that the head-slaved system provided the best performance.
NASA Technical Reports Server (NTRS)
Martin, David S.; Borowski, Allan; Bungo, Michael W.; Gladding, Patrick; Greenberg, Neil; Hamilton, Doug; Levine, Benjamin D.; Lee, Stuart M.; Norwood, Kelly; Platts, Steven H.;
2012-01-01
Methods: In the year before launch of an ISS mission, potential astronaut echocardiographic operators participate in 5 sessions to train for echo acquisitions that occur roughly monthly during the mission, including one exercise echocardiogram. The focus of training is familiarity with the study protocol and remote guidance procedures. On-orbit, real-time guidance of in-flight acquisitions is provided by a sonographer in the Telescience Center of Mission Control. Physician investigators with remote access are able to relay comments on image quality to the sonographer. Live video feed is relayed from the ISS to the ground via the Tracking and Data Relay Satellite System with a 2-second transmission delay. The expert sonographer uses these images, along with two-way audio, to provide instructions and feedback. Images are stored in non-compressed DICOM format for asynchronous relay to the ground for subsequent off-line analysis. Results: Since June 2009, a total of 27 resting echocardiograms and 5 exercise studies have been performed during flight. Average acquisition time has been 45 minutes, reflecting 26,000 km of ISS travel per study. Image quality has been adequate in all studies, and remote guidance has proven imperative for fine-tuning imaging and prioritizing views when communication outages limit the study duration. Typical resting studies have included 27 video loops and 30 still-frame images requiring 750 MB of storage. Conclusions: Despite limited crew training, remote guidance allows research-quality echocardiography to be performed by non-experts aboard the ISS. Analysis is underway and additional subjects are being recruited to define the impact of microgravity on cardiac structure and systolic and diastolic function.
Design of multifunction anti-terrorism robotic system based on police dog
NASA Astrophysics Data System (ADS)
You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie
2007-11-01
To address typical limitations of the police dogs and robots currently used in reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on a police dog is introduced. The system is made up of two parts: a portable commanding device and the police dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data transceiver module, and commanding module; it implements remote control of the police dog and real-time monitoring of video and images. The police dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data from counter-terrorism sites in real time and mounts attacks on command. The system combines the police dog's biological intelligence with a micro robot. It not only avoids the complexity of a typical anti-terrorism robot's mechanical structure and control algorithms, but also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.
Increased ISR operator capability utilizing a centralized 360° full motion video display
NASA Astrophysics Data System (ADS)
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
2012-06-01
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad electronic sensors available today can provide data quickly, they may overload the operator; only a contextualized, centralized display of information with an intuitive human interface can support the quick and effective decisions needed. If these decisions are to result in quick actions, the operator must be able to understand all of the data describing his environment. In this paper we present a novel approach to contextualizing multi-sensor data on a real-time, 360-degree full motion video display. The system described could function as a primary display system for command and control in security, military, and observation posts. It can process and enable interactive control of multiple other sensor systems, and it enhances the value of those sensors by overlaying their information on a panorama of the surroundings. It can also interface to other systems, including auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
Remote Laboratory and Animal Behaviour: An Interactive Open Field System
ERIC Educational Resources Information Center
Fiore, Lorenzo; Ratti, Giovannino
2007-01-01
Remote laboratories can provide distant learners with practical acquisitions which would otherwise remain precluded. Our proposal here is a remote laboratory on a behavioural test (open field test), with the aim of introducing learners to the observation and analysis of stereotyped behaviour in animals. A real-time video of a mouse in an…
Synchronous, Remote, Internet Conferencing with Unique Populations in Various Settings.
ERIC Educational Resources Information Center
Mallory, James R.; MacKenzie, Douglas
This paper focuses on the authors' experiences with interactive, synchronous Internet video conferencing using Microsoft's NetMeeting software with deaf and hard-of-hearing students in two different settings. One setting involved teaching and tutoring computer programming to remote deaf and hard-of-hearing students in a remote situation using…
Stephenson, Rob; Freeland, Ryan; Sullivan, Stephen P; Riley, Erin; Johnson, Brent A; Mitchell, Jason; McFarland, Deborah; Sullivan, Patrick S
2017-05-30
HIV prevalence remains high among men who have sex with men (MSM) in the United States, yet the majority of research has focused on MSM as individuals, not as dyads, and has discussed HIV risks primarily in the context of casual sex. Nexus is an online prevention program that combines home-based HIV testing and couples HIV testing and counseling (CHTC). It allows partners in dyadic MSM relationships to receive HIV testing and care in the comfort of their designated residence, via video-based chat. By using video-based technologies (eg, VSee video chat), male couples receive counseling and support from a remote online counselor, while testing for HIV at home. This randomized controlled trial (RCT) aims to examine the effects of video-based counseling combined with home-based HIV testing on couples' management of HIV risk, formation of and adherence to explicit sexual agreements, and sexual risk-taking. The research implements a prospective RCT of 400 online-recruited male couples: 200 self-reported concordant-negative couples and 200 self-reported discordant couples. Couples in the control arm will receive one or two home-based HIV self-testing kits and will be asked to report their results via the study's website. Couples in the experimental arm will receive one or two home-based HIV self-testing kits and will conduct these tests together under the facilitation of a remotely located counselor during a prescheduled VSee-based video CHTC session. Study assessments are taken at baseline, as well as at 3- and 6-month follow-up sessions. Project Nexus was launched in April 2016 and is ongoing. To date, 219 eligible couples have been enrolled and randomized.
Combining home-based HIV testing with video-based counseling creates an opportunity to expand CHTC to male couples who (1) live outside metro areas, (2) live in rural areas without access to testing services or LGBTQ resources, or (3) feel that current clinic-based testing is not for them (eg, due to fears of discrimination associated with HIV and/or sexuality). ClinicalTrials.gov NCT02335138; https://clinicaltrials.gov/ct2/show/NCT02335138 (Archived by WebCite at http://www.webcitation.org/6qHxtNIdW).
NASA Technical Reports Server (NTRS)
1994-01-01
This video presents two examples of NASA Technology Transfer. The first is a Downhole Video Logger, which uses remote sensing technology to help in mining. The second example is the use of satellite image processing technology to enhance ultrasound images taken during pregnancy.
Monitoring and diagnosis of vegetable growth based on internet of things
NASA Astrophysics Data System (ADS)
Zhang, Qian; Yu, Feng; Fu, Rong; Li, Gang
2017-10-01
A new condition monitoring method for vegetable growth, based on the internet of things, was proposed. It organically combines remote environmental monitoring, video surveillance, intelligent decision-making, and two-way video consultation.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e211498 - iss042e212135). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e162807 - iss042e163936). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e193144 - iss042e194102). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e209133 - iss042e210379). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e215401 - iss042e215812). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e290689 - iss042e291289). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e249923 - iss042e250759). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e170341 - iss042e171462). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e244330 - iss042e245101). Shows Earth views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
Hori, Kenta; Kuroda, Tomohiro; Oyama, Hiroshi; Ozaki, Yasuhiko; Nakamura, Takehiko; Takahashi, Takashi
2005-12-01
For faultless collaboration among the surgeon, surgical staff, and surgical robots in telesurgery, communication must include environmental information from the remote operating room, such as the behavior of robots and staff and the patient's vital signs (collectively termed supporting information), in addition to the view of the surgical field. The "Surgical Cockpit System," a telesurgery support system developed by the authors, is mainly focused on exchanging supporting information between remote sites. Live video presentation is an important technology for the Surgical Cockpit System: a visualization method that conveys the precise location and posture of surgical instruments is indispensable for accurate control and faultless operation. In this paper, the authors propose a three-side-view presentation method for precise location/posture control of surgical instruments in telesurgery. Experimental results show that the proposed method improved the positioning accuracy of a telemanipulator.
Jiao, Yang; Xu, Liang; Gao, Min-Guang; Feng, Ming-Chun; Jin, Ling; Tong, Jing-Jing; Li, Sheng
2012-07-01
Passive remote sensing by Fourier-transform infrared (FTIR) spectrometry allows detection of air pollution. However, to localize a leak and completely assess the situation when a hazardous cloud is released, information about the position and distribution of the cloud is essential. Therefore, an imaging passive remote sensing system comprising an interferometer, data acquisition and processing software, a scanning system, a video system, and a personal computer has been developed, and remote sensing of SF6 was performed. The column densities in all directions in which a target compound has been identified are retrieved by a nonlinear least-squares fitting algorithm and a radiative transfer algorithm, and a false-color image is displayed. The results were visualized as a video image overlaid with the false-color concentration distribution. The system is highly selective and allows visualization and quantification of pollutant clouds.
ERIC Educational Resources Information Center
Braunstein, Jean; And Others
The major purpose of the Health, Education, Telecommunications experiment was to demonstrate the feasibility of distributing video materials to a large number of low-cost earth terminals located in rural areas. The receivers are of two types: one-way video receivers for the reception of video programs, and two-way voice/data terminals which permit…
Galaiduk, Ronen; Radford, Ben T; Wilson, Shaun K; Harvey, Euan S
2017-12-15
Information on habitat associations from survey data, combined with spatial modelling, allows the development of more refined species distribution models, which may identify areas of high conservation/fisheries value and consequently improve conservation efforts. Generalised additive models were used to model the probability of occurrence of six focal species surveyed with two remote underwater video sampling methods (i.e. baited and towed video). Models developed for the towed video method had consistently better predictive performance for all but one study species, although only three models had a good to fair fit and the rest were poor fits, highlighting the challenges associated with modelling habitat associations of marine species in highly homogeneous, low-relief environments. Models based on the baited video dataset regularly included large-scale measures of structural complexity, suggesting fish are attracted to a single focus point by bait. Conversely, models based on the towed video data often incorporated small-scale measures of habitat complexity and were more likely to reflect true species-habitat relationships. The cost associated with using towed video systems to survey low-relief seascapes was also relatively low, providing additional support for considering this method for marine spatial ecological modelling.
Clay-Williams, Robyn; Baysari, Melissa; Taylor, Natalie; Zalitis, Dianne; Georgiou, Andrew; Robinson, Maureen; Braithwaite, Jeffrey; Westbrook, Johanna
2017-08-14
Telephone consultation and triage services are increasingly being used to deliver health advice. Availability of high speed internet services in remote areas allows healthcare providers to move from telephone to video telehealth services. Current approaches for assessing video services have limitations. This study aimed to identify the challenges for service providers associated with transitioning from audio to video technology. Using a mixed-method, qualitative approach, we observed training of service providers who were required to switch from telephone to video, and conducted pre- and post-training interviews with 15 service providers and their trainers on the challenges associated with transitioning to video. Two full days of simulation training were observed. Data were transcribed and analysed using an inductive approach; a modified constant comparative method was employed to identify common themes. We found three broad categories of issues likely to affect implementation of the video service: social, professional, and technical. Within these categories, eight sub-themes were identified; they were: enhanced delivery of the health service, improved health advice for people living in remote areas, safety concerns, professional risks, poor uptake of video service, system design issues, use of simulation for system testing, and use of simulation for system training. This study identified a number of unexpected potential barriers to successful transition from telephone to the video system. Most prominent were technical and training issues, and personal safety concerns about transitioning from telephone to video media. Addressing identified issues prior to implementation of a new video telehealth system is likely to improve effectiveness and uptake.
Advancements in remote physiological measurement and applications in human-computer interaction
NASA Astrophysics Data System (ADS)
McDuff, Daniel
2017-04-01
Physiological signals are important for tracking health and emotional states. Imaging photoplethysmography (iPPG) is a set of techniques for remotely recovering cardio-pulmonary signals from video of the human body. Advances in iPPG methods over the past decade, combined with the ubiquity of digital cameras, present the possibility of many new, low-cost applications of physiological monitoring. This talk will highlight methods for recovering physiological signals, work characterizing the impact of video parameters and hardware on these measurements, and applications of this technology in human-computer interfaces.
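The core iPPG idea, that pulse-synchronous color changes in skin pixels can be spatially averaged into a cardiac waveform, can be sketched in a few lines. The pipeline below is an illustrative assumption (green-channel averaging plus a zero-crossing rate estimate), not a specific method from the talk; the frame rate, amplitude, and synthetic signal are invented for the example.

```python
import math

# Assumed sketch of an iPPG-style pipeline. A real system would average
# the green channel over detected skin pixels in each video frame; here
# we synthesize those per-frame means directly.
fs = 30.0                          # video frame rate, Hz (assumed)
heart_hz = 1.25                    # 75 bpm, used to synthesize the toy signal

# stand-in for per-frame green-channel means over a skin region
signal = [128 + 2 * math.sin(2 * math.pi * heart_hz * i / fs)
          for i in range(300)]     # 10 s of video

# remove the DC (mean skin brightness) component
mean = sum(signal) / len(signal)
detrended = [s - mean for s in signal]

# count rising zero crossings -> cardiac cycles -> beats per minute
crossings = sum(1 for a, b in zip(detrended, detrended[1:]) if a < 0 <= b)
bpm = crossings / (len(signal) / fs) * 60
```

A zero-crossing count quantizes the rate to whole cycles over the window; real iPPG work typically uses spectral or peak-tracking estimators and must also contend with motion and illumination artifacts.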
Remote console for virtual telerehabilitation.
Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E
2005-01-01
The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.
Fiber optic cable-based high-resolution, long-distance VGA extenders
NASA Astrophysics Data System (ADS)
Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon
2013-02-01
Remote transfer of high-resolution video is finding more applications in detached displays for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for extending both the transmittable resolution and distance beyond what standard copper-based cables can reach. In addition to standard digital formats such as HDMI, high-resolution, long-distance transfer of VGA-format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we describe the development of compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extenders based on a serialization technique. We employed asynchronous serial transmission and clock regeneration, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. Because the data are transmitted asynchronously, a 24-bit pixel clock time stamp is employed to regenerate the video pixel clock accurately at the receiver. In parallel with the video information, stereo audio and RS-232 control signals are transmitted as well.
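The 6.25-Gbps figure can be sanity-checked against the raw WUXGA pixel rate. The refresh rate below is an assumption (the abstract does not state one), so this is a rough budget, not the authors' calculation:

```python
# Rough link budget for the WUXGA VGA extender described above.
width, height, bits_per_pixel = 1920, 1200, 24
refresh_hz = 60  # assumed refresh rate; not stated in the abstract

# raw active-pixel payload rate in Gbps
raw_gbps = width * height * bits_per_pixel * refresh_hz / 1e9   # ~3.32 Gbps

# Serial links add line-coding overhead (e.g. 8b/10b), blanking intervals,
# and the 24-bit pixel-clock time stamps, which is presumably why the
# design provisions two 3.125-Gbps lanes rather than the raw rate.
aggregate_gbps = 2 * 3.125                                      # 6.25 Gbps
headroom = aggregate_gbps - raw_gbps                            # ~2.9 Gbps
```

With 8b/10b coding the payload rate alone would grow to roughly 4.15 Gbps, still comfortably inside the 6.25-Gbps aggregate.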
Remotely detected differential pulse transit time as a stress indicator
NASA Astrophysics Data System (ADS)
Kaur, Balvinder; Tarbox, Elizabeth; Cissel, Marty; Moses, Sophia; Luthra, Megha; Vaidya, Misha; Tran, Nhien; Ikonomidou, Vasiliki N.
2015-05-01
The human cardiovascular system, controlled by the autonomic nervous system (ANS), is one of the first sites where the "fight-or-flight" response to external stressors can be observed. In this paper, we investigate the possibility of detecting mental stress using a novel measure that can be obtained in a contactless manner: pulse transit time (PTT), the time required for the blood wave (BW) to travel from the heart to a defined remote location in the body. Loosely related to blood pressure, PTT is a measure of blood velocity and is also implicated in the "fight-or-flight" response. We define the differential PTT (dPTT) as the difference in PTT between two remote areas of the body, such as the forehead and the palm. Expanding our previous work on remote BW detection from visible-spectrum videos, we built a system that remotely measures dPTT. Human subject data were collected under an IRB-approved protocol from 15 subjects in both normal and stressed states and are used to initially establish the potential of remote dPTT detection as a stress indicator.
1994-07-10
TEMPUS, an electromagnetic levitation facility that allows containerless processing of metallic samples in microgravity, first flew on the IML-2 Spacelab mission. The principle of electromagnetic levitation is commonly used in ground-based experiments to melt metallic samples and then cool the melts below their freezing points without solidification occurring. TEMPUS operation is controlled by its own microprocessor system, although commands may be sent remotely from the ground and real-time adjustments may be made by the crew. Two video cameras, a two-color pyrometer for measuring sample temperatures, and a fast infrared detector for monitoring solidification spikes will be mounted on the process chamber to facilitate observation and analysis. In addition, a dedicated high-resolution video camera can be attached to TEMPUS to measure the sample volume precisely.
Stereo vision techniques for telescience
NASA Astrophysics Data System (ADS)
Hewett, S.
1990-02-01
The Botanic Experiment is one of the pilot experiments in the Telescience Test Bed program at the ESTEC research and technology center of the European Space Agency. The aim of the Telescience Test Bed is to develop the techniques required by an experimenter using a ground based work station for remote control, monitoring, and modification of an experiment operating on a space platform. The purpose of the Botanic Experiment is to examine the growth of seedlings under various illumination conditions with a video camera from a number of viewpoints throughout the duration of the experiment. This paper describes the Botanic Experiment and the points addressed in developing a stereo vision software package to extract quantitative information about the seedlings from the recorded video images.
Ocular examination for trauma; clinical ultrasound aboard the International Space Station.
Chiao, Leroy; Sharipov, Salizhan; Sargsyan, Ashot E; Melton, Shannon; Hamilton, Douglas R; McFarlin, Kellie; Dulchavsky, Scott A
2005-05-01
Ultrasound imaging is a successful modality in a broad variety of diagnostic applications including trauma. Ultrasound has been shown to be accurate when performed by non-radiologist physicians; recent reports have suggested that non-physicians can perform limited ultrasound examinations. A multipurpose ultrasound system is installed on the International Space Station (ISS) as a component of the Human Research Facility (HRF). This report documents the first ocular ultrasound examination conducted in space, which demonstrated the capability to assess physiologic alterations or pathology including trauma during long-duration space flight. An ISS crewmember with minimal sonography training was remotely guided by an imaging expert from Mission Control Center (MCC) through a comprehensive ultrasound examination of the eye. A multipurpose ultrasound imager was used in conjunction with a space-to-ground video downlink and two-way audio. Reference cards with topological reference points, hardware controls, and target images were used to facilitate the examination. Multiple views of the eye structures were obtained through a closed eyelid. Pupillary response to light was demonstrated by modifying the light exposure of the contralateral eye. A crewmember on the ISS was able to complete a comprehensive ocular examination using B- and M-mode ultrasonography with remote guidance from an expert in the MCC. Multiple anteroposterior, oblique, and coronal views of the eye clearly demonstrated the anatomic structures of both segments of the globe. The iris and pupil were readily visualized with probe manipulation. Pupillary diameter was assessed in real time in B- and M-mode displays. The anatomic detail and fidelity of ultrasound video were excellent and could be used to answer a variety of clinical and space physiologic questions. 
A comprehensive, high-quality ultrasound examination of the eye was performed with a multipurpose imager aboard the ISS by a non-expert operator using remote guidance. Ocular ultrasound images were of diagnostic quality despite the 2-second communication latency and the unconventional setting of a weightless spacecraft environment. The remote guidance techniques developed to facilitate this successful NASA research experiment will support wider applications of ultrasound for remote medicine on Earth including the assessment of pupillary reactions in patients with severe craniofacial trauma and swelling.
Towards real-time remote processing of laparoscopic video
NASA Astrophysics Data System (ADS)
Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.
2015-03-01
Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One laparoscopic camera system of particular interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps), each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for performing complex tasks and minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will enable real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
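The quoted figures imply a tight per-frame deadline that network transfer alone can consume. The arithmetic below uses only the numbers from the abstract plus an assumed link speed for illustration:

```python
# Back-of-envelope timing budget for remote processing of the
# laparoscopic video stream, using the figures quoted above.
frame_size_mb = 11.9                 # MB per video frame (from the abstract)
fps = 30                             # frames per second

stream_rate_mb_s = frame_size_mb * fps   # ~357 MB/s, matching the ~360 MB/s quoted
deadline_ms = 1000.0 / fps               # ~33.3 ms round-trip budget per frame

def transfer_ms(frame_mb, link_gbps):
    # one-way transfer time of a frame over a link of `link_gbps` Gb/s
    return frame_mb * 8 / link_gbps      # (MB * 8 bits) / (Gb/s) = ms

# e.g. on a hypothetical 10 Gb/s path (assumed, not from the abstract):
one_way = transfer_ms(frame_size_mb, 10)     # ~9.5 ms each way
compute_budget = deadline_ms - 2 * one_way   # ~14 ms left for HPC processing
```

Even at 10 Gb/s, transport consumes more than half the 33 ms budget, which is why the abstract emphasizes high-speed networks and software-defined routing to HPC clusters.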
Selective visual region of interest to enhance medical video conferencing
NASA Astrophysics Data System (ADS)
Bonneau, Walt, Jr.; Read, Christopher J.; Shirali, Girish
1998-06-01
The continued economic pressure being placed on the healthcare industry creates both a challenge and an opportunity to develop cost-effective healthcare tools. Tools that improve the quality of medical care while also improving the efficient distribution of that care will create product demand. Video conferencing systems are among the latest product technologies making their way into healthcare applications. Systems that provide quality bi-directional video and imaging at the lowest system and communication cost are creating many options for the healthcare industry. A method that uses only 128 kbits/s of ISDN bandwidth while providing quality video images in selected regions will be applied to echocardiograms using a low-cost video conferencing system operating within the bandwidth of a basic rate ISDN line. Within a given display area (frame), it has been observed that only selected informational areas of the frame are of value when viewing for detail and precision within an image, much in the same manner that a photograph is cropped. If a method for specifying a Region Of Interest (ROI) were applied to video conferencing using the H.320, H.263 (compression), and H.281 (camera control) international standards, medical image quality could be achieved in a cost-effective manner. For example, the cardiologist could be provided with a selectable three- to eight-end-point viewable polygon that defines the ROI in the image. This is achieved by the video system calculating the selected regional end-points and creating an alpha mask that signifies the importance of the ROI to the compression processor. This region is then applied to the compression algorithm such that the majority of the video conferencing processor cycles are focused on the ROI of the image. An occasional update of the non-ROI area is processed to maintain total image coherence, and the user could control the non-ROI area updates.
Providing encoder-side ROI specification is of value; however, the power of this capability is improved if remote access and selection of the ROI is also provided. Using the H.281 camera standard, and proposing an additional option to the standard to allow for remote ROI selection, would make this possible. When ROI is applied, the equivalent of 384 kbits/s ISDN rates may be achieved or exceeded using only 128 kbits/s, depending upon the size of the selected ROI. This opens additional opportunity for international calling and reduces recurring communication costs by up to sixty-six percent, making them attractive. Rates of twenty to thirty quality ROI updates could be achieved. It is important to understand, however, that this technique is still under development.
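The alpha-mask idea described above can be sketched in a few lines. This is a hypothetical illustration, not the H.263 encoder itself: a ray-casting point-in-polygon test marks the macroblocks inside a user-selected three- to eight-end-point ROI for fine quantization, while everything outside gets a coarse quantizer step.

```python
# Hypothetical sketch of ROI-steered quantization: build a mask from the
# user-selected polygon end-points and derive a per-macroblock quantizer
# map -- fine (low step) inside the ROI, coarse (high step) outside.

def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) end-points (3 to 8)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                        # edge crosses the ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def quantizer_map(width_mb, height_mb, roi_poly, q_roi=4, q_bg=24):
    """Quantizer step per macroblock, tested at each block's centre."""
    return [[q_roi if point_in_polygon(mx + 0.5, my + 0.5, roi_poly) else q_bg
             for mx in range(width_mb)]
            for my in range(height_mb)]

# 11 x 9 macroblock grid (QCIF) with a triangular three-end-point ROI.
qmap = quantizer_map(11, 9, [(2, 1), (9, 1), (5, 8)])
```

In a real encoder the coarse-quantized background blocks would also be refreshed only occasionally, matching the "occasional update of the non-ROI area" described above.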
Boudjema, Sophia; Tarantini, Clément; Peretti-Watel, Patrick; Brouqui, Philippe
2017-05-01
We used video recordings of routine care to analyze health care workers' (HCWs') deviance from protocols, and organized follow-up interviews conducted by an anthropologist and a nurse. After giving consent, health care workers were recorded during routine care by an automatic remote-controlled video system. Each participant was invited to watch 2 different recordings of her or his routine practices and deviance from protocols, and to comment on them. After this step, an in-depth interview based on preestablished guidelines was organized and explanations for the observed deviance were discussed. This design was intended to reveal the HCWs' subjectivity; that is, how they perceive hand hygiene issues in their daily routine, what concrete difficulties they face, and how they try to resolve them. We selected 43 of the 250 video recordings created during the study, which allowed us to study 15 of the 20 health care professionals. Twenty of the 43 videos showed 1 or more breaches of the hand hygiene protocol, frequently linked to glove abuse. The health care workers explained deviance from protocols as the result of adaptive behavior; that is, of facing work constraints that were disconnected from infection control protocols. Professional practices and protocols should be revisited to create simple messages adapted to the mandatory needs of a real-life clinical environment. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Ultra-wide Range Gamma Detector System for Search and Locate Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odell, D. Mackenzie; Harpring, Larry J.; Moore, Frank S., Jr.
2005-10-26
Collecting debris samples following a nuclear event requires that operations be conducted from a considerable stand-off distance. An ultra-wide range gamma detector system has been constructed to perform both long-range radiation search and close-range hot sample collection functions. Constructed and tested on a REMOTEC Andros platform, the system has demonstrated reliable operation over six orders of magnitude of gamma dose, from hundreds of µR/hr to over 100 R/hr. Functional elements include a remotely controlled variable collimator assembly, a NaI(Tl)/photomultiplier tube detector, a proprietary digital radiation instrument, a coaxially mounted video camera, a digital compass, and both local and remote control computers with a user interface designed for long-range operations. Long-range sensitivity and target location, as well as close-range sample selection performance, are presented.
Security warning system monitors up to fifteen remote areas simultaneously
NASA Technical Reports Server (NTRS)
Fusco, R. C.
1966-01-01
Security warning system consisting of 15 television cameras is capable of monitoring several remote or unoccupied areas simultaneously. The system uses a commutator and decommutator, allowing time-multiplexed video transmission. This security system could be used in industrial and retail establishments.
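The commutator/decommutator scheme amounts to time-division multiplexing of the camera feeds: the commutator samples each of the 15 cameras in turn onto one channel, and the decommutator routes each slot back to its monitor. A toy sketch (frames modeled as labelled strings, not real video) might look like:

```python
# Toy time-division multiplexing of 15 camera feeds, in the spirit of
# the commutator/decommutator described above. Frames are stand-in
# strings; a real system would interleave video fields or lines.

NUM_CAMERAS = 15

def commutate(feeds, n_slots):
    """One commutator revolution per NUM_CAMERAS slots: slot i carries
    the next frame from camera i % NUM_CAMERAS."""
    return [(i % NUM_CAMERAS, feeds[i % NUM_CAMERAS][i // NUM_CAMERAS])
            for i in range(n_slots)]

def decommutate(channel):
    """Route each (camera, frame) slot back to its own monitor queue."""
    out = {cam: [] for cam in range(NUM_CAMERAS)}
    for cam, frame in channel:
        out[cam].append(frame)
    return out

feeds = [[f"cam{c}-frame{k}" for k in range(2)] for c in range(NUM_CAMERAS)]
channel = commutate(feeds, 30)       # two full revolutions
recovered = decommutate(channel)     # each camera's frames, in order
```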
[Practice of the use of remote telemedical consultations in "experimental area of work"].
Kalachev, O V; Plakhov, A N; Pershin, I V; Agapitov, A A; Andreev, A I; Yakovlev, A E
2016-02-01
The article presents experimental results of the "medical company--military hospital--central military hospital" telehealth technology chain. Requirements for the equipment and software used for telehealth consultations are specified. During testing, emergency "physician-to-physician" consultations were practiced, including the use of mobile video calls and portable videoconference terminals, as well as remote diagnosis with medical equipment and devices. Data on transmission characteristics and video resolution were obtained. The authors identify the main types of telecommunication equipment considered promising for the Armed Forces, and the prospects for implementing telecommunication technologies are shown.
Development of a microportable imaging system for otoscopy and nasoendoscopy evaluations.
VanLue, Michael; Cox, Kenneth M; Wade, James M; Tapp, Kevin; Linville, Raymond; Cosmato, Charlie; Smith, Tom
2007-03-01
Imaging systems for patients with cleft palate typically are not portable, but are essential to obtain an audiovisual record of nasoendoscopy and otoscopy procedures. Practitioners who evaluate patients in rural, remote, or otherwise medically underserved areas are expected to obtain audiovisual recordings of these procedures as part of standard clinical practice. Therefore, patients must travel substantial distances to medical facilities that have standard recording equipment. This project describes the specific components, strengths, and weaknesses of an MPEG-4 digital recording system for otoscopy/nasoendoscopy evaluation of patients with cleft palate that is both portable and compatible with store-and-forward telemedicine applications. Three digital recording configurations (TabletPC, handheld digital video recorder, and an 8-mm digital camcorder) were used to record the audio/video signal from an analog video scope system. The handheld digital video recorder was most effective at capturing audio/video and displaying procedures in real time. The system described was particularly easy to use, because it required no postrecording file capture or compression for later review, transfer, and/or archiving. The handheld digital recording system was assembled from commercially available components. The portability and telemedicine compatibility of the handheld digital video recorder offer a viable solution for documenting nasoendoscopy and otoscopy procedures in remote, rural, or other locations where reduced medical access precludes the use of larger component audio/video systems.
MS Mastracchio operates the RMS on the flight deck of Atlantis during STS-106
2000-09-11
STS106-E-5099 (11 September 2000) --- Astronaut Richard A. Mastracchio, mission specialist, stands near viewing windows, video monitors and the controls for the remote manipulator system (RMS) arm (out of frame at left) on the flight deck of the Earth-orbiting Space Shuttle Atlantis during Flight Day 3 activity. Atlantis was docked with the International Space Station (ISS) when this photo was recorded with an electronic still camera (ESC).
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e103580 - iss042e104044). Shows night time Earth views. Solar Array Wing (SAW) and Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e196791 - iss042e197504). Shows Earth views. Day time views turn into night time views. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
Color image processing and object tracking workstation
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Paulick, Michael J.
1992-01-01
A system is described for automatic and semiautomatic tracking of objects on film or videotape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and the tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
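The automatic tracking loop described above (increment frame, grab, locate edge, store coordinates) can be sketched as follows. The threshold-crossing edge detector and the synthetic frames are illustrative assumptions, not the NASA Lewis code.

```python
# Minimal sketch of the automatic tracking loop: for each frame, locate
# the edge of the tracked object and record its coordinate. A 1-D scan
# line stands in for a grabbed video frame; a threshold crossing stands
# in for the edge detector (e.g. a flame front along a row of pixels).

def find_edge(scanline, threshold=128):
    """Index of the first pixel at or above `threshold`, else None."""
    for i, v in enumerate(scanline):
        if v >= threshold:
            return i
    return None

def track(frames, threshold=128):
    """Process every frame in sequence, storing (frame_no, edge_pos)."""
    coords = []
    for frame_no, scanline in enumerate(frames):
        coords.append((frame_no, find_edge(scanline, threshold)))
    return coords

# Synthetic sequence: a bright front advancing one pixel per frame.
frames = [[0] * k + [255] * (10 - k) for k in range(3, 6)]
positions = track(frames)
```

In the real system the loop additionally drives the projector or tapedeck to advance the frame and writes the coordinates to a file rather than a list.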
NASA Technical Reports Server (NTRS)
Martin, David; Borowski, Allan; Bungo, Michael W.; Dulchavsky, Scott; Gladding, Patrick; Greenberg, Neil; Hamilton, Doug; Levine, Benjamin D.; Norwood, Kelly; Platts, Steven H.;
2011-01-01
Echocardiography is ideally suited for cardiovascular imaging in remote environments, but the expertise to perform it is often lacking. In 2001, an ATL HDI5000 was delivered to the International Space Station (ISS). The instrument is currently being used in a study to investigate the impact of long-term microgravity on cardiovascular function. The purpose of this report is to describe the methodology for remote guidance of echocardiography in space. Methods: In the year before launch of an ISS mission, potential astronaut echocardiographic operators participate in 5 sessions to train for echo acquisitions that occur roughly monthly during the mission, including one exercise echocardiogram. The focus of training is familiarity with the study protocol and remote guidance procedures. On-orbit, real-time guidance of in-flight acquisitions is provided by a sonographer in the Telescience Center of Mission Control. Physician investigators with remote access are able to relay comments on image optimization to the sonographer. Live video feed is relayed from the ISS to the ground via the Tracking and Data Relay Satellite System with a 2-second transmission delay. The expert sonographer uses these images along with two-way audio to provide instructions and feedback. Images are stored in non-compressed DICOM format for asynchronous relay to the ground for subsequent off-line analysis. Results: Since June 2009, a total of 19 resting echocardiograms and 4 exercise studies have been performed in-flight. Average acquisition time has been 45 minutes, reflecting 26,000 km of ISS travel per study. Image quality has been adequate in all studies, but remote guidance has proven imperative for fine-tuning imaging and prioritizing views when communication outages limit the study duration. Typical resting studies have included 12 video loops and 21 still-frame images requiring 750 MB of storage.
Conclusions: Despite limited crew training, remote guidance allows research-quality echocardiography to be performed by non-experts aboard the ISS. Analysis is underway and additional subjects are being recruited to define the impact of microgravity on cardiac structure and systolic and diastolic function.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen works on the recently acquired Contraves-Goerz Kineto Tracking Mount (KTM). Trailer-mounted with a center console/seat and electric drive tracking mount, the KTM includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff. There are 10 KTMs certified for use on the Eastern Range.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen checks out one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
Free-flying teleoperator requirements and conceptual design.
NASA Technical Reports Server (NTRS)
Onega, G. T.; Clingman, J. H.
1973-01-01
A teleoperator, as defined by NASA, is a remotely controlled cybernetic man-machine system designed to augment and extend man's sensory, manipulative, and cognitive capabilities. Teleoperator systems can fulfill an important function in the Space Shuttle program. They can retrieve automated satellites for refurbishment and reuse. Cargo can be transferred over short or long distances, and orbital operations can be supported. A requirements analysis is discussed, giving attention to the teleoperator spacecraft, docking and stowage systems, displays and controls, propulsion, guidance, navigation, control, the manipulators, the video system, the electrical power, and aspects of communication and data management. Questions of concept definition and evaluation are also examined.
Wherton, Joseph; Vijayaraghavan, Shanti; Morris, Joanne; Bhattacharya, Satya; Hanson, Philippa; Campbell-Richards, Desirée; Ramoutar, Seendy; Collard, Anna; Hodkinson, Isabel
2018-01-01
Background There is much interest in virtual consultations using video technology. Randomized controlled trials have shown video consultations to be acceptable, safe, and effective in selected conditions and circumstances. However, this model has rarely been mainstreamed and sustained in real-world settings. Objective The study sought to (1) define good practice and inform implementation of video outpatient consultations and (2) generate transferable knowledge about challenges to scaling up and routinizing this service model. Methods A multilevel, mixed-method study of Skype video consultations (micro level) was embedded in an organizational case study (meso level), taking account of national context and wider influences (macro level). The study followed the introduction of video outpatient consultations in three clinical services (diabetes, diabetes antenatal, and cancer surgery) in a National Health Service trust (covering three hospitals) in London, United Kingdom. Data sources included 36 national-level stakeholders (exploratory and semistructured interviews), longitudinal organizational ethnography (300 hours of observations; 24 staff interviews), 30 videotaped remote consultations, 17 audiotaped face-to-face consultations, and national and local documents. Qualitative data, analyzed using sociotechnical change theories, addressed staff and patient experience and organizational and system drivers. Quantitative data, analyzed via descriptive statistics, included uptake of video consultations by staff and patients and microcategorization of different kinds of talk (using the Roter interaction analysis system). Results When clinical, technical, and practical preconditions were met, video consultations appeared safe and were popular with some patients and staff. 
Compared with face-to-face consultations for similar conditions, video consultations were very slightly shorter, patients did slightly more talking, and both parties sometimes needed to make explicit things that typically remained implicit in a traditional encounter. Video consultations appeared to work better when the clinician and patient already knew and trusted each other. Some clinicians used Skype adaptively to respond to patient requests for ad hoc encounters in a way that appeared to strengthen supported self-management. The reality of establishing video outpatient services in a busy and financially stretched acute hospital setting proved more complex and time-consuming than originally anticipated. By the end of this study, between 2% and 22% of consultations were being undertaken remotely by participating clinicians. In the remainder, clinicians chose not to participate, or video consultations were considered impractical, technically unachievable, or clinically inadvisable. Technical challenges were typically minor but potentially prohibitive. Conclusions Video outpatient consultations appear safe, effective, and convenient for patients in situations where participating clinicians judge them clinically appropriate, but such situations are a fraction of the overall clinic workload. As with other technological innovations, some clinicians will adopt readily, whereas others will need incentives and support. There are complex challenges to embedding video consultation services within routine practice in organizations that are hesitant to change, especially in times of austerity. PMID:29625956
A Remote-Control Airship for Coastal and Environmental Research
NASA Astrophysics Data System (ADS)
Puleo, J. A.; O'Neal, M. A.; McKenna, T. E.; White, T.
2008-12-01
The University of Delaware recently acquired an 18 m (60 ft) remote-control airship capable of carrying a 36 kg (120 lb) scientific payload for coastal and environmental research. By combining the benefits of tethered balloons (stable dwell time) and powered aircraft (ability to navigate), the platform allows for high-resolution data collection in both time and space. The platform was developed by Galaxy Blimps, LLC of Dallas, TX for collecting high-definition video of sporting events. The airship can fly to altitudes of at least 600 m (2000 ft), reaching speeds between zero and 18 m/s (35 knots) in winds up to 13 m/s (25 knots). Using a hand-held console and radio transmitter, a ground-based operator can manipulate the orientation and throttle of two gasoline engines, and the orientation of four fins. Airship location is delivered to the operator through a data downlink from an onboard altimeter and global positioning system (GPS) receiver. Scientific payloads are easily attached to a rail system on the underside of the blimp. Data collection can be automated (fixed time intervals) or triggered by a second operator using a second hand-held console. Data can be stored onboard or transmitted in real-time to a ground-based computer. The first science mission (Fall 2008) is designed to collect images of tidal inundation of a salt marsh to support numerical modeling of water quality in the Murderkill River Estuary in Kent County, Delaware (a tributary of Delaware Bay in the USA Mid-Atlantic region). Time sequenced imagery will be collected by a ten-megapixel camera and a thermal-infrared imager mounted in separate remote-control, gyro-stabilized camera mounts on the blimp. Live video feeds will be transmitted to the instrument operator on the ground. Resulting time series data will ultimately be used to compare/update independent estimates of inundation based on LiDAR elevations and a suite of tide and temperature gauges.
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
Current plans indicate that there will be a large number of life science experiments carried out during the thirty-year mission of the Biological Flight Research Laboratory (BFRL) on board Space Station Freedom (SSF). Non-human life science experiments will be performed in the BFRL. Two distinct types of activities have already been identified for this facility: (1) collect, store, distribute, analyze and manage engineering and science data from the Habitats, Glovebox and Centrifuge, and (2) perform a broad range of remote science activities in the Glovebox and Habitat chambers in conjunction with the remotely located principal investigator (PI). These activities require extensive video coverage, viewing and/or recording, and distribution to video displays on board SSF and to the ground. This paper concentrates mainly on the second type of activity. Each of the two BFRL habitat racks is designed to be configurable for either six rodent habitats per rack, four plant habitats per rack, or a combination of the two. Two video cameras will be installed in each habitat, with a spare attachment for a third camera when needed. Therefore, a video system that can accommodate up to 12-18 camera inputs per habitat rack must be considered.
High efficiency video coding for ultrasound video communication in m-health systems.
Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G
2012-01-01
Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that a 50% reduction in bitrate requirements is possible for equivalent clinical quality.
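For context on the resolutions named above, the raw (uncompressed) YUV 4:2:0 bitrates can be computed directly. The 30 fps figure and the helper names here are illustrative assumptions; only the 50% saving is the paper's headline result.

```python
# Raw YUV 4:2:0 bitrates for the three standard test resolutions, plus
# a helper expressing the headline "50% bitrate saving" comparison.
# Frame rate (30 fps) is an assumed value for illustration.

RES = {"QCIF": (176, 144), "CIF": (352, 288), "4CIF": (704, 576)}

def raw_kbps(w, h, fps=30, bits_per_pixel=12):
    """Uncompressed bitrate in kbit/s; YUV 4:2:0 carries 12 bits/pixel."""
    return w * h * bits_per_pixel * fps / 1000.0

def saving_pct(ref_kbps, test_kbps):
    """Percent bitrate saved by the test codec at equivalent quality."""
    return 100.0 * (ref_kbps - test_kbps) / ref_kbps

for name, (w, h) in RES.items():
    print(f"{name}: {raw_kbps(w, h):,.0f} kbit/s raw")
# An HEVC stream at half the H.264/AVC bitrate saves 50%:
print(saving_pct(1000.0, 500.0))
```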
ISS Expedition 42 Time Lapse Video of Earth
2015-05-18
This time lapse video taken during ISS Expedition 42 is assembled from JSC still photo collection (still photos iss042e218184 - iss042e219070). Shows night time views over Egypt, Sinai, Saudi Arabia, Jordan and Israel. Space Station Remote Manipulator System (SSRMS) or Canadarm in foreground.
Development of a telepresence robot for medical consultation
NASA Astrophysics Data System (ADS)
Bugtai, Nilo T.; Ong, Aira Patrice R.; Angeles, Patrick Bryan C.; Cervera, John Keen P.; Ganzon, Rachel Ann E.; Villanueva, Carlos A. G.; Maniquis, Samuel Nazirite F.
2017-02-01
There are numerous efforts to add value to telehealth applications in the country. In this study, we propose the design of a telepresence robot to facilitate remote medical consultations in the wards of the Philippine General Hospital. The design covers a robot capable of supporting a medical consultation with clear audio and video at both ends. It also gives the operating doctor full control of the telepresence robot through a user-friendly interface. The results show that the telepresence robot provides a stable and reliable mobile medical service.
Pelletier, Dominique; Leleu, Kévin; Mallet, Delphine; Mou-Tham, Gérard; Hervé, Gilles; Boureau, Matthieu; Guilpart, Nicolas
2012-01-01
Observing spatial and temporal variations of marine biodiversity with non-destructive techniques is central to understanding ecosystem resilience and to monitoring and assessing conservation strategies, e.g. Marine Protected Areas. Observations are generally obtained through Underwater Visual Censuses (UVC) conducted by divers. The problems inherent to the presence of divers have been discussed in several papers. Video techniques are increasingly used for observing underwater macrofauna and habitat, and most video techniques that do not require a diver use baited remote systems. In this paper, we present an original video technique that relies on an unbaited, remotely operated rotating system carrying a high-definition camera. The system is set on the sea floor to record images, which are then analysed at the office to quantify biotic and abiotic sea bottom cover and to identify and count fish species and other species such as marine turtles. The technique was extensively tested in a highly diversified coral reef ecosystem in the South Lagoon of New Caledonia, based on a protocol covering both protected and unprotected areas in the major lagoon habitats. The technique enabled the detection and identification of a large number of species, in particular fished species, which were not disturbed by the system. Habitat could easily be investigated through the images, and a large number of observations could be carried out per day at sea. This study showed the strong potential of this non-obtrusive technique for observing both macrofauna and habitat. It offers a unique spatial coverage and can be implemented at sea at a reasonable cost by non-expert staff. As such, this technique is particularly interesting for investigating and monitoring coastal biodiversity in the light of current conservation challenges and increasing monitoring needs.
Use of an UROV to develop 3-D optical models of submarine environments
NASA Astrophysics Data System (ADS)
Null, W. D.; Landry, B. J.
2017-12-01
The ability to rapidly obtain high-fidelity bathymetry is crucial for a broad range of engineering, scientific, and defense applications, ranging from bridge scour, bedform morphodynamics, and coral reef health to unexploded ordnance detection and monitoring. The present work introduces the use of an Underwater Remotely Operated Vehicle (UROV) to develop 3-D optical models of submarine environments. The UROV used a Raspberry Pi camera mounted on a small servo, which allowed for pitch control. Prior to video data collection, in situ camera calibration was conducted with the system. Multiple image frames were extracted from the underwater video for 3-D reconstruction using Structure from Motion (SfM). This system provides a simple and cost-effective solution for obtaining detailed bathymetry in optically clear submarine environments.
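Frame selection for Structure from Motion is typically driven by image overlap between consecutive shots. The sketch below is an assumed workflow, not the authors' code: it picks evenly spaced frame indices for a UROV moving at constant speed over a roughly flat bed, where the vehicle speed, camera footprint, and overlap target are all hypothetical parameters.

```python
# Assumed SfM preprocessing step: choose which video frames to extract
# so that consecutive images keep roughly `overlap` fractional overlap.
# Speed, footprint, and overlap values are illustrative, not measured.

def sfm_frame_indices(n_frames, fps, vehicle_speed_m_s, footprint_m,
                      overlap=0.7):
    """Evenly spaced frame indices for a vehicle at constant speed.

    Each new shot should advance the camera footprint by
    footprint * (1 - overlap) metres along track.
    """
    advance_m = footprint_m * (1.0 - overlap)        # ground gap per shot
    step = max(1, int(round(advance_m / vehicle_speed_m_s * fps)))
    return list(range(0, n_frames, step))

# 30 s of 30 fps video, 0.25 m/s cruise, 1 m footprint, 70% overlap:
idx = sfm_frame_indices(n_frames=900, fps=30, vehicle_speed_m_s=0.25,
                        footprint_m=1.0, overlap=0.7)
```

The extracted frames would then be fed to an SfM pipeline together with the in situ calibration parameters mentioned above.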
Ames life science telescience testbed evaluation
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Johnson, Vicki; Vogelsong, Kristofer H.; Froloff, Walt
1989-01-01
Eight surrogate spaceflight mission specialists participated in a real-time evaluation of remote coaching using the Ames Life Science Telescience Testbed facility. This facility consisted of three remotely located nodes: (1) a prototype Space Station glovebox; (2) a ground control station; and (3) a principal investigator's (PI) work area. The major objective of this project was to evaluate the effectiveness of telescience techniques and hardware to support three realistic remote coaching science procedures: plant seed germinator charging, plant sample acquisition and preservation, and remote plant observation with ground coaching. Each scenario was performed by a subject acting as flight mission specialist, interacting with a payload operations manager and a principal investigator expert. All three groups were physically isolated from each other yet linked by duplex audio and color video communication channels and networked computer workstations. Workload ratings were made by the flight and ground crewpersons immediately after completing their assigned tasks. Time to complete each scientific procedural step was recorded automatically. Two expert observers also made performance ratings and various error assessments. The results are presented and discussed.
A Wide-field Camera and Fully Remote Operations at the Wyoming Infrared Observatory
NASA Astrophysics Data System (ADS)
Findlay, Joseph R.; Kobulnicky, Henry A.; Weger, James S.; Bucher, Gerald A.; Perry, Marvin C.; Myers, Adam D.; Pierce, Michael J.; Vogel, Conrad
2016-11-01
Upgrades at the 2.3 meter Wyoming Infrared Observatory telescope have provided the capability for fully remote operations by a single operator from the University of Wyoming campus. A line-of-sight 300 Megabit s⁻¹ 11 GHz radio link provides high-speed internet for data transfer and remote operations that include several realtime video feeds. Uninterruptible power is ensured by a 10 kVA battery supply for critical systems and a 55 kW autostart diesel generator capable of running the entire observatory for up to a week. The construction of a new four-element prime-focus corrector with fused-silica elements allows imaging over a 40′ field of view with a new 4096 × 4096 pixel UV-sensitive prime-focus camera and filter wheel. A new telescope control system facilitates the remote operations model and provides 20″ rms pointing over the usable sky. Taken together, these improvements pave the way for a new generation of sky surveys supporting space-based missions and flexible-cadence observations advancing emerging astrophysical priorities such as planet detection, quasar variability, and long-term time-domain campaigns.
Radar Remote Sensing of Waves and Currents in the Nearshore Zone
2006-01-01
and application of novel microwave, acoustic, and optical remote sensing techniques. The objectives of this effort are to determine the extent to which ... Doppler radar techniques are useful for nearshore remote sensing applications. Of particular interest are estimates of surf zone location and extent ... surface currents, waves, and bathymetry. To date, optical (video) techniques have been the primary remote sensing technology used for these applications. A key advantage of the radar is its all-weather, day-night operability.
Achieving an Optimal Medium Altitude UAV Force Balance in Support of COIN Operations
2009-02-02
and execute operations. UAS with common data links and remote video terminals (RVTs) provide input to the common operational picture (COP) and ... full-motion video (FMV) is intuitive to many tactical warfighters who have used similar sensors in manned aircraft. Modern data links allow the video ... Document (AFDD) 2-9, Intelligence, Surveillance, and Reconnaissance Operations, 17 July 2007. Baldor, Lolita C. "Increased UAV reliance evident in
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
2015-09-30
metrics for key age/sex classes: 1) width profiles for adult females, specifically comparing those with (lactating) and without dependent young ... hexacopter using remote controls at a height of ~100 ft, aided by live video output from the hexacopter that will be monitored on a portable ground unit ... Blainville's beaked whales can be readily assigned to age/sex classes from photographs of dentition and scarring (Claridge 2013), enabling us to link
NASA Technical Reports Server (NTRS)
Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.
1990-01-01
Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.
NASA Technical Reports Server (NTRS)
Johnston, James C.; Rosenthal, Bruce N.; Bonner, Mary Jo; Hahn, Richard C.; Herbach, Bruce
1989-01-01
A series of ground-based telepresence experiments have been performed to determine the minimum video frame rate and resolution required for the successful performance of materials science experiments in space. The approach used is to simulate transmission between earth and space station with transmission between laboratories on earth. The experiments include isothermal dendrite growth, physical vapor transport, and glass melting. Modifications of existing apparatus, software developed, and the establishment of an in-house network are reviewed.
The Light Microscopy Module: An On-Orbit Multi-User Microscope Facility
NASA Technical Reports Server (NTRS)
Motil, Susan M.; Snead, John H.
2002-01-01
The Light Microscopy Module (LMM) is planned as a remotely controllable on-orbit microscope subrack facility, allowing flexible scheduling and operation of fluids and biology experiments within the Fluids and Combustion Facility (FCF) Fluids Integrated Rack (FIR) on the International Space Station (ISS). The LMM will be the first integrated payload with the FIR to conduct four fluid physics experiments. A description of the LMM diagnostic capabilities, including video microscopy, interferometry, laser tweezers, confocal, and spectrophotometry, will be provided.
Robot-assisted home hazard assessment for fall prevention: a feasibility study.
Sadasivam, Rajani S; Luger, Tana M; Coley, Heather L; Taylor, Benjamin B; Padir, Taskin; Ritchie, Christine S; Houston, Thomas K
2014-01-01
We examined the feasibility of using a remotely manoeuvrable robot to make home hazard assessments for fall prevention. We employed use-case simulations to compare robot assessments with in-person assessments. We screened the homes of nine elderly patients (aged 65 years or more) for fall risks using the HEROS screening assessment. We also assessed the participants' perspectives of the remotely-operated robot in a survey. The nine patients had a median Short Blessed Test score of 8 (interquartile range, IQR 2-20) and a median Life-Space Assessment score of 46 (IQR 27-75). Compared to the in-person assessment (mean = 4.2 hazards identified per participant), significantly more home hazards were perceived in the robot video assessment (mean = 7.0). Only two checklist items (adequate bedroom lighting and a clear path from bed to bathroom) had more than 60% agreement between in-person and robot video assessment. Participants were enthusiastic about the robot and did not think it violated their privacy. The study found little agreement between the in-person and robot video hazard assessments. However, it identified several research questions about how to best use remotely-operated robots.
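Item-level agreement between the in-person and robot video assessments (reported as exceeding 60% for only two checklist items) is a simple proportion. A sketch with hypothetical data, not the study's records:

```python
def percent_agreement(in_person, robot):
    """Percent of participants for whom the two methods agree on one
    checklist item (e.g. 'adequate bedroom lighting').

    `in_person` and `robot` are parallel lists of booleans (hazard present
    or absent); the data passed in are hypothetical examples.
    """
    if len(in_person) != len(robot):
        raise ValueError("assessments must cover the same participants")
    agree = sum(a == b for a, b in zip(in_person, robot))
    return 100.0 * agree / len(in_person)
```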
Telehealth: voice therapy using telecommunications technology.
Mashima, Pauline A; Birkmire-Peters, Deborah P; Syms, Mark J; Holtel, Michael R; Burgess, Lawrence P A; Peters, Leslie J
2003-11-01
Telehealth offers the potential to meet the needs of underserved populations in remote regions. The purpose of this study was a proof-of-concept to determine whether voice therapy can be delivered effectively remotely. Treatment outcomes were evaluated for a vocal rehabilitation protocol delivered under 2 conditions: with the patient and clinician interacting within the same room (conventional group) and with the patient and clinician in separate rooms, interacting in real time via a hard-wired video camera and monitor (video teleconference group). Seventy-two patients with voice disorders served as participants. Based on evaluation by otolaryngologists, 31 participants were diagnosed with vocal nodules, 29 were diagnosed with edema, 9 were diagnosed with unilateral vocal fold paralysis, and 3 presented with vocal hyperfunction with no laryngeal pathology. Fifty-one participants (71%) completed the vocal rehabilitation protocol. Outcome measures included perceptual judgments of voice quality, acoustic analyses of voice, patient satisfaction ratings, and fiber-optic laryngoscopy. There were no differences in outcome measures between the conventional group and the remote video teleconference group. Participants in both groups showed positive changes on all outcome measures after completing the vocal rehabilitation protocol. Reasons for participants discontinuing therapy prematurely provided support for the telehealth model of service delivery.
The remote characterization of vegetation using Unmanned Aerial Vehicle photography
USDA-ARS?s Scientific Manuscript database
Unmanned Aerial Vehicles (UAVs) can fly in place of piloted aircraft to gather remote sensing information on vegetation characteristics. The type of sensors flown depends on the instrument payload capacity available, so that, depending on the specific UAV, it is possible to obtain video, aerial phot...
Determining the Discharge Rate from a Submerged Oil Leak using ROV Video and CFD Study
NASA Astrophysics Data System (ADS)
Saha, Pankaj; Shaffer, Frank; Shahnam, Mehrdad; Savas, Omer; Devites, Dave; Steffeck, Timothy
2016-11-01
The current paper reports a technique to measure the discharge rate by analyzing the video from a Remotely Operated Vehicle (ROV). The technique uses instantaneous images from ROV video to measure the velocity of visible features (turbulent eddies) along the boundary of an oil leak jet, and the classical theory of turbulent jets is then applied to determine the discharge rate. The Flow Rate Technical Group (FRTG) Plume Team developed this technique, manually tracking the visible features, and produced the first accurate government estimates of the oil discharge rate from the Deepwater Horizon (DWH). For practical application, this approach needs to be automated. Experiments were conducted at UC Berkeley and OHMSETT that recorded high speed, high resolution video of submerged dye-colored water or oil jets, and the velocity fields were then measured using LDA and PIV software. Numerical simulations of the experimental submerged turbulent oil jets were carried out in the OpenFOAM solver using an LES turbulence closure and a VOF interface-capturing technique. The CFD results captured the jet spreading angle and jet structures in close agreement with the experimental observations. The work was funded by NETL and the DOI Bureau of Safety and Environmental Enforcement (BSEE).
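The measurement idea, inferring the mean flow from the convection speed of visible boundary eddies and then applying turbulent jet theory, reduces to simple arithmetic once an eddy-to-mean velocity ratio is adopted. A hedged sketch follows; the 0.5 ratio and all inputs are illustrative assumptions, not the FRTG team's values.

```python
import math

def jet_discharge_rate(eddy_velocity_mps, pipe_diameter_m, eddy_to_mean=0.5):
    """Estimate volumetric discharge (m^3/s) from the speed of visible
    boundary eddies near the jet exit.

    Boundary eddies convect slower than the cross-section mean velocity;
    the ratio used here (0.5) is an illustrative assumption only.
    """
    mean_velocity = eddy_velocity_mps / eddy_to_mean   # recover mean from eddy speed
    area = math.pi * pipe_diameter_m ** 2 / 4.0        # exit cross-section
    return mean_velocity * area
```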
Use of a UAV-mounted video camera to assess feeding behavior of Raramuri Criollo cows
USDA-ARS?s Scientific Manuscript database
Interest in use of unmanned aerial vehicles in science has increased in recent years. It is predicted that they will be a preferred remote sensing platform for applications that inform sustainable rangeland management in the future. The objective of this study was to determine whether UAV video moni...
Augmenting the access grid using augmented reality
NASA Astrophysics Data System (ADS)
Li, Ying
2012-01-01
The Access Grid (AG) targets an advanced collaboration environment with which multi-party groups of people at remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only pure video for remote communication, while most AG users expect to collaboratively reference and manipulate the 3D geometric models of grid-service results within the live videos of an AG session. Augmented Reality (AR) can overcome these deficiencies through its characteristic combination of the virtual and the real, real-time interaction, and 3D registration, so it is natural for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR capability, encapsulated in the node service infrastructure and named the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid-service results and the real video scene of the AG into one AR environment, giving distributed AG users the opportunity to participate interactively and collaboratively in the AR environment with a better experience.
Design and implementation of H.264 based embedded video coding technology
NASA Astrophysics Data System (ADS)
Mao, Jian; Liu, Jinming; Zhang, Jiemin
2016-03-01
In this paper, an embedded system for remote online video monitoring was designed and developed to capture and record real-time conditions in an elevator. To improve the efficiency of video acquisition and processing, the system uses the Samsung S5PV210 chip, which integrates a graphics processing unit, as its core processor, and the video is encoded in H.264 format for efficient storage and transmission. Based on the S5PV210 chip, hardware video coding was investigated, which is more efficient than software coding. Running tests proved that hardware video coding can markedly reduce system cost and produce a smoother video display. The system can be widely applied for security supervision [1].
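The motivation for H.264 encoding in such a system is the gap between raw and compressed bitrates. A back-of-the-envelope sketch; the frame size, frame rate, and target bitrate below are illustrative, since the abstract gives no numbers:

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed video bitrate in Mbit/s (YUV 4:2:0 uses 12 bits/pixel)."""
    return width * height * fps * bits_per_pixel / 1e6

def h264_compression_ratio(width, height, fps, h264_mbps):
    """Ratio of the raw YUV 4:2:0 bitrate to an H.264 stream's bitrate.

    `h264_mbps` is whatever rate the encoder is configured for; the
    elevator system's actual rate is not reported in the abstract.
    """
    return raw_bitrate_mbps(width, height, fps) / h264_mbps
```

Even modest VGA video at 25 fps is over 90 Mbit/s raw, which is why hardware-assisted compression matters on an embedded board.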
NASA Astrophysics Data System (ADS)
Sun, Hong; Wu, Qian-zhong
2013-09-01
To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established, and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-compensation and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) display the servo motor state in real time: the module gathers video signals and sends them to an upper computer, which shows the motor running state in a Visual Basic 6.0 window. A detailed analysis of the main error sources is also given; quantifying the errors contributed by bandwidth and by the gyro sensor makes each error's share of the total more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
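The gyro error modeling described above combines an autoregressive noise model with Kalman filtering. A minimal scalar sketch, assuming an AR(1) error model and illustrative noise variances; the paper fits its AR model to measured gyro data, which this toy does not reproduce:

```python
def kalman_ar1(measurements, phi=0.95, q=1e-4, r=1e-2):
    """Scalar Kalman filter with an AR(1) model of gyro random drift.

    State model:  x_k = phi * x_{k-1} + w_k,  Var(w) = q
    Measurement:  z_k = x_k + v_k,            Var(v) = r
    Parameter values are illustrative, not fitted to real gyro data.
    """
    x, p = 0.0, 1.0          # state estimate and its variance
    filtered = []
    for z in measurements:
        # predict step (propagate AR(1) dynamics)
        x = phi * x
        p = phi * phi * p + q
        # update step (blend in the new measurement)
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        filtered.append(x)
    return filtered
```

With a small process variance the filter strongly attenuates the high-frequency component of the gyro noise.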
Using collaborative technologies in remote lab delivery systems for topics in automation
NASA Astrophysics Data System (ADS)
Ashby, Joe E.
Lab exercises are a pedagogically essential component of engineering and technology education. Distance education remote labs are being developed which enable students to access lab facilities via the Internet. Collaboration, students working in teams, enhances learning activity through the development of communication skills, sharing observations and problem solving. Web meeting communication tools are currently used in remote labs. The problem identified for investigation was that no standards of practice or paradigms exist to guide remote lab designers in the selection of collaboration tools that best support learning achievement. The goal of this work was to add to the body of knowledge involving the selection and use of remote lab collaboration tools. Experimental research was conducted in which the participants were randomly assigned to three communication treatments and learning achievement was measured via assessments at the completion of each of six remote-lab-based lessons. Quantitative instruments for assessing learning achievement were implemented, along with a survey to correlate user preference with collaboration treatments. A total of 53 undergraduate technology students worked in two-person teams, where each team was assigned one of the treatments, namely (a) text messaging chat, (b) voice chat, or (c) webcam video with voice chat. Each had little experience with the subject matter involving automation, but possessed the necessary technical background. Analysis of the assessment score data included mean and standard deviation, confirmation of the homogeneity of variance, a one-way ANOVA test and post hoc comparisons. The quantitative and qualitative data indicated that text messaging chat negatively impacted learning achievement and that text messaging chat was not preferred. The data also suggested that the subjects were equally divided on preference for voice chat versus webcam video with voice chat.
Toward the design of collaborative communication tools for remote labs involving automation equipment, the results of this work point to making voice chat the default method of communication; the webcam video with voice chat option should also be included. Standards are only beginning to be developed for the design of remote lab systems. Research, design, and innovation involving collaboration and presence should continue.
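The analysis pipeline above, a one-way ANOVA across the three communication treatments, is a standard textbook computation. A self-contained sketch with hypothetical scores rather than the study's data:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k independent groups (lists of scores).

    F = (SS_between / df_between) / (SS_within / df_within).
    The data supplied by callers here are hypothetical; the study's
    assessment scores are not reproduced in the abstract.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Variation of scores around their own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```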
Development and preliminary validation of an interactive remote physical therapy system.
Mishra, Anup K; Skubic, Marjorie; Abbott, Carmen
2015-01-01
In this paper, we present an interactive physical therapy system (IPTS) for remote quantitative assessment of clients in the home. The system consists of two different interactive interfaces connected through a network, for a real-time low latency video conference using audio, video, skeletal, and depth data streams from a Microsoft Kinect. To test the potential of IPTS, experiments were conducted with 5 independent living senior subjects in Kansas City, MO. Also, experiments were conducted in the lab to validate the real-time biomechanical measures calculated using the skeletal data from the Microsoft Xbox 360 Kinect and Microsoft Xbox One Kinect, with ground truth data from a Vicon motion capture system. Good agreements were found in the validation tests. The results show potential capabilities of the IPTS system to provide remote physical therapy to clients, especially older adults, who may find it difficult to visit the clinic.
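Validation against the Vicon system amounts to comparing paired measurements. A generic sketch of one common agreement metric; the paper reports "good agreement" without publishing numbers, so any data passed in are hypothetical:

```python
def rmse(kinect, vicon):
    """Root-mean-square error between Kinect-derived measures and Vicon
    ground truth, given parallel lists of paired samples (hypothetical)."""
    if len(kinect) != len(vicon):
        raise ValueError("need paired samples of equal length")
    return (sum((a - b) ** 2 for a, b in zip(kinect, vicon))
            / len(kinect)) ** 0.5
```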
Collaborative Information Technologies
NASA Astrophysics Data System (ADS)
Meyer, William; Casper, Thomas
1999-11-01
Significant effort has been expended to provide infrastructure and to facilitate remote collaboration within and beyond the fusion community. Through the Office of Fusion Energy Science Information Technology Initiative, communication technologies utilized by the fusion community are being improved. The initial thrust of the initiative has been collaborative seminars and meetings. Under the initiative, 23 sites, both laboratory and university, were provided with the hardware required to remotely view, or project, documents being presented. The hardware is capable of delivering documents to a web browser, or to compatible hardware, over ESNET in an access-controlled manner. Documents can also originate from virtually any of the collaborating sites. In addition, RealNetwork servers are being tested to provide audio and/or video in a non-interactive environment, with MBONE providing two-way interaction where needed. Additional effort is directed at remote distributed computing, file systems, security, and standard data storage and retrieval methods. This work was supported by DoE contract No. W-7405-ENG-48.
WWWinda Orchestrator: a mechanism for coordinating distributed flocks of Java Applets
NASA Astrophysics Data System (ADS)
Gutfreund, Yechezkal-Shimon; Nicol, John R.
1997-01-01
The WWWinda Orchestrator is a simple but powerful tool for coordinating distributed Java applets. Loosely derived from the Linda programming language developed by David Gelernter and Nicholas Carriero of Yale, WWWinda implements a distributed shared object space called TupleSpace, where applets can post, read, or permanently store arbitrary Java objects. In this manner, applets can easily share information without being aware of the underlying communication mechanisms. Coordination events can be posted to the WWWinda TupleSpace and used to orchestrate the actions of remote applets. The technology combines several functions in one simple metaphor: distributed web objects, remote messaging between applets, distributed synchronization mechanisms, an object-oriented database, and a distributed event-signaling mechanism. WWWinda can be used as a platform for implementing shared VRML environments, shared groupware environments, control of remote devices such as cameras, distributed karaoke, distributed gaming, and shared audio and video experiences.
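The TupleSpace abstraction described above can be illustrated with a minimal, single-process sketch. The real WWWinda system is distributed and stores Java objects; this toy ignores networking and persistence entirely and exists only to show the post/read/take coordination pattern.

```python
class TupleSpace:
    """Minimal in-memory sketch of a Linda-style tuple space:
    producers post tuples; consumers read or take them by pattern,
    where None acts as a wildcard field."""

    def __init__(self):
        self._tuples = []

    def post(self, tup):
        """Add a tuple to the space."""
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def read(self, pattern):
        """Return the first matching tuple without removing it."""
        for t in self._tuples:
            if self._match(t, pattern):
                return t
        return None

    def take(self, pattern):
        """Remove and return the first matching tuple."""
        t = self.read(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t
```

A remote-camera applet might post `("camera", "pan", 30)` while a controller applet blocks on `take(("camera", None, None))`; neither needs to know where the other runs.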
NASA Technical Reports Server (NTRS)
Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt
2013-01-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats on tiled display walls of any size, providing a common framework that enables its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
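The tiled-wall arithmetic above is easy to check: a 3 x 4 array of 1920 x 1080 panels totals roughly 25 megapixels whichever way the array is oriented. A minimal sketch; the panel dimensions come from the abstract, while the column/row orientation is an assumption, so both readings are computed:

```python
def wall_resolution(cols, rows, panel_w=1920, panel_h=1080):
    """Total pixel dimensions (width, height) of a tiled display wall
    built from identical panels, ignoring bezel gaps."""
    return cols * panel_w, rows * panel_h
```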
NASA Astrophysics Data System (ADS)
Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.
2013-12-01
The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin-bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 mini DisplayPort connections. Six mini DisplayPort-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats on tiled display walls of any size, providing a common framework that enables its users to access, display, and share a variety of data-intensive information.
This information can be digital-cinema animations, high-resolution images, high-definition video-teleconferences, presentation slides, documents, spreadsheets or laptop screens. SAGE is cross-platform, community-driven, open-source visualization and collaboration middleware that utilizes shared national and international cyberinfrastructure for the advancement of scientific research and education.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen makes adjustments on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Worthington (left) and Kenny Allen work on one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Kenny Allen stands in the center console area of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric-drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operator Rick Wetherington sits in the center console seat of one of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with an electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
2004-05-19
KENNEDY SPACE CENTER, FLA. -- Johnson Controls operators Rick Wetherington (left) and Kenny Allen work on two of the recently acquired Contraves-Goerz Kineto Tracking Mounts (KTM). There are 10 KTMs certified for use on the Eastern Range. The KTM, which is trailer-mounted with a center console/seat and electric drive tracking mount, includes a two-camera, camera control unit that will be used during launches. The KTM is designed for remotely controlled operations and offers a combination of film, shuttered and high-speed digital video, and FLIR cameras configured with 20-inch to 150-inch focal length lenses. The KTMs are generally placed in the field and checked out the day before a launch and manned 3 hours prior to liftoff.
Transcontinental anaesthesia: a pilot study.
Hemmerling, T M; Arbeid, E; Wehbe, M; Cyr, S; Giunta, F; Zaouter, C
2013-05-01
Although telemedicine is one of the key initiatives of the World Health Organization, no study has explored the feasibility and efficacy of teleanaesthesia. This bi-centre pilot study investigates the feasibility of transcontinental anaesthesia. Twenty patients aged ≥ 18 yr undergoing elective thyroid surgery for ≥ 30 min were enrolled in this study. The remote and local set-up was composed of a master-computer (Montreal) and a slave-computer (Pisa). Standard Internet connection, remote desktop control, and video conference software were used. All patients received total i.v. anaesthesia controlled remotely (Montreal). The main outcomes were feasibility, clinical performance, and controller performance of transcontinental anaesthesia. The clinical performance of hypnosis control was the efficacy in maintaining bispectral index (BIS) at 45: 'excellent', 'good', 'poor', and 'inadequate' control represented BIS values within 10, from 11 to 20, from 21 to 30, or >30% from target. The clinical performance of analgesia was the efficacy in maintaining Analgoscore values at 0 (scale -9 to +9), with -3 to +3 representing 'excellent' pain control, -3 to -6 and +3 to +6 representing 'good' pain control, and -6 to -9 and +6 to +9 representing 'insufficient' pain control. The controller performance was evaluated using Varvel parameters. Transcontinental anaesthesia was successful in all 20 consecutive patients. The clinical performance of hypnosis showed 'excellent and good' control for 69% of maintenance time, and the controller performance showed an average global performance index of 57. The clinical performance of analgesia was 'excellent and good' for 92% of maintenance time, and the controller performance showed a global performance index of 1118. Transcontinental anaesthesia is feasible; control of anaesthesia shows good performance indexes. Clinical trial registration number NCT01331096.
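The study's banding of hypnosis control by percent deviation of BIS from the target of 45 can be expressed directly. A small sketch of that classification; the band edges follow the abstract, while the tie handling at exact boundaries is an assumption:

```python
def hypnosis_control_quality(bis, target=45.0):
    """Grade hypnosis control by percent deviation of BIS from target:
    within 10% 'excellent', 11-20% 'good', 21-30% 'poor', >30% 'inadequate'.
    Boundary values are assigned to the better band (an assumption)."""
    deviation_pct = abs(bis - target) / target * 100.0
    if deviation_pct <= 10:
        return "excellent"
    if deviation_pct <= 20:
        return "good"
    if deviation_pct <= 30:
        return "poor"
    return "inadequate"
```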
Davis, Matthew Christopher; Can, Dang D; Pindrik, Jonathan; Rocque, Brandon G; Johnston, James M
2016-02-01
Technology allowing a remote, experienced surgeon to provide real-time guidance to local surgeons has great potential for training and capacity building in medical centers worldwide. Virtual interactive presence and augmented reality (VIPAR), an iPad-based tool, allows surgeons to provide long-distance, virtual assistance wherever a wireless internet connection is available. Local and remote surgeons view a composite image of video feeds at each station, allowing for intraoperative telecollaboration in real time. Local and remote stations were established in Ho Chi Minh City, Vietnam, and Birmingham, Alabama, as part of ongoing neurosurgical collaboration. Endoscopic third ventriculostomy with choroid plexus coagulation with VIPAR was used for subjective and objective evaluation of system performance. VIPAR allowed both surgeons to engage in complex visual and verbal communication during the procedure. Analysis of 5 video clips revealed video delay of 237 milliseconds (range, 93-391 milliseconds) relative to the audio signal. Excellent image resolution allowed the remote neurosurgeon to visualize all critical anatomy. The remote neurosurgeon could gesture to structures with no detectable difference in accuracy between stations, allowing for submillimeter precision. Fifteen endoscopic third ventriculostomy with choroid plexus coagulation procedures have been performed with the use of VIPAR between Vietnam and the United States, with no significant complications. 80% of these patients remain shunt-free. Evolving technologies that allow long-distance, intraoperative guidance, and knowledge transfer hold great potential for highly efficient international neurosurgical education. VIPAR is one example of an inexpensive, scalable platform for increasing global neurosurgical capacity. Efforts to create a network of Vietnamese neurosurgeons who use VIPAR for collaboration are underway. Copyright © 2016 Elsevier Inc. All rights reserved.
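The reported audio-video delay (mean 237 ms across 5 clips) can be estimated from a recording by finding the shift that best aligns a shared event, such as a clap, in both streams. The sketch below is a simplified stand-in, not the authors' measurement method: it cross-correlates two discretized event trains, and all names and values are illustrative.

```python
def av_offset_ms(audio, video, sample_interval_ms):
    """Estimate how far the video stream lags the audio stream, in ms.

    `audio` and `video` are equal-length lists of event intensities
    sampled every `sample_interval_ms`. The lag maximizing the
    cross-correlation (video shifted later than audio) is returned.
    """
    n = len(audio)
    best_lag, best_score = 0, float("-inf")
    for lag in range(n):
        # correlate audio against video delayed by `lag` samples
        score = sum(audio[i] * video[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag * sample_interval_ms
```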
Quantifying Northern Goshawk diets using remote cameras and observations from blinds
Rogers, A.S.; DeStefano, S.; Ingraldi, M.F.
2005-01-01
Raptor diet is most commonly measured indirectly, by analyzing castings and prey remains, or directly, by observing prey deliveries from blinds. Indirect methods are time consuming, and there is evidence to suggest they may overestimate certain prey taxa within raptor diets. Remote video surveillance systems have been developed to aid in monitoring and data collection, but their use in field situations can be challenging and is often untested. To investigate diet and prey delivery rates of Northern Goshawks (Accipiter gentilis), we operated 10 remote camera systems at occupied nests during the breeding seasons of 1999 and 2000 in east-central Arizona. We collected 2458 hr of usable video and successfully identified 627 (93%) prey items at least to class (Aves, Mammalia, or Reptilia). Of prey items identified to genus, we identified 344 (81%) mammals, 62 (15%) birds, and 16 (4%) reptiles. During camera operation, we also conducted observations from blinds at a subset of five nests to compare the relative efficiency and precision of both methods. Limited observations from blinds yielded fewer prey deliveries and, therefore, lower delivery rates (0.16 items/hr) than simultaneous video footage (0.28 items/hr). Observations from blinds also resulted in fewer prey identified to the genus and species levels when compared with data collected by remote cameras. Cameras provided a detailed and close view of nests, allowed for simultaneous recording at multiple nests, decreased observer bias and fatigue, and provided a permanent archive of data. © 2005 The Raptor Research Foundation, Inc.
Telestroke in Northern Alberta: a two year experience with remote hospitals.
Khan, Khurshid; Shuaib, Ashfaq; Whittaker, Tammy; Saqqur, Maher; Jeerakathil, Thomas; Butcher, Ken; Crumley, Patrick
2010-11-01
Thrombolysis in acute ischemic stroke is usually performed in comprehensive stroke centres. Lack of stroke expertise in remote small hospitals may preclude thrombolysis. Telemedicine allows such management opportunities in distant hospitals. We report our experience in managing acute stroke with telestroke over a two-year period. The University of Alberta Hospital acted as the 'hub' and seven remote hospitals as 'spokes'. The neurologist at the 'hub' provided stroke expertise to the local physician using either a two-way video link or the telephone. Cranial CT scans were transmitted to the 'hub'. Education sessions were held before the initiation of the program. Of 210 patients, 44 (21%) received thrombolysis at the 'spoke' sites. In 34/44 (77%) a two-way video link was available, while in 10/44 (23%) the telephone was used. Five (11.4%) patients experienced intracranial hemorrhage after thrombolysis; 2 (4.5%) were symptomatic. Favorable (mRS=0-1) outcome at three months was seen in 16/40 (40%) and mortality was 9/40 (22.5%). Four patients were lost to follow-up. There was no significant difference in three-month outcome between two-way video link and telephone consultation (P = 0.689). Over two years the number of acute stroke transfers from one of the 'spoke' sites decreased from 144 to 15, a 92.5% decline. It is possible to successfully treat patients with acute ischemic stroke at remote sites through videoconferencing or telephone consultation. Telestroke can also lead to a significant reduction in the number of patients requiring transfer to a tertiary care centre.
Greenhalgh, Trisha; Shaw, Sara; Wherton, Joseph; Vijayaraghavan, Shanti; Morris, Joanne; Bhattacharya, Satya; Hanson, Philippa; Campbell-Richards, Desirée; Ramoutar, Seendy; Collard, Anna; Hodkinson, Isabel
2018-04-17
There is much interest in virtual consultations using video technology. Randomized controlled trials have shown video consultations to be acceptable, safe, and effective in selected conditions and circumstances. However, this model has rarely been mainstreamed and sustained in real-world settings. The study sought to (1) define good practice and inform implementation of video outpatient consultations and (2) generate transferable knowledge about challenges to scaling up and routinizing this service model. A multilevel, mixed-method study of Skype video consultations (micro level) was embedded in an organizational case study (meso level), taking account of national context and wider influences (macro level). The study followed the introduction of video outpatient consultations in three clinical services (diabetes, diabetes antenatal, and cancer surgery) in a National Health Service trust (covering three hospitals) in London, United Kingdom. Data sources included 36 national-level stakeholders (exploratory and semistructured interviews), longitudinal organizational ethnography (300 hours of observations; 24 staff interviews), 30 videotaped remote consultations, 17 audiotaped face-to-face consultations, and national and local documents. Qualitative data, analyzed using sociotechnical change theories, addressed staff and patient experience and organizational and system drivers. Quantitative data, analyzed via descriptive statistics, included uptake of video consultations by staff and patients and microcategorization of different kinds of talk (using the Roter interaction analysis system). When clinical, technical, and practical preconditions were met, video consultations appeared safe and were popular with some patients and staff. 
Compared with face-to-face consultations for similar conditions, video consultations were very slightly shorter, patients did slightly more talking, and both parties sometimes needed to make explicit things that typically remained implicit in a traditional encounter. Video consultations appeared to work better when the clinician and patient already knew and trusted each other. Some clinicians used Skype adaptively to respond to patient requests for ad hoc encounters in a way that appeared to strengthen supported self-management. The reality of establishing video outpatient services in a busy and financially stretched acute hospital setting proved more complex and time-consuming than originally anticipated. By the end of this study, between 2% and 22% of consultations were being undertaken remotely by participating clinicians. In the remainder, clinicians chose not to participate, or video consultations were considered impractical, technically unachievable, or clinically inadvisable. Technical challenges were typically minor but potentially prohibitive. Video outpatient consultations appear safe, effective, and convenient for patients in situations where participating clinicians judge them clinically appropriate, but such situations are a fraction of the overall clinic workload. As with other technological innovations, some clinicians will adopt readily, whereas others will need incentives and support. There are complex challenges to embedding video consultation services within routine practice in organizations that are hesitant to change, especially in times of austerity. ©Trisha Greenhalgh, Sara Shaw, Joseph Wherton, Shanti Vijayaraghavan, Joanne Morris, Satya Bhattacharya, Philippa Hanson, Desirée Campbell-Richards, Seendy Ramoutar, Anna Collard, Isabel Hodkinson. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.04.2018.
ERIC Educational Resources Information Center
El Shamy, Usama; Abdoun, Tarek; McMartin, Flora; Pando, Miguel A.
2013-01-01
We report the results of a pilot study aimed at developing, implementing, and assessing an educational module that integrates remote major research instrumentation into undergraduate classes. Specifically, this study employs Internet Web-based technologies to allow for real-time video monitoring and execution of cutting-edge experiments. The…
Engineering Education Using a Remote Laboratory through the Internet
ERIC Educational Resources Information Center
Axaopoulos, Petros J.; Moutsopoulos, Konstantinos N.; Theodoridis, Michael P.
2012-01-01
An experiment using real hardware and under real test conditions can be remotely conducted by engineering students and other interested individuals in the world via the Internet and with the capability of live video streaming from the test site. The presentation of this innovative experiment refers to the determination of the current voltage…
Mobile Augmented Communication for Remote Collaboration in a Physical Work Context
ERIC Educational Resources Information Center
Pejoska-Laajola, Jana; Reponen, Sanna; Virnes, Marjo; Leinonen, Teemu
2017-01-01
Informal learning in a physical work context requires communication and collaboration that build on a common ground and an active awareness of a situation. We explored whether mobile video conversations augmented with on-screen drawing features were beneficial for improving communication and remote collaboration practices in the construction and…
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin to extract the heart rate (HR) from the skin areas. As a non-contact, sensor-free technique, VPPG can potentially provide estimates of a subject's heart rate, respiratory rate, and even heart-rate variability, with applications ranging from infant monitoring to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without the KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion; the location, size, and averaging techniques applied to regions of interest; and the number of video frames used for data processing.
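The frequency-domain step common to VPPG pipelines can be sketched as follows (a minimal illustration, not the specific algorithms evaluated in this study; all function and parameter names are hypothetical): average the green channel over a skin region for each frame, remove the mean, and report the dominant frequency within a plausible heart-rate band.

```python
import numpy as np

def estimate_hr_bpm(frames, fps, roi=None, band=(0.7, 4.0)):
    """Estimate heart rate (beats/min) from RGB video frames of skin.

    frames -- sequence of HxWx3 arrays; roi -- optional (y0, y1, x0, x1)
    slice restricting the skin region; band -- heart-rate band in Hz
    (0.7-4.0 Hz covers roughly 42-240 bpm).
    """
    sig = []
    for frame in frames:
        g = frame[..., 1]          # green channel carries the strongest PPG signal
        if roi is not None:
            y0, y1, x0, x1 = roi
            g = g[y0:y1, x0:x1]
        sig.append(g.mean())       # spatial averaging suppresses pixel noise
    sig = np.asarray(sig, dtype=float)
    sig -= sig.mean()              # drop the DC component before the FFT
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_hz
```

Real pipelines add detrending, band-pass filtering, and (as in this study) face tracking to stabilize the region of interest; note that the analysis window directly limits frequency resolution (fps divided by the number of frames), which is one reason the study found sensitivity to the number of frames processed.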
NASA Technical Reports Server (NTRS)
1975-01-01
A program was conducted which included the design of a set of simplified simulation tasks, design of apparatus and breadboard TV equipment for task performance, and the implementation of a number of simulation tests. Performance measurements were made under controlled conditions and the results analyzed to permit evaluation of the relative merits (effectiveness) of various TV systems. Burden factors were subsequently generated for each TV system to permit tradeoff evaluation of system characteristics against performance. For the general remote operation mission, the 2-view system is recommended. This system is characterized, and the corresponding equipment specifications were generated.
Longo, G O; Floeter, S R
2012-10-01
This study compared remote underwater video and traditional direct diver observations to assess reef fish feeding impact on benthos across multiple functional groups within different trophic categories (e.g. herbivores, zoobenthivores and omnivores) and in two distinct reef systems: a subtropical rocky reef and a tropical coral reef. The two techniques were roughly equivalent, both detecting the species with higher feeding impact and recording similar bite rates, suggesting that reef fish feeding behaviour at the study areas is not strongly affected by the diver's presence. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
Evaluation of line transect sampling based on remotely sensed data from underwater video
Bergstedt, R.A.; Anderson, D.R.
1990-01-01
We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
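The Fourier series estimator referenced here has a standard closed form: with perpendicular detection distances x_i truncated at half-width w, the detection density at zero distance is estimated as f(0) = 1/w + Σ_k a_k, where a_k = (2/(n·w)) Σ_i cos(kπx_i/w), and density is D = n·f(0)/(2L) for total transect length L. A minimal sketch with a fixed number of cosine terms (the study's term selection and interval estimation are omitted):

```python
import math

def fourier_f0(distances, w, m=3):
    """Fourier series estimate of the detection density at distance zero."""
    n = len(distances)
    f0 = 1.0 / w
    for k in range(1, m + 1):
        # Cosine-series coefficient fitted to the observed distances
        a_k = (2.0 / (n * w)) * sum(math.cos(k * math.pi * x / w) for x in distances)
        f0 += a_k
    return f0

def line_transect_density(distances, w, L, m=3):
    """Estimated density of objects per unit area: D = n * f(0) / (2 * L)."""
    return len(distances) * fourier_f0(distances, w, m) / (2.0 * L)
```

With perfect detectability the distances spread uniformly over [0, w], the cosine terms vanish, and f(0) reduces to 1/w, so the estimator falls back to a simple strip count.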
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: Analysis of SSEM Sensor Data Using BEAM; Hairlike Percutaneous Photochemical Sensors; Video Guidance Sensors Using Remotely Activated Targets; Simulating Remote Sensing Systems; EHW Approach to Temperature Compensation of Electronics; Polymorphic Electronic Circuits; Micro-Tubular Fuel Cells; Whispering-Gallery-Mode Tunable Narrow-Band-Pass Filter; PVM Wrapper; Simulation of Hyperspectral Images; Algorithm for Controlling a Centrifugal Compressor; Hybrid Inflatable Pressure Vessel; Double-Acting, Locking Carabiners; Position Sensor Integral with a Linear Actuator; Improved Electromagnetic Brake; Flow Straightener for a Rotating-Drum Liquid Separator; Sensory-Feedback Exoskeletal Arm Controller; Active Suppression of Instabilities in Engine Combustors; Fabrication of Robust, Flat, Thinned, UV-Imaging CCDs; Chemical Thinning Process for Fabricating UV-Imaging CCDs; Pseudoslit Spectrometer; Waste-Heat-Driven Cooling Using Complex Compound Sorbents; Improved Refractometer for Measuring Temperatures of Drops; Semiconductor Lasers Containing Quantum Wells in Junctions; Phytoplankton-Fluorescence-Lifetime Vertical Profiler; Hexagonal Pixels and Indexing Scheme for Binary Images; Finding Minimum-Power Broadcast Trees for Wireless Networks; and Automation of Design Engineering Processes.
Charter for Systems Engineer Working Group
NASA Technical Reports Server (NTRS)
Suffredini, Michael T.; Grissom, Larry
2015-01-01
This charter establishes the International Space Station Program (ISSP) Mobile Servicing System (MSS) Systems Engineering Working Group (SEWG). The MSS SEWG is established to provide a mechanism for systems engineering for the end-to-end MSS function. The MSS end-to-end function includes the Space Station Remote Manipulator System (SSRMS), the Mobile Remote Servicer (MRS) Base System (MBS), the Robotic Work Station (RWS), the Special Purpose Dexterous Manipulator (SPDM), the Video Signal Converters (VSC), the Operations Control Software (OCS), and the Mobile Transporter (MT), as well as the interfaces among these elements, United States On-Orbit Segment (USOS) distributed systems, and other International Space Station elements and payloads (including the Power Data Grapple Fixtures (PDGFs), the MSS Capture Attach System (MCAS), and the Mobile Transporter Capture Latch (MTCL)). This end-to-end function will be supported by the ISS and MSS ground segment facilities. This charter defines the scope and limits of the program authority and document control delegated to the SEWG, and it identifies the panel's core membership and specific operating policies.
Peterson, Courtney M; Apolzan, John W; Wright, Courtney; Martin, Corby K
2016-11-01
We conducted two studies to test the validity, reliability, feasibility and acceptability of using video chat technology to quantify dietary and pill-taking (i.e. supplement and medication) adherence. In study 1, we investigated whether video chat technology can accurately quantify adherence to dietary and pill-taking interventions. Mock study participants ate food items and swallowed pills, while performing randomised scripted 'cheating' behaviours to mimic non-adherence. Monitoring was conducted in a cross-over design, with two monitors watching in-person and two watching remotely by Skype on a smartphone. For study 2, a twenty-two-item online survey was sent to a listserv with more than 20 000 unique email addresses of past and present study participants to assess the feasibility and acceptability of the technology. For the dietary adherence tests, monitors detected 86 % of non-adherent events (sensitivity) in-person v. 78 % of events via video chat monitoring (P=0·12), with comparable inter-rater agreement (0·88 v. 0·85; P=0·62). However, for pill-taking, non-adherence trended towards being more easily detected in-person than by video chat (77 v. 60 %; P=0·08), with non-significantly higher inter-rater agreement (0·85 v. 0·69; P=0·21). Survey results from study 2 (n 1076 respondents; ≥5 % response rate) indicated that 86·4 % of study participants had video chatting hardware, 73·3 % were comfortable using the technology and 79·8 % were willing to use it for clinical research. Given the capability of video chat technology to reduce participant burden and outperform other adherence monitoring methods such as dietary self-report and pill counts, video chatting is a novel and promising platform to quantify dietary and pill-taking adherence.
Shared virtual environments for telerehabilitation.
Popescu, George V; Burdea, Grigore; Boian, Rares
2002-01-01
Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared virtual environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring, the monitoring portal for hand telerehabilitation, was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations.
Communication network for decentralized remote tele-science during the Spacelab mission IML-2
NASA Technical Reports Server (NTRS)
Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco
1994-01-01
The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were the adaptation to the different user needs based on modular service capabilities of IGS and the cost optimization for the connectivity. This was achieved by using a combination of traditional leased lines, satellite based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS allows minimization of staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the concept for communications to support the operation of the COF decentralized scenario.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, S.; Lucero, R.; Glidewell, D.
1997-08-01
The Autoridad Regulataria Nuclear (ARN) and the United States Department of Energy (DOE) are cooperating on the development of a Remote Monitoring System for nuclear nonproliferation efforts. A Remote Monitoring System for spent fuel transfer will be installed at the Argentina Nuclear Power Station in Embalse, Argentina. The system has been designed by Sandia National Laboratories (SNL), with Los Alamos National Laboratory (LANL) and Oak Ridge National Laboratory (ORNL) providing gamma and neutron sensors. This project will test and evaluate the fundamental design and implementation of the Remote Monitoring System in its application to regional and international safeguards efficiency. This paper provides a description of the monitoring system and its functions. The Remote Monitoring System consists of gamma and neutron radiation sensors, RF systems, and video systems integrated into a coherent functioning whole. All sensor data are communicated over an Echelon LonWorks network to a single data logger. The Neumann DCM 14 video module is integrated into the Remote Monitoring System. All sensor and image data are stored on a Data Acquisition System (DAS) and archived and reviewed on a Data and Image Review Station (DIRS). Conventional phone lines are used as the telecommunications link to transmit on-site collected data and images to remote locations. The data and images are authenticated before transmission. Data review stations will be installed at ARN in Buenos Aires, Argentina, ABACC in Rio De Janeiro, IAEA Headquarters in Vienna, and Sandia National Laboratories in Albuquerque, New Mexico. 2 refs., 2 figs.
Experimental Validation: Subscale Aircraft Ground Facilities and Integrated Test Capability
NASA Technical Reports Server (NTRS)
Bailey, Roger M.; Hostetler, Robert W., Jr.; Barnes, Kevin N.; Belcastro, Celeste M.; Belcastro, Christine M.
2005-01-01
Experimental testing is an important aspect of validating complex integrated safety critical aircraft technologies. The Airborne Subscale Transport Aircraft Research (AirSTAR) Testbed is being developed at NASA Langley to validate technologies under conditions that cannot be flight validated with full-scale vehicles. The AirSTAR capability comprises a series of flying sub-scale models, associated ground-support equipment, and a base research station at NASA Langley. The subscale model capability utilizes a generic 5.5% scaled transport class vehicle known as the Generic Transport Model (GTM). The AirSTAR Ground Facilities encompass the hardware and software infrastructure necessary to provide comprehensive support services for the GTM testbed. The ground facilities support remote piloting of the GTM aircraft, and include all subsystems required for data/video telemetry, experimental flight control algorithm implementation and evaluation, GTM simulation, data recording/archiving, and audio communications. The ground facilities include a self-contained, motorized vehicle serving as a mobile research command/operations center, capable of deployment to remote sites when conducting GTM flight experiments. The ground facilities also include a laboratory based at NASA LaRC providing near identical capabilities as the mobile command/operations center, as well as the capability to receive data/video/audio from, and send data/audio to the mobile command/operations center during GTM flight experiments.
NASA Astrophysics Data System (ADS)
Babic, Z.; Pilipovic, R.; Risojevic, V.; Mirjanic, G.
2016-06-01
Honey bees have a crucial role in pollination across the world. This paper presents a simple, non-invasive system for detecting pollen-bearing honey bees in surveillance video obtained at the entrance of a hive. The proposed system can be used as part of a more complex system for tracking and counting honey bees, with remote pollination monitoring as the final goal. The proposed method executes in real time on embedded systems co-located with a hive. Background subtraction, color segmentation, and morphology methods are used for segmentation of honey bees. Classification into two classes, pollen-bearing honey bees and honey bees without a pollen load, is performed using a nearest mean classifier with a simple descriptor consisting of color variance and eccentricity features. On an in-house data set we achieved a correct classification rate of 88.7% with 50 training images per class. We show that the obtained classification results are not far behind the results of state-of-the-art image classification methods. This favors the proposed method, particularly given that real-time video transmission to a remote high-performance computing workstation is still an issue, whereas transferring the obtained parameters of the pollination process is much easier.
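A nearest mean classifier over a two-feature descriptor is only a few lines: store each class's mean feature vector, then assign a new observation to the class with the closest mean. The feature values below are synthetic stand-ins for the (color variance, eccentricity) descriptor, not the paper's data.

```python
import numpy as np

def nearest_mean_fit(X, y):
    """Per-class mean feature vectors -- the entire 'training' step."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    return {c: X[y == c].mean(axis=0) for c in set(y.tolist())}

def nearest_mean_predict(means, x):
    """Label of the class whose mean is nearest in Euclidean distance."""
    x = np.asarray(x, dtype=float)
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Synthetic (color variance, eccentricity) descriptors for the two classes:
means = nearest_mean_fit(
    [[0.10, 0.20], [0.15, 0.25], [0.80, 0.90], [0.85, 0.95]],
    ["pollen", "pollen", "no_pollen", "no_pollen"],
)
```

Its appeal in this setting is that training reduces to computing two mean vectors and prediction to two distance evaluations, which fits comfortably on an embedded board at the hive.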
Using Digital Time-Lapse Videos to Teach Geomorphic Processes to Undergraduates
NASA Astrophysics Data System (ADS)
Clark, D. H.; Linneman, S. R.; Fuller, J.
2004-12-01
We demonstrate the use of relatively low-cost, computer-based digital imagery to create time-lapse videos of two distinct geomorphic processes in order to help students grasp the significance of the rates, styles, and temporal dependence of geologic phenomena. Student interviews indicate that such videos help them to understand the relationship between processes and landform development. Time-lapse videos have been used extensively in some sciences (e.g., biology - http://sbcf.iu.edu/goodpract/hangarter.html, meteorology - http://www.apple.com/education/hed/aua0101s/meteor/, chemistry - http://www.chem.yorku.ca/profs/hempsted/chemed/home.html) to demonstrate gradual processes that are difficult for many students to visualize. Most geologic processes are slower still, and are consequently even more difficult for students to grasp, yet time-lapse videos are rarely used in earth science classrooms. The advent of inexpensive web-cams and computers provides a new means to explore the temporal dimension of earth surface processes. To test the use of time-lapse videos in geoscience education, we are developing time-lapse movies that record the evolution of two landforms: a stream-table delta and a large, natural, active landslide. The former involves well-known processes in a controlled, repeatable laboratory experiment, whereas the latter tracks the developing dynamics of an otherwise poorly understood slope failure. The stream-table delta is small and grows in ca. 2 days; we capture a frame on an overhead web-cam every 3 minutes. Before seeing the video, students are asked to hypothesize how the delta will grow through time. The final time-lapse video, ca. 20-80 MB, elegantly shows channel migration, progradation rates, and formation of major geomorphic elements (topset, foreset, bottomset beds). The web-cam can also be "zoomed-in" to show smaller-scale processes, such as bedload transfer, and foreset slumping. 
Post-lab tests and interviews with students indicate that these time-lapse videos significantly improve student interest in the material, and comprehension of the processes. In contrast, the natural landslide is relatively unconstrained, and its processes of movement, both gradual and catastrophic, are essentially impossible to observe directly without the aid of time-lapse imagery. We are constructing a remote digital camera, mounted in a tree, which will capture 1-2 photos/day of the toe. The toe is extremely active geomorphically, and the time-lapse movie should help us (and the students) to constrain the style, frequency, and rates of movement, surface slumping, and debris-flow generation. Because we have also installed a remote weather station on the landslide, we will be able to test the links between these processes and local climate conditions.
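The capture budget behind these movies is worth making explicit: a frame every 3 minutes over a roughly two-day delta run, played back at normal video rate, compresses the experiment into well under a minute. A small helper illustrates the arithmetic (names are illustrative):

```python
def timelapse_summary(capture_seconds, interval_seconds, playback_fps):
    """Frames captured and resulting playback duration in seconds."""
    frames = capture_seconds // interval_seconds
    return frames, frames / playback_fps

# Two days of delta growth, one frame every 3 minutes, played back at 30 fps:
frames, playback = timelapse_summary(2 * 24 * 3600, 180, 30.0)
```

That is 960 frames playing back in 32 seconds, about a 5400x speed-up over real time; the landslide camera at 1-2 photos/day trades temporal resolution for a capture window of months.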
Basics of robotics and manipulators in endoscopic surgery.
Rininsland, H H
1993-06-01
The experience with sophisticated remote handling systems for nuclear operations in inaccessible rooms can to a large extent be transferred to the development of robotics and telemanipulators for endoscopic surgery. A telemanipulator system is described consisting of a manipulator, end-effector and tools, a 3-D video endoscope, sensors, an intelligent control system, modeling and graphic simulation, and man-machine interfaces as the main components or subsystems. Such a telemanipulator seems to be medically worthwhile and technically feasible, but requires substantial effort from different scientific disciplines to become a safe and reliable instrument for future endoscopic surgery.
NIF ICCS network design and loading analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tietbohl, G; Bryant, R
The National Ignition Facility (NIF) is housed within a large facility about the size of two football fields. The Integrated Computer Control System (ICCS) is distributed throughout this facility and requires the integration of about 40,000 control points and over 500 video sources. This integration is provided by approximately 700 control computers distributed throughout the NIF facility and a network that provides the communication infrastructure. A main control room houses a set of seven computer consoles providing operator access and control of the various distributed front-end processors (FEPs). There are also remote workstations distributed within the facility that provide operator console functions while personnel are testing and troubleshooting throughout the facility. The operator workstations communicate with the FEPs, which implement the localized control and monitoring functions. There are different types of FEPs for the various subsystems being controlled. This report describes the design of the NIF ICCS network and how it meets the expected traffic loads and the requirements of the Sub-System Design Requirements (SSDRs). This document supersedes the earlier reports entitled Analysis of the National Ignition Facility Network, dated November 6, 1996, and The National Ignition Facility Digital Video and Control Network, dated July 9, 1996. For an overview of the ICCS, refer to the document NIF Integrated Computer Controls System Description (NIF-3738).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagen Schempf; Daphne D'Zurko
Under funding from the Department of Energy (DOE) and the Northeast Gas Association (NGA), Carnegie Mellon University (CMU) developed an untethered, wireless, remote-controlled inspection robot dubbed Explorer. The project entailed the design and prototyping of a wireless, self-powered video-inspection robot capable of accessing live 6- and 8-inch diameter cast-iron and steel mains, while traversing turns, Ts, and elbows under real-time control with live video feedback to an operator. The design is a segmented, actively articulated, wheel-leg-powered robot with fisheye imaging capability, self-powered battery storage, and a wireless real-time communication link. The prototype was functionally tested in an above-ground pipe network in order to debug all mechanical, electrical, and software subsystems, and to develop the necessary deployment, retrieval, and obstacle-handling scripts. A pressurized natural gas test section was used to certify it for operation in natural gas at up to 60 psig. Two subsequent live-main field trials, in both cast-iron and steel pipe, demonstrated its ability to be safely launched, operated, and retrieved under real-world conditions. The system's ability to safely and repeatably exit and recover from angled and vertical launchers, traverse multi-thousand-foot pipe sections, and make T and varied-angle elbow turns, all while wirelessly sending live video and handling command-and-control messages, was clearly demonstrated. Video inspection was clearly shown to be a viable tool for understanding the state of this critical buried infrastructure, irrespective of low-pressure (cast-iron) or high-pressure (steel) conditions. This report covers the different aspects of specifications, requirements, design, prototyping, integration, testing, and field-trialing of the Explorer platform.
NASA Astrophysics Data System (ADS)
Hashimoto, Sayuri; Munakata, Tsunestugu; Hashimoto, Nobuyuki; Okunaka, Jyunzo; Koga, Tatsuzo
2006-01-01
Our research showed that a high degree of life-stress has a negative mental health effect that may interrupt regular exercise. We used an Internet-based, remotely conducted, face-to-face preventive counseling program using video monitors to reduce the sources of life-stress that interrupt regular exercise, and evaluated the preventive effects of the program in elderly people. NTSC video signals were converted to the IP protocol and facial images were transmitted to a PC display using the exclusive optical network lines of JGN2. Participants were 22 elderly people in Hokkaido, Japan, who regularly played table tennis. A survey was conducted before the intervention in August 2003. IT remote counseling was conducted on two occasions, for one hour on each occasion. A post-intervention survey was conducted in February 2004 and a follow-up survey was conducted in March 2005. Network quality was satisfactory, with little data loss and high display quality. Results indicated that after the intervention self-esteem increased significantly, trait anxiety decreased significantly, cognition of emotional support by people other than family members tended to increase, and sources of stress tended to decrease. Follow-up results indicated that cognition of emotional support by family increased significantly, and interpersonal dependency decreased significantly, compared to before the intervention. These results suggest that face-to-face IT remote counseling using video monitors is useful for keeping elderly people from feeling anxious and for giving them the confidence to continue exercising regularly. Moreover, it has a stress management effect.
Handschu, René; Littmann, Rebekka; Reulbach, Udo; Gaul, Charly; Heckmann, Josef G; Neundörfer, Bernhard; Scibor, Mateusz
2003-12-01
In acute stroke care, rapid but careful evaluation of patients is mandatory but requires an experienced stroke neurologist. Telemedicine offers the possibility of bringing such expertise quickly to more patients. This study tested for the first time whether remote video examination is feasible and reliable when applied in emergency stroke care using the National Institutes of Health Stroke Scale (NIHSS). We used a novel multimedia telesupport system for transfer of real-time video sequences and audio data. The remote examiner could direct the set-top camera and zoom from distant overviews to close-ups from the personal computer in his office. Acute stroke patients admitted to our stroke unit were examined on admission in the emergency room. Standardized examination was performed by use of the NIHSS (German version) via telemedicine and compared with bedside application. In this pilot study, 41 patients were examined. Total examination time was 11.4 minutes on average (range, 8 to 18 minutes). None of the examinations had to be stopped or interrupted for technical reasons, although minor problems (brightness, audio quality) with influence on the examination process occurred in 2 sessions. Unweighted kappa coefficients ranged from 0.44 to 0.89; weighted kappa coefficients, from 0.85 to 0.99. Remote examination of acute stroke patients with a computer-based telesupport system is feasible and reliable when applied in the emergency room; interrater agreement was good to excellent in all items. For more widespread use, some problems that emerge from details like brightness, optimal camera position, and audio quality should be solved.
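The kappa coefficients reported above summarize chance-corrected agreement between the remote and bedside raters. As a minimal sketch (the paired scores below are invented for illustration, not the study's data), unweighted Cohen's kappa for a single NIHSS item can be computed from the observed and chance-expected agreement:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters' scores on one item."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of exact agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical paired scores (telemedicine vs. bedside) for one item:
tele    = [0, 1, 1, 2, 0, 0, 1, 2, 2, 0]
bedside = [0, 1, 2, 2, 0, 0, 1, 2, 1, 0]
print(round(cohens_kappa(tele, bedside), 2))  # → 0.7
```

Weighted kappa, also reported in the study, additionally credits near-misses by discounting disagreements in proportion to their distance, which is why the weighted range (0.85 to 0.99) sits above the unweighted one.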
Simple video format for mobile applications
NASA Astrophysics Data System (ADS)
Smith, John R.; Miao, Zhourong; Li, Chung-Sheng
2000-04-01
With the advent of pervasive computing, there is a growing demand for enabling multimedia applications on mobile devices. Large numbers of pervasive computing devices, such as personal digital assistants (PDAs), hand-held computers (HHCs), smart phones, portable audio players, automotive computing devices, and wearable computers are gaining access to online information sources. However, pervasive computing devices are often constrained along a number of dimensions, such as processing power, local storage, display size and depth, connectivity, and communication bandwidth, which makes it difficult to access rich image and video content. In this paper, we report on our initial efforts in designing a simple scalable video format with low decoding and transcoding complexity for pervasive computing. The goal is to enable image and video access for mobile applications such as electronic catalog shopping, video conferencing, remote surveillance, and video mail using pervasive computing devices.
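The abstract does not specify the format's layer structure, but the idea of serving devices constrained in display size and bandwidth can be sketched as server-side layer selection. The layer table and limits below are hypothetical:

```python
# Hypothetical scalability layers: spatial resolution plus bit rate.
LAYERS = [
    {"name": "base",   "width": 176, "height": 144, "kbps": 64},
    {"name": "medium", "width": 352, "height": 288, "kbps": 256},
    {"name": "full",   "width": 704, "height": 576, "kbps": 1024},
]

def select_layer(display_width, bandwidth_kbps):
    """Pick the richest layer the device's display and link can handle;
    fall back to the base layer if nothing fits."""
    feasible = [layer for layer in LAYERS
                if layer["width"] <= display_width
                and layer["kbps"] <= bandwidth_kbps]
    return feasible[-1] if feasible else LAYERS[0]

# A PDA with a 320-pixel-wide screen on a 128 kbit/s link gets the base layer:
print(select_layer(320, 128)["name"])  # → base
```

Because the decision uses only the layer table, the transcoding cost on the server reduces to dropping layers rather than re-encoding, which is the low-complexity property the paper aims for.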
Multiple-Agent Air/Ground Autonomous Exploration Systems
NASA Technical Reports Server (NTRS)
Fink, Wolfgang; Chao, Tien-Hsin; Tarbell, Mark; Dohm, James M.
2007-01-01
Autonomous systems of multiple-agent air/ground robotic units for exploration of the surfaces of remote planets are undergoing development. Modified versions of these systems could be used on Earth to perform tasks in environments dangerous or inaccessible to humans: examples of tasks could include scientific exploration of remote regions of Antarctica, removal of land mines, cleanup of hazardous chemicals, and military reconnaissance. A basic system according to this concept (see figure) would include a unit, suspended by a balloon or a blimp, that would be in radio communication with multiple robotic ground vehicles (rovers) equipped with video cameras and possibly other sensors for scientific exploration. The airborne unit would be free-floating, controlled by thrusters, or tethered either to one of the rovers or to a stationary object in or on the ground. Each rover would contain a semi-autonomous control system for maneuvering and would function under the supervision of a control system in the airborne unit. The rover maneuvering control system would utilize imagery from the onboard camera to navigate around obstacles. Avoidance of obstacles would also be aided by readout from an onboard (e.g., ultrasonic) sensor. Together, the rover and airborne control systems would constitute an overarching closed-loop control system to coordinate scientific exploration by the rovers.
Teleradiology Via The Naval Remote Medical Diagnosis System (RMDS)
NASA Astrophysics Data System (ADS)
Rasmussen, Will; Stevens, Ilya; Gerber, F. H.; Kuhlman, Jayne A.
1982-01-01
Testing was conducted to obtain qualitative and quantitative (statistical) data on radiology performance using the Remote Medical Diagnosis System (RMDS) Advanced Development Models (ADMs). Based upon data collected during testing with professional radiologists, this analysis addresses the clinical utility of radiographic images transferred through six possible RMDS transmission modes. These radiographs were also viewed under closed-circuit television (CCTV) and lightbox conditions to provide a basis for comparison. The analysis indicates that the RMDS ADM terminals (with a system video resolution of 525 x 256 x 6) would provide satisfactory radiographic images for radiology consultations in emergency cases with gross pathological disorders. However, in cases involving more subtle findings, a system video resolution of 525 x 512 x 8 would be preferable.
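The two resolutions can be compared by raw (uncompressed) bits per frame, counting lines x pixels per line x bits per pixel:

```python
def raw_bits(lines, pixels_per_line, bits_per_pixel):
    """Raw bits in one uncompressed frame at the given video resolution."""
    return lines * pixels_per_line * bits_per_pixel

low  = raw_bits(525, 256, 6)   # RMDS ADM terminals
high = raw_bits(525, 512, 8)   # preferred for subtle findings
print(low, high, round(high / low, 2))  # → 806400 2150400 2.67
```

The preferred 525 x 512 x 8 format carries roughly 2.7 times the raw data per frame of the ADM format, which is consistent with its advantage for subtler findings at the cost of transmission time.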
Plugin free remote visualization in the browser
NASA Astrophysics Data System (ADS)
Tamm, Georg; Slusallek, Philipp
2015-01-01
Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state-of-the-art of technologies which enable plugin free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) conform Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.
Automated Video Quality Assessment for Deep-Sea Video
NASA Astrophysics Data System (ADS)
Pirenne, B.; Hoeberechts, M.; Kalmbach, A.; Sadhu, T.; Branzan Albu, A.; Glotin, H.; Jeffries, M. A.; Bui, A. O. V.
2015-12-01
Video provides a rich source of data for geophysical analysis, often supplying detailed information about the environment when other instruments may not. This is especially true of deep-sea environments, where direct visual observations cannot be made. As computer vision techniques improve and volumes of video data increase, automated video analysis is emerging as a practical alternative to labor-intensive manual analysis. Automated techniques can be much more sensitive to video quality than their manual counterparts, so performing quality assessment before doing full analysis is critical to producing valid results. Ocean Networks Canada (ONC), an initiative of the University of Victoria, operates cabled ocean observatories that supply continuous power and Internet connectivity to a broad suite of subsea instruments from the coast to the deep sea, including video and still cameras. This network of ocean observatories has produced almost 20,000 hours of video (about 38 hours are recorded each day) and an additional 8,000 hours of logs from remotely operated vehicle (ROV) dives. We begin by surveying some ways in which deep-sea video poses challenges for automated analysis, including: 1. Non-uniform lighting: single, directional light sources produce uneven luminance distributions and shadows; remotely operated lighting equipment is also susceptible to technical failures. 2. Particulate noise: turbidity and marine snow are often present in underwater video; particles in the water column can have sharper focus and higher contrast than the objects of interest due to their proximity to the light source, and can also influence the camera's autofocus and auto white-balance routines. 3. Color distortion (low contrast): the rate of absorption of light in water varies by wavelength, and is higher overall than in air, altering apparent colors and lowering the contrast of objects at a distance. We also describe measures under development at ONC for detecting and mitigating these effects. These steps include filtering out unusable data, color and luminance balancing, and choosing the most appropriate image descriptors. We apply these techniques to generate automated quality assessment of video data and illustrate their utility with an example application where we perform vision-based substrate classification.
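As an illustration of the kind of screening such quality assessment involves (the metrics here are generic assumptions, not ONC's published pipeline), simple per-frame statistics can flag non-uniform lighting and low contrast before a frame is passed to full analysis:

```python
def frame_stats(frame):
    """Mean luminance, RMS contrast, and a simple non-uniformity score
    (spread between brightest and darkest quadrant means) for a
    grayscale frame given as a 2-D list of 0-255 values."""
    h, w = len(frame), len(frame[0])
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    rms_contrast = (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5
    quad_means = []
    for r0, r1 in ((0, h // 2), (h // 2, h)):
        for c0, c1 in ((0, w // 2), (w // 2, w)):
            quad = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            quad_means.append(sum(quad) / len(quad))
    return mean, rms_contrast, max(quad_means) - min(quad_means)

# A tiny frame lit from one corner: large quadrant spread flags the
# single directional light source described above.
frame = [[200, 180], [120, 60]]
mean, contrast, nonuniformity = frame_stats(frame)
print(nonuniformity)  # → 140.0
```

Frames whose non-uniformity or contrast scores fall outside chosen thresholds would be filtered out or corrected (e.g., luminance balanced) before substrate classification.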
ERIC Educational Resources Information Center
Koeber, Charles; Wright, David W.
2008-01-01
This study uses a quasi-experiment to evaluate the effectiveness of Internet videoconferencing technology. The instructor used a laptop, webcam, high-speed DSL connection, and Polycom[TM] Viewstation to teach a course unit of introductory sociology from a remote location to an experimental group of students in a large multimedia classroom. The…
2010-03-01
… piece of tissue. Full Mobility Manipulator Robot: The primary challenge with the design of a full mobility robot is meeting the competing design … streamed through an embedded plug-in for VLC player using asf/wmv encoding with 200 ms buffering. A benchtop test of the remote user interface was … encountered in ensuring quality video is being made available to the surgeon. A significant challenge has been to consistently provide high quality video …
ERIC Educational Resources Information Center
Abdous, M'hammed; Yoshimura, Miki
2010-01-01
This study examined the final grade and satisfaction level differences among students taking specific courses using three different methods: face-to-face in class, via satellite broadcasting at remote sites, and via live video-streaming at home or at work. In each case, the same course was taught by the same instructor in all three delivery…
Design of a monitor and simulation terminal (master) for space station telerobotics and telescience
NASA Technical Reports Server (NTRS)
Lopez, L.; Konkel, C.; Harmon, P.; King, S.
1989-01-01
Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.
Teleneurosonology: a novel application of transcranial and carotid ultrasound.
Rubin, Mark N; Barrett, Kevin M; Freeman, W David; Lee Iannotti, Joyce K; Channer, Dwight D; Rabinstein, Alejandro A; Demaerschalk, Bart M
2015-03-01
To demonstrate the technical feasibility of interfacing transcranial Doppler (TCD) and carotid "duplex" ultrasonography (CUS) peripherals with telemedicine end points to provide real-time spectral waveform and duplex imaging data for remote review and interpretation. We performed remote TCD and CUS examinations on a healthy, volunteer employee from our institution without known cerebrovascular disease. The telemedicine end point was stationed in our institution's hospital where the neurosonology examinations took place and the control station was in a dedicated telemedicine room in a separate building. The examinations were performed by a postgraduate level neurohospitalist trainee (M.N.R.) and interpreted by an attending vascular neurologist, both with experience in the performance and interpretation of TCD and CUS. Spectral waveform and duplex ultrasound data were successfully transmitted from TCD and CUS instruments through a telemedicine end point to a remote reviewer at a control station. Image quality was preserved in all cases, and technical failures were not encountered. This proof-of-concept study demonstrates the technical feasibility of interfacing TCD and CUS peripherals with a telemedicine end point to provide real-time spectral waveform and duplex imaging data for remote review and interpretation. Medical diagnostic and telemedicine devices should be equipped with interfaces that allow simple transmission of high-quality audio and video information from the medical devices to the telemedicine technology. Further study is encouraged to determine the clinical impact of teleneurosonology. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Video Guidance Sensor and Time-of-Flight Rangefinder
NASA Technical Reports Server (NTRS)
Bryan, Thomas; Howard, Richard; Bell, Joseph L.; Roe, Fred D.; Book, Michael L.
2007-01-01
A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target. As described in several previous NASA Tech Briefs articles, an AVGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In a prior AVGS system of the type upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diode(s) on the tracking vehicle, a video camera on the tracking vehicle acquires images of the targets in the reflected laser light, the video images are digitized, and the image data are processed to obtain the direction to the target. The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior VGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images. 
The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode, and would govern operation in the range-finding mode.
Kampik, Timotheus; Larsen, Frank; Bellika, Johan Gustav
2015-01-01
The objective of the study was to identify experiences and attitudes of German and Norwegian general practitioners (GPs) towards Internet-based remote consultation solutions supporting communication between GPs and patients in the context of the German and Norwegian healthcare systems. Interviews with four German and five Norwegian GPs were conducted. The results were qualitatively analyzed. All interviewed GPs stated they would like to make use of Internet-based remote consultations in the future. Current experiences with remote consultations are existent to a limited degree. No GP reported to use a comprehensive remote consultation solution. The main features GPs would like to see in a remote consultation solution include asynchronous exchange of text messages, video conferencing with text chat, scheduling of remote consultation appointments, secure login and data transfer and the integration of the remote consultation solution into the GP's EHR system.
Remote Sensing Characteristics of Wave Breaking Rollers
NASA Astrophysics Data System (ADS)
Haller, M. C.; Catalan, P.
2006-12-01
The wave roller has a primary influence on the balances of mass and momentum in the surf zone (e.g. Svendsen, 1984; Dally and Brown, 1995; Ruessink et al., 2001). In addition, the roller area and its angle of inclination on the wave front are important quantities governing the dissipation rates in breaking waves (e.g. Madsen et al., 1997). Yet, there have been very few published measurements of individual breaking wave roller geometries in shallow water. A number of investigators have focused on observations of the initial jet-like motion at the onset of breaking, before the establishment of the wave roller (e.g. Basco, 1985; Jansen, 1986), while Govender et al. (2002) provide observations of wave roller vertical cross-sections and angles of inclination for a pair of laboratory wave conditions. Nonetheless, presently very little is known about the growth, evolution, and decay of this aerated region of white water as it propagates through the surf zone, mostly due to the inherent difficulties in making the relevant observations. The present work is focused on analyzing observations of the time and space scales of individual shallow water breaking wave rollers as derived from remote sensing systems. Using a high-resolution video system in a large-scale laboratory facility, we have obtained detailed measurements of the growth and evolution of the wave breaking roller. In addition, by synchronizing the remote video with in-situ wave gages, we are able to directly relate the video intensity signal to the underlying wave shape. Results indicate that the horizontal length scale of breaking wave rollers differs significantly from the previous observations of Duncan (1981), which have been a traditional basis for roller model parameterizations. The overall approach to the video analysis is new in the sense that we concentrate on individual breaking waves, as opposed to the more commonly used time-exposure technique.
In addition, a new parameter of interest, denoted Imax, is introduced based on the envelope of the intensity signal. The parameter is shown to be much less sensitive to trailing wave breaking foam, which typically corrupts time-exposure data. In the present work this parameter is shown to provide high-resolution information regarding the onset of wave breaking and the spatial evolution of the wave roller. Ongoing work will attempt to relate the shoreward transformation of the intensity maximum and the geometric characteristics of the wave roller to the spatial distribution of wave breaking dissipation. Finally, we will compare wave breaking characteristics as imaged by two separate remote sensors. Synoptic images from both video and microwave radar remote sensors were obtained in September of 2005 at Duck, NC. This combination of the two observing systems will allow direct quantitative comparisons between the two imaging mechanisms and lead to a better understanding of the strengths and weaknesses of both for nearshore research and observational remote sensing.
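The abstract introduces Imax as a parameter based on the envelope of the video intensity signal; the exact envelope computation is not given, so the sketch below approximates it as the per-position maximum over a video "timestack" (an assumption for illustration):

```python
def imax_profile(timestack):
    """Cross-shore Imax profile from a video timestack, where
    timestack[t][x] is pixel intensity at time step t, position x.
    The intensity envelope is approximated here by the per-position
    maximum over time; the paper's envelope method may differ."""
    positions = range(len(timestack[0]))
    return [max(frame[x] for frame in timestack) for x in positions]

# Synthetic timestack: a bright roller passes near x = 2-3, so the
# Imax profile rises sharply there, marking the onset of breaking.
stack = [
    [10, 12, 40, 35, 20],
    [11, 13, 90, 70, 30],
    [10, 12, 60, 80, 45],
]
print(imax_profile(stack))  # → [11, 13, 90, 80, 45]
```

Because each position keeps only its envelope peak, slowly decaying trailing foam (which raises the mean intensity in time-exposure images) contributes little, matching the insensitivity to residual foam described above.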
Abrahamsen, Håkon B
2015-06-10
Major incidents are complex, dynamic and bewildering task environments characterised by simultaneous, rapidly changing events, uncertainty and ill-structured problems. Efficient management, communication, decision-making and allocation of scarce medical resources at the chaotic scene of a major incident is challenging and often relies on sparse information and data. Communication and information sharing is primarily voice-to-voice through phone or radio on specified radio frequencies. Visual cues are abundant and difficult to communicate between teams and team members that are not co-located. The aim was to assess the concept and feasibility of using a remotely piloted aircraft (RPA) system to support remote sensing in simulated major incident exercises. We carried out an experimental, pilot feasibility study. A custom-made, remotely controlled, multirotor unmanned aerial vehicle with vertical take-off and landing was equipped with digital colour- and thermal imaging cameras, a laser beam, a mechanical gripper arm and an avalanche transceiver. We collected data in five simulated exercises: 1) mass casualty traffic accident, 2) mountain rescue, 3) avalanche with buried victims, 4) fisherman through thin ice and 5) search for casualties in the dark. The unmanned aerial vehicle was remotely controlled, with high precision, in close proximity to air space obstacles at very low levels without compromising work on the ground. Payload capacity and tolerance to wind and turbulence were limited. Aerial video, shot from different altitudes, and remote aerial avalanche beacon search were streamed wirelessly in real time to a monitor at a ground base. Electromagnetic interference disturbed signal reception in the ground monitor. A small remotely piloted aircraft can be used as an effective tool carrier, although limited by its payload capacity, wind speed and flight endurance. 
Remote sensing using already existing remotely piloted aircraft technology in pre-hospital environments is feasible and can be used to support situation assessment and information exchange at a major incident scene. Regulations are needed to ensure the safe use of unmanned aerial vehicles in major incidents. Ethical issues are abundant.
A 24-hour remote surveillance system for terrestrial wildlife studies
Sykes, P.W.; Ryman, W.E.; Kepler, C.B.; Hardy, J.W.
1995-01-01
The configuration, components, specifications, and costs of a state-of-the-art closed-circuit television system with wide application for wildlife research and management are described. The principal system components consist of a color CCTV camera with zoom lens, pan/tilt system, infrared illuminator, heavy-duty tripod, coaxial cable, coaxitron system, half-duplex equalizing video/control amplifier, time-lapse video cassette recorder, color video monitor, VHS video cassettes, portable generator, fuel tank, and power cable. This system was developed and used in a study of Mississippi Sandhill Crane (Grus canadensis pratensis) behaviors during incubation, hatching, and fledging. The main advantages of the system are minimal downtime and a complete, permanent record of every event, its time of occurrence, and its duration, which can be replayed as many times as necessary to retrieve the data. The system is particularly applicable for studies of behavior and predation, for counting individuals, or for recording difficult-to-observe activities. The system can be run continuously for several weeks by two people, reducing personnel costs. This paper is intended to provide biologists who have little knowledge of electronics with a system that might be useful to their specific needs. The disadvantages of this system are the initial costs (about $9,800 basic, 1990-1991 U.S. dollars) and the time required to play back video cassette tapes for data retrieval, although playback can be sped up when little or no activity of interest is taking place. In our study, the positive aspects of the system far outweighed the negative.
Remote autopsy services: A feasibility study on nine cases.
Vodovnik, Aleksandar; Aghdam, Mohammad Reza F; Espedal, Dan Gøran
2017-01-01
Introduction: We have conducted a feasibility study on remote autopsy services in order to increase the flexibility of the service, with benefits for teaching and interdepartmental collaboration. Methods: Three senior staff pathologists, one senior autopsy technician, and one junior resident participated in the study. Nine autopsies were performed by the autopsy technician or resident, supervised by the primary pathologist, through a secure, double-encrypted video link using Jabber Video (Cisco) with a high-speed broadband connection. The primary pathologist and autopsy room each connected to the secure virtual meeting room using 14″ laptops with built-in cameras (Hewlett-Packard). A portable high-definition web camera (Cisco) was used in the autopsy room. Primary and secondary pathologists independently interpreted and later compared gross findings for the purpose of quality assurance. The video was streamed live only during consultations and interpretation. A satisfaction survey on technical and professional aspects of the study was conducted. Results: Independent interpretations of gross findings between primary and secondary pathologists yielded full agreement. A definite cause of death in one complex autopsy was determined following discussions between pathologists and reviews of the clinical notes. Our satisfaction level with the technical and professional aspects of the study was 87% and 97%, respectively. Discussion: Remote autopsy services are found to be feasible in the hands of experienced staff, with increased flexibility and increased interest of autopsy technicians in the service as a result.
High altitude aircraft remote sensing during the 1988 Yellowstone National Park wildfires
NASA Technical Reports Server (NTRS)
Ambrosia, Vincent G.
1990-01-01
An overview is presented of the effects of the wildfires that occurred in the Yellowstone National Park during 1988 and the techniques employed to combat these fires with the use of remote sensing. The fire management team utilized King-Air and Merlin aircraft flying night missions with a thermal IR line-scanning system. NASA-Ames Research Center assisted with an ER-2 high altitude aircraft with the ability to down-link active data from the aircraft via a teledetection system. The ER-2 was equipped with a multispectral Thematic Mapper Simulator scanner and the resultant map data and video imagery was provided to the fire command personnel for field evaluation and fire suppression activities. This type of information proved very valuable to the fire control management personnel and to the continuing ecological research goals of NASA-Ames scientists analyzing the effects of burn type and severity on ecosystem recovery and development.
Large-Scale Cryogen Systems and Test Facilities
NASA Technical Reports Server (NTRS)
Johnson, R. G.; Sass, J. P.; Hatfield, W. H.
2007-01-01
NASA has completed initial construction and verification testing of the Integrated Systems Test Facility (ISTF) Cryogenic Testbed. The ISTF is located at Complex 20 at Cape Canaveral Air Force Station, Florida. The remote and secure location is ideally suited for the following functions: (1) development testing of advanced cryogenic component technologies, (2) development testing of concepts and processes for entire ground support systems designed for servicing large launch vehicles, and (3) commercial sector testing of cryogenic- and energy-related products and systems. The ISTF Cryogenic Testbed consists of modular fluid distribution piping and storage tanks for liquid oxygen/nitrogen (56,000 gal) and liquid hydrogen (66,000 gal). Storage tanks for liquid methane (41,000 gal) and Rocket Propellant 1 (37,000 gal) are also specified for the facility. A state-of-the-art blast proof test command and control center provides capability for remote operation, video surveillance, and data recording for all test areas.
Development of systems and techniques for landing an aircraft using onboard television
NASA Technical Reports Server (NTRS)
Gee, S. W.; Carr, P. C.; Winter, W. R.; Manke, J. A.
1978-01-01
A flight program was conducted to develop a landing technique with which a pilot could consistently and safely land a remotely piloted research vehicle (RPRV) without outside visual reference except through television. Otherwise, instrumentation was standard. Such factors as the selection of video parameters, the pilot's understanding of the television presentation, the pilot's ground cockpit environment, and the operational procedures for landing were considered. About 30 landings were necessary for a pilot to become sufficiently familiar and competent with the test aircraft to make powered approaches and landings with outside visual references only through television. When steep approaches and landings were made by remote control, the pilot's workload was extremely high. The test aircraft was used as a simulator for the F-15 RPRV, and as such was considered to be essential to the success of landing the F-15 RPRV.
[Multimedia (visual collaboration) brings true nature of human life].
Tomita, N
2000-03-01
Videoconferencing systems providing high-quality visual collaboration are bringing multimedia into society. Multimedia of broadcast-TV quality looks expensive because it requires a broadband network with 100-200 Mbps of bandwidth, equivalent to about 3,700 analog telephone lines. However, thanks to the existing digital line called N-ISDN (Narrowband Integrated Services Digital Network) and PictureTel's audio/video compression technologies, it becomes far less expensive. N-ISDN provides 128 Kbps of bandwidth, over twice that of an analog line, and PictureTel's technology compresses the audio/video signal to roughly 1/1,000 of its original size. This means that, with ISDN and PictureTel technology, multimedia is practical over even a single ISDN line. This will allow a doctor to remotely meet face-to-face with a medical specialist or with patients to conduct interviews and physical examinations, review records, and prescribe treatments. Bonding multiple ISDN lines further improves video quality, enabling remote surgery: a surgeon can operate on an internal organ by projecting motion video from an endoscope's CCD camera onto a large display monitor. PictureTel also provides advanced technologies for eliminating background noise generated by surgical knives or scalpels during surgery, allowing the sound of breathing or a heartbeat to be clearly transmitted to the remote site. Multimedia thus eliminates the barrier of distance, enabling people to stay at home, anywhere in the world, and still receive up-to-date medical treatment from experts. This will reduce medical costs and allow people to live in the suburbs, with less pollution, closer to nature. People will foster a more open and collaborative environment by participating in local activities. Such a community-oriented lifestyle will atone for the mass-consumption, materialistic economy of the past and bring true happiness and welfare into our lives.
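The bandwidth arithmetic in the passage can be checked directly: compressing a 100-200 Mbps raw stream by a factor of 1,000 yields 100-200 kbps, so the low end fits within a single 128 kbps N-ISDN line while the high end motivates the bonding of multiple lines mentioned above:

```python
def compressed_kbps(raw_mbps, ratio=1000):
    """Bit rate (kbit/s) after compressing a raw stream by the given ratio."""
    return raw_mbps * 1000 / ratio

ISDN_KBPS = 128  # one N-ISDN basic-rate line (2 x 64 kbit/s B channels)

for raw in (100, 200):
    out = compressed_kbps(raw)
    print(raw, "Mbit/s ->", out, "kbit/s; fits one ISDN line:",
          out <= ISDN_KBPS)
```

The 100 Mbps case compresses to 100 kbps (fits), while the 200 Mbps case compresses to 200 kbps and would need roughly two bonded basic-rate lines.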
Real-time Internet connections: implications for surgical decision making in laparoscopy.
Broderick, T J; Harnett, B M; Doarn, C R; Rodas, E B; Merrell, R C
2001-08-01
To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were "grabbed" from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. 
Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation.
NASA Technical Reports Server (NTRS)
Delombard, R.
1984-01-01
A photovoltaic power system is described that will be installed at a remote location in Indonesia to provide power for a satellite Earth station and for a classroom used for video and audio teleconferences. The Earth station may also provide telephone service to a nearby village. The project demonstrates the use of satellite communications for development assistance applications and the suitability of a hybrid photovoltaic/engine-generator power system for remote satellite Earth stations. The Indonesian rural satellite project is discussed and the photovoltaic power system is described in detail.
Peterson, Courtney M.; Apolzan, John W.; Wright, Courtney; Martin, Corby K.
2017-01-01
We conducted a pair of studies to test the validity, reliability, feasibility, and acceptability of using video chat technology as a novel method to quantify dietary and pill-taking (i.e., supplement and medication) adherence. In the first study, we investigated whether video chat technology can accurately quantify adherence to dietary and pill-taking interventions. Mock study participants ate food items and swallowed pills while performing randomized scripted "cheating" behaviors designed to mimic non-adherence. Monitoring was conducted in a crossover design, with two monitors watching in person and two watching remotely by Skype on a smartphone. For the second study, a 22-question online survey was sent to an email listserv with more than 20,000 unique email addresses of past and present study participants to assess the feasibility and acceptability of the technology. For the dietary adherence tests, monitors detected 86% of non-adherent events (sensitivity) in person versus 78% of events via video chat monitoring (p=0.12), with comparable inter-rater agreement (0.88 vs. 0.85; p=0.62). However, for pill-taking, non-adherence trended towards being more easily detected in person than by video chat (77% vs. 60%; p=0.08), with non-significantly higher inter-rater agreement (0.85 vs. 0.69; p=0.21). Survey results from the second study (N=1,076 respondents; at least a 5% response rate) indicated that 86.4% of study participants had video chatting hardware, 73.3% were comfortable using the technology, and 79.8% were willing to use it for clinical research. Given the capability of video chat technology to reduce participant burden and to outperform other adherence monitoring methods such as dietary self-report and pill counts, video chatting is a novel and highly promising platform to quantify dietary and pill-taking adherence. PMID:27753427
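The abstract does not say which inter-rater agreement statistic produced the 0.88 and 0.85 figures; a common choice for two raters making binary adherent/non-adherent calls is Cohen's kappa, sketched below on invented data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters making binary calls (illustrative;
    the abstract does not state which agreement statistic was used)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement expected from each rater's marginal frequencies
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical adherent (1) / non-adherent (0) calls for eight events
a = [1, 1, 0, 1, 0, 0, 1, 1]
b = [1, 1, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(a, b), 3))  # 0.714
```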
Feasibility of telementoring between Baltimore (USA) and Rome (Italy): the first five cases.
Micali, S; Virgili, G; Vannozzi, E; Grassi, N; Jarrett, T W; Bauer, J J; Vespasiani, G; Kavoussi, L R
2000-08-01
Telemedicine is the use of telecommunication technology to deliver healthcare. Telementoring has been developed to allow a surgeon at a remote site to offer guidance and assistance to a less-experienced surgeon. We report on our experience during laparoscopic urologic procedures with mentoring between Rome, Italy, and Baltimore, USA. Over a period of 3 months, two laparoscopic left spermatic vein ligations, one retroperitoneal renal biopsy, one laparoscopic nephrectomy, and one percutaneous access to the kidney were telementored. Transperitoneal laparoscopic cases were performed with the use of AESOP, a robot for remote manipulation of the endoscopic camera. A second robot, PAKY, was used to perform radiologically guided needle orientation and insertion for percutaneous renal access. In addition to controlling the robotic devices, the system provided real-time video display for either the laparoscope or an externally mounted camera located in the operating room, full-duplex audio, telestration over live video, and access to electrocautery for tissue cutting or hemostasis. All procedures were accomplished with an uneventful postoperative course. One technical failure occurred because the robotic device was not properly positioned on the operating table. The round-trip delay of image transmission was less than 1 second. International telementoring is a feasible technique that can enhance surgeon education and decrease the likelihood of complications attributable to inexperience with new operative techniques.
Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R
2018-05-01
Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p < 0.001) and identified a lower proportion of chimpanzees (GLMM post hoc test: est. = -2.17914, SE = 0.08490, Z = -25.666, p < 0.001) compared to in-person observations. However, the observer could view the 2 ha enclosure 15 times faster by camera compared to in person. In addition to these results, we provide recommendations to animal facilities considering the installation of a video camera system. Despite some limitations of remote monitoring, we posit that there are substantial benefits of using camera systems in sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.
Andradi-Brown, Dominic A; Macaya-Solis, Consuelo; Exton, Dan A; Gress, Erika; Wright, Georgina; Rogers, Alex D
2016-01-01
Fish surveys form the backbone of reef monitoring and management initiatives throughout the tropics, and understanding patterns in biases between techniques is crucial if outputs are to address key objectives optimally. Often biases are not consistent across natural environmental gradients such as depth, leading to uncertainty in interpretation of results. Recently there has been much interest in mesophotic reefs (reefs from 30-150 m depth) as refuge habitats from fishing pressure, leading to many comparisons of reef fish communities over depth gradients. Here we compare fish communities using stereo-video footage recorded via baited remote underwater video (BRUV) and diver-operated video (DOV) systems on shallow and mesophotic reefs in the Mesoamerican Barrier Reef, Caribbean. We show inconsistent responses across families, species and trophic groups between methods across the depth gradient. Fish species and family richness were higher using BRUV at both depth ranges, suggesting that BRUV is more appropriate for recording all components of the fish community. Fish length distributions were not different between methods on shallow reefs, yet BRUV recorded more small fish on mesophotic reefs. However, DOV consistently recorded greater relative fish community biomass of herbivores, suggesting that studies focusing on herbivores should consider using DOV. Our results highlight the importance of considering what component of reef fish community researchers and managers are most interested in surveying when deciding which survey technique to use across natural gradients such as depth.
NASA Astrophysics Data System (ADS)
Carlowicz, Michael
After four decades of perfecting techniques for communication with spacecraft on the way to other worlds, space scientists are now working on new ways to reach students in this one. In a partnership between NASA and the University of North Dakota (UND), scientists and engineers from both institutions will soon lead an experiment in Internet learning. Starting January 22, UND will offer a three-month computerized course in telerobotics. Using RealAudio and CU-SeeMe channels of the Internet to allow real-time transmission of video and audio, instructors will teach college- and graduate-level students the fundamentals of the remote operation and control of a robot.
The Integrated Radiation Mapper Assistant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlton, R.E.; Tripp, L.R.
1995-03-01
The Integrated Radiation Mapper Assistant (IRMA) system combines state-of-the-art radiation sensors and microprocessor-based analysis techniques to perform radiation surveys. Control of the survey function is exercised from a control station located outside the radiation area, thus reducing the time spent in radiation areas while performing surveys. The system consists of a directional radiation sensor, a laser range finder, two area radiation sensors, and a video camera mounted on a pan-and-tilt platform. This sensor package is deployable on a remotely operated vehicle. The outputs of the system are radiation intensity maps identifying both radiation source intensities and radiation levels throughout the room being surveyed. After completion of the survey, the data can be removed from the control station computer for further analysis or archiving.
Remote sensing and implications for variable-rate application using agricultural aircraft
NASA Astrophysics Data System (ADS)
Thomson, Steven J.; Smith, Lowrey A.; Ray, Jeffrey D.; Zimba, Paul V.
2004-01-01
Aircraft routinely used for agricultural spray application are finding utility for remote sensing. Data obtained from remote sensing can be used for prescription application of pesticides, fertilizers, cotton growth regulators, and water (the latter with the assistance of hyperspectral indices and thermal imaging). Digital video was used to detect weeds in early cotton, and preliminary data were obtained to determine whether nitrogen status could be detected in early soybeans. Weeds were differentiable from early cotton at very low altitude (65 m), with the aid of supervised classification algorithms in the ENVI image analysis software. The camera was flown at very low altitude for acceptable pixel resolution. Nitrogen status was not detectable by statistical analysis of digital numbers (DNs) obtained from images, but soybean cultivar differences were statistically discernible (F=26, p=0.01). Spectroradiometer data are being analyzed to identify narrow spectral bands that might aid in selecting camera filters for determination of plant nitrogen status. Multiple camera configurations are proposed to allow vegetative indices to be developed more readily. Both remotely sensed field images and ground data are to be used for decision-making in a proposed variable-rate application system for agricultural aircraft. For this system, prescriptions generated from digital imagery and data will be coupled with GPS-based swath guidance and programmable flow control.
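The abstract mentions supervised classification in ENVI without naming the algorithm; the sketch below implements a generic minimum-distance (nearest-class-mean) classifier, one of the standard supervised options, on a toy three-band image. All band values are invented for illustration.

```python
import numpy as np

def train_class_means(pixels, labels):
    """Mean spectrum per class from labeled training pixels.

    pixels: (N, bands) array; labels: (N,) integer class ids.
    """
    classes = np.unique(labels)
    return classes, np.stack([pixels[labels == c].mean(axis=0) for c in classes])

def classify_minimum_distance(image, classes, means):
    """Assign each pixel to the class with the nearest mean spectrum."""
    h, w, b = image.shape
    flat = image.reshape(-1, b).astype(float)
    # Euclidean distance from every pixel to every class mean
    d = np.linalg.norm(flat[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)].reshape(h, w)

# Toy 3-band image: left half "crop-like", right half "weed-like" spectra
rng = np.random.default_rng(0)
img = np.empty((4, 6, 3))
img[:, :3] = [50, 120, 40] + rng.normal(0, 2, (4, 3, 3))
img[:, 3:] = [60, 80, 90] + rng.normal(0, 2, (4, 3, 3))

train_px = np.array([[50, 120, 40], [60, 80, 90]], dtype=float)
train_lb = np.array([0, 1])  # 0 = crop, 1 = weed
classes, means = train_class_means(train_px, train_lb)
label_map = classify_minimum_distance(img, classes, means)
print(label_map)
```

With class means this well separated, every pixel lands in the correct half of the label map.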
STS-114 Flight Day 3 Highlights
NASA Technical Reports Server (NTRS)
2005-01-01
Video coverage of Day 3 includes highlights of STS-114 during the approach and docking of Discovery with the International Space Station (ISS). The Return to Flight continues with space shuttle crew members (Commander Eileen Collins, Pilot James Kelly, Mission Specialists Soichi Noguchi, Stephen Robinson, Andrew Thomas, Wendy Lawrence, and Charles Camarda) seen in onboard activities on the fore and aft portions of the flight deck during the orbiter's approach. Camarda sends a greeting to his family, and Collins maneuvers Discovery as the ISS appears steadily closer in sequential still video from the centerline camera of the Orbiter Docking System. The approach includes video of Discovery from the ISS during the orbiter's Rendezvous Pitch Maneuver, giving the ISS a clear view of the thermal protection systems underneath the orbiter. Discovery docks with the Destiny Laboratory of the ISS, and the shuttle crew greets the Expedition 11 crew (Commander Sergei Krikalev and NASA ISS Science Officer and Flight Engineer John Phillips) of the ISS onboard the station. Finally, the Space Station Remote Manipulator System hands the Orbiter Boom Sensor System to its counterpart, the Shuttle Remote Manipulator System.
Dyer, Dianne; Cusden, Jane; Turner, Chris; Boyd, Jeff; Hall, Rob; Lautner, David; Hamilton, Douglas R; Shepherd, Lance; Dunham, Michael; Bigras, Andre; Bigras, Guy; McBeth, Paul; Kirkpatrick, Andrew W
2008-12-01
Ultrasound (US) has an ever increasing scope in the evaluation of trauma, but relies greatly on operator experience. NASA has refined telesonography (TS) protocols for traumatic injury, especially in reference to mentoring inexperienced users. We hypothesized that such TS might benefit remote terrestrial caregivers. We thus explored using real-time US and video communication between a remote (Banff) and central (Calgary) site during acute trauma resuscitations. An existing Internet link, allowing bidirectional videoconferencing and unidirectional US transmission, was used between the Banff and Calgary ERs. Protocols to direct or observe an extended focused assessment with sonography for trauma (EFAST) were adapted from NASA algorithms. A call rota was established. Technical feasibility was ascertained through review of completed checklists. Involved personnel were interviewed with a semistructured interview. In addition to three normal volunteers, 20 acute clinical examinations were completed. Technical challenges requiring solution included initiating US; audio and video communications; image freezing; and US transmission delays. FAST exams were completed in all cases and EFASTs in 14. The critical anatomic features of a diagnostic examination were identified in 98% of all FAST exams and 100% of all EFASTs that were attempted. Enhancement of clinical care included confirmation of five cases of hemoperitoneum and two pneumothoraces (PTXs), as well as educational benefits. Remote personnel were appreciative of the remote direction, particularly when instructions were given sequentially in simple, nontechnical language. The remote real-time guidance or observation of an EFAST using TS appears feasible. Most technical problems were quickly overcome. Further evaluation of this approach and technology is warranted in more remote settings with less experienced personnel.
A Role for YouTube in Telerehabilitation
Manasco, M. Hunter; Barone, Nicholas; Brown, Amanda
2010-01-01
YouTube (http://youtube.com) is a free video sharing website that allows users to post and view videos. Although there are definite limitations in the applicability of this website to telerehabilitation, the YouTube technology offers potential uses that should not be overlooked. For example, some types of therapy, such as errorless learning therapy for certain language and cognitive deficits can be provided remotely via YouTube. In addition, the website’s social networking capabilities, via the asynchronous posting of comments and videos in response to posted videos, enables individuals to gain valuable emotional support by communicating with others with similar health and rehabilitation challenges. This article addresses the benefits and limitations of YouTube in the context of telerehabilitation and reports patient feedback on errorless learning therapy for aphasia delivered via videos posted on YouTube. PMID:25945173
Duckneglect: video-games based neglect rehabilitation.
Mainetti, R; Sedda, A; Ronchetti, M; Bottini, G; Borghese, N A
2013-01-01
Video-games are becoming a common tool to guide patients through rehabilitation because of their power to motivate and engage their users. Video-games may also be integrated into an infrastructure that allows patients, once discharged from the hospital, to continue intensive rehabilitation at home under remote monitoring by the hospital itself, as suggested by the recently funded Rewire project. The goal of this work is to describe a novel low-cost platform, based on video-games, targeted at neglect rehabilitation. The patient is guided to explore his neglected hemispace by a set of specifically designed games that ask him to reach targets with an increasing level of difficulty. Visual and auditory cues help the patient in the task and are progressively removed. A controlled randomization of scenarios, targets and distractors, a balanced reward system, and music played in the background all contribute to making rehabilitation more attractive, thus enabling intensive prolonged treatment. Results from our first patient, who underwent rehabilitation for half an hour a day, five days a week, for one month, showed both a consistently positive attitude towards the platform throughout the period and a significant improvement in performance. Importantly, this improvement was confirmed at a follow-up evaluation five months after the last rehabilitation session and generalized to everyday life activities. Such a system could well be integrated into a home-based rehabilitation system.
UAV field demonstration of social media enabled tactical data link
NASA Astrophysics Data System (ADS)
Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.
2015-05-01
This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis, and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
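The frame-differencing step described above, removing pixels common to the pre-illumination and illuminated frames so that only the laser spot remains, can be sketched as below. The pinhole triangulation in `range_from_disparity` is a hypothetical stand-in, since the abstract does not give the exact ranging analysis; the baseline, focal length, and reference-point values are invented.

```python
import numpy as np

def find_laser_spot(before, during, threshold=50):
    """Locate the laser spot by differencing frames taken before and
    during laser illumination: pixels common to both frames cancel,
    leaving only the spot."""
    diff = during.astype(int) - before.astype(int)
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()  # centroid of the spot

def range_from_disparity(spot_x, ref_x, baseline_m, focal_px):
    """Triangulate range from the horizontal disparity between the spot
    and a fixed reference point (assumed pinhole-camera model)."""
    disparity = abs(spot_x - ref_x)
    return baseline_m * focal_px / disparity

# Synthetic 8-bit frames: a laser spot appears at (x=40, y=25)
before = np.full((60, 80), 10, dtype=np.uint8)
during = before.copy()
during[25, 40] = 255

spot = find_laser_spot(before, during)
print(spot)                                           # (40.0, 25.0)
print(range_from_disparity(spot[0], 60.0, 0.1, 800))  # 0.1*800/20 = 4.0 m
```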
Eadie, Leila; Mulhern, John; Regan, Luke; Mort, Alasdair; Shannon, Helen; Macaden, Ashish; Wilson, Philip
2017-01-01
Introduction: Our aim is to expedite prehospital assessment of remote and rural patients using remotely-supported ultrasound and satellite/cellular communications. In this paradigm, paramedics are remotely-supported ultrasound operators, guided by hospital-based specialists, to record images before receiving diagnostic advice. Technology can support users in areas with little access to medical imaging and suboptimal communications coverage by connecting to multiple cellular networks and/or satellites to stream live ultrasound and audio-video. Methods: An ambulance-based demonstrator system captured standard trauma and novel transcranial ultrasound scans from 10 healthy volunteers at 16 locations across the Scottish Highlands. Volunteers underwent brief scanning training before receiving expert guidance via the communications link. Ultrasound images were streamed with an audio/video feed to reviewers for interpretation. Two sessions were transmitted via satellite and 21 used cellular networks. Reviewers rated image and communication quality, and their utility for diagnosis. Transmission latency and bandwidth were recorded, and effects of scanner and reviewer experience were assessed. Results: Appropriate views were provided in 94% of the simulated trauma scans. The mean upload rate was 835/150 kbps and mean latency was 114/2072 ms for cellular and satellite networks, respectively. Scanning experience had a significant impact on time to achieve a diagnostic image, and review of offline scans required significantly less time than live-streamed scans. Discussion: This prehospital ultrasound system could facilitate early diagnosis and streamlining of treatment pathways for remote emergency patients, being particularly applicable in rural areas worldwide with poor communications infrastructure and extensive transport times.
Shima, Yoichiro; Suwa, Akina; Gomi, Yuichiro; Nogawa, Hiroki; Nagata, Hiroshi; Tanaka, Hiroshi
2007-01-01
Real-time video pictures can be transmitted inexpensively via a broadband connection using the DVTS (digital video transport system). However, the degradation of video pictures transmitted by DVTS has not been sufficiently evaluated. We examined the application of DVTS to remote consultation by using images of laparoscopic and endoscopic surgeries. A subjective assessment by the double stimulus continuous quality scale (DSCQS) method of the transmitted video pictures was carried out by eight doctors. Three of the four video recordings were assessed as being transmitted with no degradation in quality. None of the doctors noticed any degradation in the images due to encryption by the VPN (virtual private network) system. We also used an automatic picture quality assessment system to make an objective assessment of the same images. The objective DSCQS values were similar to the subjective ones. We conclude that although the quality of video pictures transmitted by the DVTS was slightly reduced, they were useful for clinical purposes. Encryption with a VPN did not degrade image quality.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned with a "camera head" containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote-controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
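The readout figures quoted above imply modest raw data rates compared with the camera's 100 Mbyte/s serial link; a quick check (sample bits only, framing and protocol overhead ignored):

```python
BITS_PER_PIXEL = 12      # 12-bit digital gray scale

def data_rate_mbit(pixels_per_second):
    """Raw sample data rate in Mbit/s, overhead ignored."""
    return pixels_per_second * BITS_PER_PIXEL / 1e6

slow = data_rate_mbit(1e6)    # 12.0 Mbit/s at 1 x 10^6 px/s
fast = data_rate_mbit(5e6)    # 60.0 Mbit/s at 5 x 10^6 px/s
link_mbit = 100 * 8           # 100 Mbyte/s serial link = 800 Mbit/s
print(slow, fast, link_mbit)  # even fast readout fits well within the link
```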
Heumann, Frederick K.; Wilkinson, Jay C.; Wooding, David R.
1997-01-01
A remote appliance for supporting a tool for performing work at a worksite on a substantially circular bore of a workpiece and for providing video signals of the worksite to a remote monitor comprising: a baseplate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the baseplate and positioned to roll against the bore of the workpiece when the baseplate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the baseplate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the baseplate such that the working end of the tool is positioned on the inner face side of the baseplate; a camera for providing video signals of the worksite to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the baseplate, the camera holding means being adjustably attached to the outer face of the baseplate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris.
Leading the Development of Concepts of Operations for Next-Generation Remotely Piloted Aircraft
2016-01-01
overarching CONOPS. RPAs must provide full motion video and signals intelligence (SIGINT) capabilities to fulfill their intelligence, surveillance, and...reached full capacity, combatant commanders had an insatiable demand for this new breed of capability, and phrases like Pred porn and drone strike...dimensional steering line on the video feed of the pilot's head-up display (HUD) that would indicate turning cues and finite steering paths for optimal
A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.
Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C
2017-02-07
The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.
Spatial constraints of stereopsis in video displays
NASA Technical Reports Server (NTRS)
Schor, Clifton
1989-01-01
Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful for portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward; only one direction of rotation is perceived when stereo depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereo depth in video displays.
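The 5 to 10 arc sec figure can be put in concrete terms with the standard small-angle disparity geometry. The worked example below is illustrative only; the 0.57 m viewing distance and 6.5 cm interocular separation are assumptions, not values from the abstract.

```python
import math

def depth_interval_from_disparity(disparity_arcsec, viewing_distance_m,
                                  interocular_m=0.065):
    """Depth interval dz resolvable from a binocular disparity eta,
    via the small-angle relation  eta ~ a * dz / d**2  (eta in radians,
    a = interocular separation, d = viewing distance), solved for dz."""
    eta_rad = disparity_arcsec * math.pi / (180 * 3600)  # arcsec -> radians
    return eta_rad * viewing_distance_m ** 2 / interocular_m

# A 10 arc sec disparity at a 0.57 m display distance corresponds to a
# depth step of roughly a quarter of a millimetre.
dz = depth_interval_from_disparity(10, 0.57)
print(f"{dz * 1000:.2f} mm")  # -> 0.24 mm
```

The quadratic dependence on viewing distance is why stereopsis is so effective at near telepresence working distances and degrades for far scenes.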
Nautilus at Risk – Estimating Population Size and Demography of Nautilus pompilius
Dunstan, Andrew; Bradshaw, Corey J. A.; Marshall, Justin
2011-01-01
The low fecundity, late maturity, long gestation and long life span of Nautilus suggest that this species is vulnerable to over-exploitation. Demand from the ornamental shell trade has contributed to their rapid decline in localized populations. More data from wild populations are needed to design management plans which ensure Nautilus persistence. We used a variety of techniques including capture-mark-recapture, baited remote underwater video systems, ultrasonic telemetry and remotely operated vehicles to estimate population size, growth rates, distribution and demographic characteristics of an unexploited Nautilus pompilius population at Osprey Reef (Coral Sea, Australia). We estimated a small and dispersed population of between 844 and 4467 individuals (14.6–77.4 individuals km⁻²) dominated by males (83∶17 male∶female) and comprised of few juveniles (<10%). These results provide the first nautilid population and density estimates, which are essential elements for long-term management of populations via sustainable catch models. Results from baited remote underwater video systems provide confidence for their more widespread use to assess efficiently the size and density of exploited and unexploited Nautilus populations worldwide. PMID:21347360
Remote magnetic actuation using a clinical scale system
Stehning, Christian; Gleich, Bernhard
2018-01-01
Remote magnetic manipulation is a powerful technique for controlling devices inside the human body. It enables actuation and locomotion of tethered and untethered objects without the need for a local power supply. In clinical applications, it is used for active steering of catheters in medical interventions such as cardiac ablation for arrhythmia treatment and for steering of camera pills in the gastro-intestinal tract for diagnostic video acquisition. For these applications, specialized clinical-scale field applicators have been developed, which are rather limited in terms of field strength and flexibility of field application. For a general-purpose field applicator, flexible field generation is required at high field strengths as well as high field gradients to enable the generation of both torques and forces on magnetic devices. To date, this requirement has only been met by small-scale experimental systems. We have built a highly versatile clinical-scale field applicator that enables the generation of strong magnetic fields as well as strong field gradients over a large workspace. We demonstrate the capabilities of this coil-based system by remote steering of magnetic drills through gel and tissue samples with high torques on well-defined curved trajectories. We also give initial proof that, when equipped with high frequency transmit-receive coils, the machine is capable of real-time magnetic particle imaging while retaining a clinical-scale bore size. Our findings open the door for image-guided radiation-free remote magnetic control of devices at the clinical scale, which may be useful in minimally invasive diagnostic and therapeutic medical interventions. PMID:29494647
Remote Sensing Information Gateway, a tool that allows scientists, researchers and decision makers to access a variety of multi-terabyte, environmental datasets and to subset the data and obtain only needed variables, greatly improving the download time.
Winokur, T S; McClellan, S; Siegal, G P; Reddy, V; Listinsky, C M; Conner, D; Goldman, J; Grimes, G; Vaughn, G; McDonald, J M
1998-07-01
Routine diagnosis of pathology images transmitted over telecommunications lines remains an elusive goal. Part of the resistance stems from the difficulty of enabling image selection by the remote pathologist. To address this problem, a telepathology microscope system (TelePath, TeleMedicine Solutions, Birmingham, Ala) that has features associated with static and dynamic imaging systems was constructed. Features of the system include near-real-time image transmission, provision of a tiled overview image, free choice of any field at any desired optical magnification, and automated tracking of the pathologist's image selection. All commands and images are discrete, avoiding many inherent problems of full-motion video and continuous remote control. A set of 64 slides was reviewed by 3 pathologists in a simulated frozen section environment. Each pathologist provided diagnoses for all 64 slides, as well as qualitative information about the system. Thirty-one of 192 diagnoses disagreed with the reference diagnosis that had been reached before the trial began. Of the 31, 13 were deferrals and 12 were diagnoses of cases that had a deferral as the reference diagnosis. In 6 cases, the diagnosis disagreed with the reference diagnosis, yielding an overall accuracy of 96.9%. Confidence levels in the diagnoses were high. This trial suggests that this system provides high-quality anatomic pathology services, including intraoperative diagnoses, over telecommunications lines.
Designing and remotely testing mobile diabetes video games.
DeShazo, Jonathan; Harris, Lynne; Turner, Anne; Pratt, Wanda
2010-01-01
We have investigated game design and usability for three mobile phone video games designed to deliver diabetes education. The games were refined using focus groups. Six people with diabetes participated in the first focus group and five in the second. Following the focus groups, we incorporated the new findings into the game design, and then conducted a field test to evaluate the games in the context in which they would actually be used. Data were collected remotely about game usage by eight people with diabetes. The testers averaged 45 seconds per question and answered an average of 50 total nutrition questions each. They self-reported playing the game for 10-30 min, which coincided with the measured metrics of the game. Mobile games may represent a promising new way to engage the user and deliver relevant educational content.
Liteplo, Andrew S; Noble, Vicki E; Attwood, Ben H C
2011-11-01
As the use of point-of-care sonography spreads, so too does the need for remote expert over-reading via telesonography. We sought to assess the feasibility of using familiar, widespread, and cost-effective existing technology to allow remote over-reading of sonograms in real time, and to compare 4 different methods of transmission and communication for both feasibility of transmission and image quality. Sonographic video clips were transmitted using 2 different connections (WiFi and 3G) and via 2 different videoconferencing modalities (iChat [Apple Inc, Cupertino, CA] and Skype [Skype Software Sàrl, Luxembourg]), for a total of 4 different permutations. The clips were received at a remote location, recorded, and then scored by expert reviewers for image quality, resolution, and detail. Wireless transmission of sonographic clips was feasible in all cases when WiFi was used and when Skype was used over a 3G connection. Images transmitted via a WiFi connection were statistically superior to those transmitted via 3G in all parameters of quality (average P = .031), and those sent by iChat were superior to those sent by Skype but not statistically so (average P = .057). Wireless transmission of sonographic video clips using inexpensive hardware, free videoconferencing software, and domestic Internet networks is feasible with retention of image quality sufficient for interpretation. WiFi transmission results in greater image quality than transmission by a 3G network.
Remotely Piloted Aircraft Systems as a Rhinoceros Anti-Poaching Tool in Africa
Mulero-Pázmány, Margarita; Stolper, Roel; van Essen, L. D.; Negro, Juan J.; Sassen, Tyrell
2014-01-01
Over the last few years there has been a massive increase in rhinoceros poaching incidents, with more than two individuals killed per day in South Africa in the first months of 2013. Immediate actions are needed to preserve current populations and the agents involved in their protection are demanding new technologies to increase their efficiency in the field. We assessed the use of remotely piloted aircraft systems (RPAS) to monitor for poaching activities. We performed 20 flights with 3 types of cameras: visual photo, HD video and thermal video, to test the ability of the systems to detect (a) rhinoceros, (b) people acting as poachers and (c) to do fence surveillance. The study area consisted of several large game farms in KwaZulu-Natal province, South Africa. The targets were better detected at the lowest altitudes, but to operate the plane safely and in a discreet way, altitudes between 100 and 180 m were the most convenient. Open areas facilitated target detection, while forest habitats complicated it. Detectability using visual cameras was higher at morning and midday, but the thermal camera provided the best images in the morning and at night. Considering not only the technical capabilities of the systems but also the poachers' modus operandi and the current control methods, we propose RPAS usage as a tool for surveillance of sensitive areas, for supporting field anti-poaching operations, as a deterrent tool for poachers and as a complementary method for rhinoceros ecology research. Here, we demonstrate that low-cost RPAS can be useful for rhinoceros stakeholders for field control procedures. There are, however, important practical limitations that should be considered for their successful and realistic integration in the anti-poaching battle. PMID:24416177
Open-source telemedicine platform for wireless medical video communication.
Panayides, A; Eleftheriou, I; Pantziaris, M
2013-01-01
An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis in m-health applications such as emergency incidents, mass population screening, and medical education. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on an atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that wireless medical video communication of adequate diagnostic quality is achieved using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
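The PSNR scoring used for objective quality assessment is straightforward to reproduce. A minimal per-frame sketch (pixel values as flat lists; frames are assumed already temporally aligned, which is the job the VFD calibration step performs):

```python
import math

def psnr(original, received, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length
    sequences of pixel values, computed per frame and typically
    averaged over a clip for objective video quality scoring."""
    assert len(original) == len(received)
    mse = sum((a - b) ** 2 for a, b in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

# A lightly distorted frame scores high; heavier distortion scores lower.
frame = [100, 120, 130, 140] * 64
print(round(psnr(frame, [p + 1 for p in frame]), 1))   # small error  -> ~48 dB
print(round(psnr(frame, [p + 16 for p in frame]), 1))  # larger error -> ~24 dB
```

Without temporal alignment, a single dropped or delayed frame would pair mismatched content and drag the score down, which is exactly the artifact VFD removes before PSNR is computed.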
A novel interface for the telementoring of robotic surgery.
Shin, Daniel H; Dalag, Leonard; Azhar, Raed A; Santomauro, Michael; Satkunasivam, Raj; Metcalfe, Charles; Dunn, Matthew; Berger, Andre; Djaladat, Hooman; Nguyen, Mike; Desai, Mihir M; Aron, Monish; Gill, Inderbir S; Hung, Andrew J
2015-08-01
To prospectively evaluate the feasibility and safety of a novel, second-generation telementoring interface (Connect™; Intuitive Surgical Inc., Sunnyvale, CA, USA) for the da Vinci robot. Robotic surgery trainees were mentored during portions of robot-assisted prostatectomy and renal surgery cases. Cases were assigned as traditional in-room mentoring or remote mentoring using Connect. While viewing two-dimensional, real-time video of the surgical field, remote mentors delivered verbal and visual counsel, using two-way audio and telestration (drawing) capabilities. Perioperative and technical data were recorded. Trainee robotic performance was rated using a validated assessment tool by both mentors and trainees. The mentoring interface was rated using a multi-factorial Likert-based survey. The Mann-Whitney and t-tests were used to determine statistical differences. We enrolled 55 mentored surgical cases (29 in-room, 26 remote). Perioperative variables of operative time and blood loss were similar between in-room and remote mentored cases. Robotic skills assessment showed no significant difference (P > 0.05). Mentors preferred remote over in-room telestration (P = 0.05); otherwise no significant difference existed in evaluation of the interfaces. Remote cases using wired (vs wireless) connections had lower latency and better data transfer (P = 0.005). Three of 18 (17%) wireless sessions were disrupted; one was converted to wired, one continued after restarting Connect, and the third was aborted. A bipolar injury to the colon occurred during one (3%) in-room mentored case; no intraoperative injuries were reported during remote sessions. In a tightly controlled environment, the Connect interface allows trainee robotic surgeons to be telementored in a safe and effective manner while performing basic surgical techniques. Significant steps remain prior to widespread use of this technology.
Privacy enabling technology for video surveillance
NASA Astrophysics Data System (ADS)
Dufaux, Frédéric; Ouaret, Mourad; Abdeljaoued, Yousri; Navarro, Alfonso; Vergnenègre, Fabrice; Ebrahimi, Touradj
2006-05-01
In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding. We focus on the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and a negligible increase in computational complexity. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or a 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
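The core reversibility property of sign scrambling can be sketched in a few lines. This toy version operates on a plain list of coefficients with a key-seeded PRNG; the keying scheme and threshold are illustrative assumptions, whereas the actual system works on selected wavelet coefficients inside the Motion JPEG 2000 codec.

```python
import random

def scramble_signs(coeffs, key, threshold=0):
    """Flip the sign of selected transform coefficients using a
    key-seeded pseudo-random bit stream.  Applying the same function
    with the same key restores the originals, since a double sign
    flip is the identity.  One PRNG draw is consumed per coefficient
    so the bit stream stays aligned on descrambling."""
    rng = random.Random(key)
    out = []
    for c in coeffs:
        flip = rng.random() < 0.5               # key-dependent bit
        out.append(-c if flip and abs(c) > threshold else c)
    return out

block = [312, -45, 17, -8, 3, -2, 1, 0]   # e.g. one row of transform coefficients
scrambled = scramble_signs(block, key=1234)
restored = scramble_signs(scrambled, key=1234)
assert restored == block                   # same key recovers the block exactly
```

Because only signs change, coefficient magnitudes (and hence most of the entropy-coding statistics) are preserved, which is why the impact on coding performance stays small.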
Hazardous Environment Robotics
NASA Technical Reports Server (NTRS)
1996-01-01
Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.
A Robust Model-Based Coding Technique for Ultrasound Video
NASA Technical Reports Server (NTRS)
Docef, Alen; Smith, Mark J. T.
1995-01-01
This paper introduces a new approach to coding ultrasound video, the intended application being very low bit rate coding for transmission over low cost phone lines. The method exploits both the characteristic noise and the quasi-periodic nature of the signal. Data compression ratios between 250:1 and 1000:1 are shown to be possible, which is sufficient for transmission over ISDN and conventional phone lines. Preliminary results show this approach to be promising for remote ultrasound examinations.
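The quoted ratios can be sanity-checked with simple arithmetic. The frame geometry below (256 × 256, 8-bit grayscale, 30 fps) is an illustrative assumption, not stated in the abstract:

```python
# Illustrative raw-stream geometry -- assumed, not from the paper.
width, height, bits_per_pixel, fps = 256, 256, 8, 30
raw_bps = width * height * bits_per_pixel * fps   # 15,728,640 bit/s

for ratio in (250, 1000):
    compressed_kbps = raw_bps / ratio / 1000
    print(f"{ratio}:1 -> {compressed_kbps:.1f} kbit/s")
# 250:1  -> 62.9 kbit/s  (fits a 64 kbit/s ISDN B channel)
# 1000:1 -> 15.7 kbit/s  (fits a 28.8 kbit/s modem line)
```

Under these assumptions the two ends of the quoted range line up with ISDN and conventional phone-line capacities respectively, matching the paper's claim.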
NASA Technical Reports Server (NTRS)
Stute, Robert A. (Inventor); Galloway, F. Houston (Inventor); Medelius, Pedro J. (Inventor); Swindle, Robert W. (Inventor); Bierman, Tracy A. (Inventor)
1996-01-01
A remote monitor alarm system monitors discrete alarm and analog power supply voltage conditions at remotely located communications terminal equipment. A central monitoring unit (CMU) is connected via serial data links to each of a plurality of remote terminal units (RTUs) that monitor the alarm and power supply conditions of the remote terminal equipment. Each RTU can monitor and store condition information for both discrete alarm points and analog power supply voltage points in its associated communications terminal equipment. The stored alarm information is periodically transmitted to the CMU in response to sequential polling of the RTUs. The number of monitored alarm inputs and the permissible voltage ranges for the analog inputs can be remotely configured at the CMU and downloaded into programmable memory at each RTU. The CMU includes a video display, a hard disk memory, a line printer and an audio alarm for communicating and storing the alarm information received from each RTU.
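The polling scheme described above can be sketched in miniature: a central unit sequentially polls remote units, each of which reports discrete alarm states and checks analog readings against remotely downloaded limits. Every class name, field, and message shape here is illustrative, not taken from the patent.

```python
class RemoteTerminalUnit:
    def __init__(self, unit_id, voltage_limits):
        self.unit_id = unit_id
        self.voltage_limits = dict(voltage_limits)  # point -> (low, high)
        self.discrete_alarms = {}                   # point -> bool
        self.analog_readings = {}                   # point -> volts

    def configure(self, voltage_limits):
        """Accept permissible ranges downloaded from the central unit."""
        self.voltage_limits = dict(voltage_limits)

    def poll(self):
        """Return stored condition information in response to a poll."""
        lo_hi = self.voltage_limits
        faults = {p: v for p, v in self.analog_readings.items()
                  if not (lo_hi[p][0] <= v <= lo_hi[p][1])}
        return {"unit": self.unit_id,
                "alarms": {p for p, on in self.discrete_alarms.items() if on},
                "analog_faults": faults}

class CentralMonitoringUnit:
    def __init__(self, rtus):
        self.rtus = rtus

    def poll_cycle(self):
        """Sequentially poll every RTU; keep reports containing any fault."""
        return [r for r in (rtu.poll() for rtu in self.rtus)
                if r["alarms"] or r["analog_faults"]]

rtu = RemoteTerminalUnit("RTU-1", {"PS-A": (4.75, 5.25)})
rtu.analog_readings["PS-A"] = 5.6      # supply drifted out of range
rtu.discrete_alarms["DOOR"] = True     # discrete alarm point tripped
cmu = CentralMonitoringUnit([rtu])
print(cmu.poll_cycle())
```

The key design point the patent exercises is that limit tables live in RTU memory but are owned by the CMU, so thresholds can be retuned fleet-wide without visiting remote sites.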
Remote observing with the Keck Telescopes from the U.S. mainland
NASA Astrophysics Data System (ADS)
Kibrick, Robert I.; Allen, Steve L.; Conrad, Albert
2000-06-01
We describe the current status of efforts to establish a high-bandwidth network from the U.S. mainland to Mauna Kea and a facility in California to support Keck remote observing and engineering via the Internet. The California facility will be an extension of the existing Keck remote operations facility located in Waimea, Hawaii. It will be targeted towards short-duration observing runs which now comprise roughly half of all scheduled science runs on the Keck Telescope. Keck technical staff in Hawaii will support remote observers on the mainland via video conferencing and collaborative software tools. Advantages and disadvantages of remote operation from California versus Hawaii are explored, and costs of alternative communication paths examined. We describe a plan for a backup communications path to protect against failure of the primary network. Alternative software models for remote operation are explored, and recent operational results described.
Improving the Capture and Re-Use of Data with Wearable Computers
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Fating, Curtis C.; Green, Daniel; Powers, Edward I. (Technical Monitor)
2001-01-01
At the Goddard Space Flight Center, members of the Real-Time Software Engineering Branch are developing a wearable, wireless, voice-activated computer for use in a wide range of crosscutting space applications that would benefit from having instant Internet, network, and computer access with complete mobility and hands-free operation. These applications can be applied across many fields and disciplines including spacecraft fabrication, integration and testing (including environmental testing), and astronaut on-orbit control and monitoring of experiments with ground-based experimenters. To satisfy the needs of NASA customers, this wearable computer needs to be connected to a wireless network, to transmit and receive real-time video over the network, and to receive updated documents via the Internet or NASA servers. The voice-activated computer, with a unique vocabulary, will allow users to access documentation in a hands-free environment and interact in real time with remote users. We will discuss wearable computer development, hardware and software issues, wireless network limitations, video/audio solutions and difficulties in language development.
Gupta, Sameer; Boehme, Jacqueline; Manser, Kelly; Dewar, Jannine; Miller, Amie; Siddiqui, Gina; Schwaitzberg, Steven D
2016-10-01
Background Google Glass has been used in a variety of medical settings with promising results. We explored the use and potential value of an asynchronous, near-real-time protocol, which avoids the transmission issues associated with real-time applications, for recording, uploading, and viewing of high-definition (HD) visual media in the emergency department (ED) to facilitate remote surgical consults. Study Design First-responder physician assistants captured pertinent aspects of the physical examination and diagnostic imaging using Google Glass' HD video or high-resolution photographs. These visual media were then securely uploaded to the study website. The surgical consultation then proceeded over the phone in the usual fashion and a clinical decision was made. The surgeon then accessed the study website to review the uploaded video. This was followed by a questionnaire regarding how the additional data impacted the consultation. Results The management plan changed in 24% (11) of cases after surgeons viewed the video. Five of these plans involved decision making regarding operative intervention. Although surgeons were generally confident in their initial management plan, confidence scores increased further in 44% (20) of cases. In addition, we surveyed 276 ED patients on their opinions concerning the practice of health care providers wearing and using recording devices in the ED. The survey results revealed that the majority of patients are amenable to the addition of wearable technology with video functionality to their care. Conclusions This study demonstrates the potential value of a medically dedicated, hands-free, HD recording device with internet connectivity in facilitating remote surgical consultation.
Aerospace video imaging systems for rangeland management
NASA Technical Reports Server (NTRS)
Everitt, J. H.; Escobar, D. E.; Richardson, A. J.; Lulla, K.
1990-01-01
This paper presents an overview on the application of airborne video imagery (VI) for assessment of rangeland resources. Multispectral black-and-white video with visible/NIR sensitivity; color-IR, normal color, and black-and-white MIR; and thermal IR video have been used to detect or distinguish among many rangeland and other natural resource variables such as heavy grazing, drought-stressed grass, phytomass levels, burned areas, soil salinity, plant communities and species, and gopher and ant mounds. The digitization and computer processing of VI have also been demonstrated. VI does not have the detailed resolution of film, but these results have shown that it has considerable potential as an applied remote sensing tool for rangeland management. In the future, spaceborne VI may provide additional data for monitoring and management of rangelands.
NASA Technical Reports Server (NTRS)
Hartley, Craig S.
1990-01-01
To augment the capabilities of the Space Transportation System, NASA has funded studies and developed programs aimed at developing reusable, remotely piloted spacecraft and satellite servicing systems capable of delivering, retrieving, and servicing payloads at altitudes and inclinations beyond the reach of the present Shuttle Orbiters. Since the mid 1970's, researchers at the Martin Marietta Astronautics Group Space Operations Simulation (SOS) Laboratory have been engaged in investigations of remotely piloted and supervised autonomous spacecraft operations. These investigations were based on high fidelity, real-time simulations and have covered a wide range of human factors issues related to controllability. Among these are: (1) mission conditions, including thruster plume impingements and signal time delays; (2) vehicle performance variables, including control authority, control harmony, minimum impulse, and cross coupling of accelerations; (3) maneuvering task requirements such as target distance and dynamics; (4) control parameters including various control modes and rate/displacement deadbands; and (5) display parameters involving camera placement and function, visual aids, and presentation of operational feedback from the spacecraft. This presentation includes a brief description of the capabilities of the SOS Lab to simulate real-time free-flyer operations using live video, advanced technology ground and on-orbit workstations, and sophisticated computer models of on-orbit spacecraft behavior. Sample results from human factors studies in the five categories cited above are provided.
[STS-31 Onboard 16mm Photography Quick Release]. [Onboard Activities
NASA Technical Reports Server (NTRS)
1990-01-01
This video features scenes shot by the crew of onboard activities including Hubble Space Telescope deploy, remote manipulator system (RMS) checkout, flight deck and middeck experiments, and Earth and payload bay views.
Detection of inter-frame forgeries in digital videos.
K, Sitara; Mehtre, B M
2018-05-26
Videos are acceptable as evidence in a court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alteration of visual content by perpetrators, locally or remotely. Such malicious alterations of video content (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may get wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection, which is incorporated into video tampering detection. Frame shuffling detection, which has not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2346 are pristine and the rest are candidates for inter-frame forgery. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy in detecting frame insertion, frame deletion and frame duplication.
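As a contrast with the compressed-domain tamper traces the paper relies on, the simplest conceivable frame-duplication check is an exact content hash per frame. The toy sketch below (byte strings standing in for decoded frames) detects only bit-identical copies and misses re-encoded duplicates, which is precisely why spatio-temporal features of the kind the paper proposes are needed in practice.

```python
import hashlib

def find_duplicate_frames(frames):
    """Naive frame-duplication check: hash each frame's pixel buffer and
    report (original_index, copy_index) pairs where content repeats.
    Only exact duplicates are caught; re-compression defeats this."""
    seen = {}
    duplicates = []
    for i, frame in enumerate(frames):
        digest = hashlib.sha256(frame).hexdigest()
        if digest in seen:
            duplicates.append((seen[digest], i))
        else:
            seen[digest] = i
    return duplicates

frames = [b"frame-a", b"frame-b", b"frame-c", b"frame-b"]  # frame 1 copied to slot 3
print(find_duplicate_frames(frames))  # -> [(1, 3)]
```

Frame insertion, deletion, and shuffling break temporal continuity rather than repeat content, so they require comparing adjacent-frame statistics instead of hashing individual frames.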
AlliedSignal driver's viewer enhancement (DVE) for paramilitary and commercial applications
NASA Astrophysics Data System (ADS)
Emanuel, Michael; Caron, Hubert; Kovacevic, Branislav; Faina-Cherkaoui, Marcela; Wrobel, Leslie; Turcotte, Gilles
1999-07-01
The AlliedSignal Driver's Viewer Enhancement (DVE) system is a thermal imager using a 320 × 240 uncooled microbolometer array. This high-performance system was initially developed for military combat and tactical wheeled vehicles. It features a very small sensor head mounted remotely from the display, control and processing module. The sensor head has a modular design and is being adapted to various commercial applications such as truck and car driving aids, using specifically designed low-cost optics. Trade-offs in the system design, system features and test results are discussed in this paper. A short video shows footage of the DVE system while driving at night.
Real-Time Internet Connections: Implications for Surgical Decision Making in Laparoscopy
Broderick, Timothy J.; Harnett, Brett M.; Doarn, Charles R.; Rodas, Edgar B.; Merrell, Ronald C.
2001-01-01
Objective To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Summary Background Data Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Methods Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were “grabbed” from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. Results The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. 
Conclusions Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation. PMID:11505061
Vital physical signals measurements using a webcam
NASA Astrophysics Data System (ADS)
Ouyang, Jianfei; Yan, Yonggang; Yao, Lifeng
2013-10-01
Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. In this paper, we present a new video-based methodology for remote and fast measurement of vital physical signals such as cardiac pulse and breathing rate. A webcam captures color video of a human face or wrist, and a photoplethysmography (PPG) technique is applied to measure the vital signals. A novel sequential blind signal extraction method is applied to the color video under normal lighting conditions, based on correlation analysis between the green trace and the source signals. The approach also extracts the target signal accurately under varying illumination. To assess its speed, measurement times were recorded for a large number of cases. The experimental results show that measuring the vital physical signals takes less than 30 seconds using the presented technique. The study indicates that the proposed approach is feasible for PPG measurement and provides a way to study the relationship between signals from different ROIs in future research.
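A minimal sketch of this kind of remote PPG pipeline (not the authors' sequential blind source extraction): estimate pulse rate from a green-channel mean-intensity trace by picking the spectral peak in the plausible cardiac band. The function name, band limits, and synthetic trace are assumptions for illustration.

```python
import numpy as np

def heart_rate_from_green(trace, fps, band=(0.7, 4.0)):
    """Estimate pulse rate (bpm) from a green-channel mean-intensity trace."""
    x = np.asarray(trace, float)
    x = x - x.mean()                                  # remove DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])    # plausible cardiac band
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak

# Synthetic 30-s trace: 1.2 Hz pulse (72 bpm) plus slow drift and respiration
fps, secs = 30, 30
t = np.arange(fps * secs) / fps
trace = (0.5 * np.sin(2 * np.pi * 1.2 * t)
         + 0.2 * t / secs                             # illumination drift
         + 0.05 * np.sin(2 * np.pi * 0.2 * t))        # respiration-like term
bpm = heart_rate_from_green(trace, fps)
```

Band-limiting the peak search is what keeps drift and respiration from masquerading as the cardiac component.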
Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology
NASA Astrophysics Data System (ADS)
Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.
2014-02-01
Monitoring Yellow River icicle hazard change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River icicle hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive-care area in southern BaoTou, Inner Mongolia autonomous region; monitoring ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1832 video key-frame images covering an area of 7.8 km² took 34.786 seconds; stitching and correction took 122.34 seconds, with an accuracy better than 0.5 m. By comparing precisely processed stitched images from the video sequence, the method determines changes in the Yellow River ice and accurately locates the ice bar, improving on the traditional visual method by more than a factor of 100. The results provide accurate decision-support information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break was repeatedly monitored, and the ice break was located to five-meter accuracy through monitoring and evaluation analysis.
The Feasibility and Acceptability of Google Glass for Teletoxicology Consults.
Chai, Peter R; Babu, Kavita M; Boyer, Edward W
2015-09-01
Teletoxicology offers the potential for toxicologists to assist in providing medical care at remote locations via interactive augmented audiovisual technology. This study examined the feasibility of using Google Glass, a head-mounted device that incorporates a webcam, viewing prism, and wireless connectivity, for assessment of the poisoned patient by a medical toxicology consult service. Emergency medicine residents (resident toxicology consultants) rotating on the toxicology service wore Glass during bedside evaluation of poisoned patients; Glass transmitted real-time video of patients' physical examination findings to toxicology fellows and attendings (supervisory consultants), who reviewed these findings. We evaluated the usability of Glass (e.g., quality of connectivity and video feeds) by supervisory consultants, as well as attitudes towards its use. Resident toxicology consultants and supervisory consultants completed 18 consults through Glass. Toxicologists viewing the video stream found the quality of audio and visual transmission usable in 89% of cases. Toxicologists reported that their management of the patient changed after viewing the patient through Glass in 56% of cases. Based on findings obtained through Glass, toxicologists recommended specific antidotes in six cases. Head-mounted devices like Google Glass may be effective tools for real-time teletoxicology consultation.
Satellite and mobile wireless transmission of focused assessment with sonography in trauma.
Strode, Christofer A; Rubal, Bernard J; Gerhardt, Robert T; Christopher, Frank L; Bulgrin, James R; Kinkler, E Sterling; Bauch, Terry D; Boyd, Sheri Y N
2003-12-01
Focused assessment with sonography in trauma (FAST) can define life-threatening injuries in austere settings with remote real-time review by experienced physicians. This study evaluates vest-mounted microwave, satellite, and LifeLink communications technology for image clarity and diagnostic accuracy during remote transmission of FAST examinations. Using a SonoSite, FAST was obtained on three patients with pericardial and intraperitoneal effusions and two control subjects in a remotely located U.S. Army Combat Support Hospital. A miniature vest-mounted video transmitter attached to the SonoSite sent wireless ultrasound video 20 m to a receiving antenna. The signal was then transferred over VSAT satellite systems at 512 kilobits per second (kbps), over INMARSAT satellite systems at 64 kbps, and over LifeLink on a moving ambulance through a metropolitan wireless traffic-management network. Clarity and the presence or absence of effusions were recorded by 15 staff emergency physicians. Average sensitivity, specificity, and accuracy were 87% (95% confidence interval [CI]=79% to 95%), 85% (95% CI=81% to 89%), and 86% (95% CI=82% to 90%) for the Premier Wireless Vest; 98% (95% CI=97% to 99%), 83% (95% CI=75% to 91%), and 86% (95% CI=82% to 90%) for VSAT; 95% (95% CI=94% to 96%), 70% (95% CI=58% to 82%), and 75% (95% CI=70% to 80%) for INMARSAT; and 82% (95% CI=73% to 91%), 83% (95% CI=74% to 92%), and 82% (95% CI=78% to 86%) for LifeLink, with clarity scores of 3.0 (95% CI=2.7 to 3.3), 2.9 (95% CI=2.6 to 3.2), 1.3 (95% CI=1.2 to 1.4), and 2.1 (95% CI=1.8 to 2.4), respectively. Accuracy correlated with clarity. Roaming vest transmission of FAST provides interpretable, diagnostic imagery at the distances used in this study. VSAT provided the best clarity and diagnostic value, with the lighter, more portable INMARSAT serving a lesser role for remote clinical interpretation. LifeLink performed well, and further infrastructure improvements may increase clarity and accuracy.
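Sensitivity, specificity, and accuracy with 95% confidence intervals of the form reported above can be computed from reader-study counts with a standard normal-approximation proportion CI. The counts below are hypothetical, not the study's data.

```python
import math

def diagnostic_metrics(tp, fn, tn, fp, z=1.96):
    """Sensitivity, specificity, accuracy, each with a normal-approximation 95% CI."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)   # Wald half-width
        return p, (max(0.0, p - half), min(1.0, p + half))
    return {
        "sensitivity": prop_ci(tp, tp + fn),
        "specificity": prop_ci(tn, tn + fp),
        "accuracy":    prop_ci(tp + tn, tp + fn + tn + fp),
    }

# Hypothetical pooled reader counts across transmission modes
m = diagnostic_metrics(tp=39, fn=6, tn=51, fp=9)
```

For narrow CIs on proportions near 0 or 1, a Wilson interval would be the more careful choice; the Wald form above matches the simple percentage ± half-width style of the abstract.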
Heumann, F.K.; Wilkinson, J.C.; Wooding, D.R.
1997-12-16
A remote appliance for supporting a tool for performing work at a work site on a substantially circular bore of a work piece and for providing video signals of the work site to a remote monitor comprises: a base plate having an inner face and an outer face; a plurality of rollers, wherein each roller is rotatably and adjustably attached to the inner face of the base plate and positioned to roll against the bore of the work piece when the base plate is positioned against the mouth of the bore such that the appliance may be rotated about the bore in a plane substantially parallel to the base plate; a tool holding means for supporting the tool, the tool holding means being adjustably attached to the outer face of the base plate such that the working end of the tool is positioned on the inner face side of the base plate; a camera for providing video signals of the work site to the remote monitor; and a camera holding means for supporting the camera on the inner face side of the base plate, the camera holding means being adjustably attached to the outer face of the base plate. In a preferred embodiment, roller guards are provided to protect the rollers from debris and a bore guard is provided to protect the bore from wear by the rollers and damage from debris. 5 figs.
An Augmented Reality-Based Approach for Surgical Telementoring in Austere Environments.
Andersen, Dan; Popescu, Voicu; Cabrera, Maria Eugenia; Shanghavi, Aditya; Mullis, Brian; Marley, Sherri; Gomez, Gerardo; Wachs, Juan P
2017-03-01
Telementoring can improve treatment of combat trauma injuries by connecting remote experienced surgeons with local less-experienced surgeons in an austere environment. Current surgical telementoring systems force the local surgeon to regularly shift focus away from the operating field to receive expert guidance, which can lead to surgery delays or even errors. The System for Telementoring with Augmented Reality (STAR) integrates expert-created annotations directly into the local surgeon's field of view. The local surgeon views the operating field by looking at a tablet display suspended between the patient and the surgeon that captures video of the surgical field. The remote surgeon remotely adds graphical annotations to the video. The annotations are sent back and displayed to the local surgeon while being automatically anchored to the operating field elements they describe. A technical evaluation demonstrates that STAR robustly anchors annotations despite tablet repositioning and occlusions. In a user study, participants used either STAR or a conventional telementoring system to precisely mark locations on a surgical simulator under a remote surgeon's guidance. Participants who used STAR completed the task with fewer focus shifts and with greater accuracy. The STAR reduces the local surgeon's need to shift attention during surgery, allowing him or her to continuously work while looking "through" the tablet screen. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Whitcomb, L. L.; Bowen, A. D.; Yoerger, D.; German, C. R.; Kinsey, J. C.; Mayer, L. A.; Jakuba, M. V.; Gomez-Ibanez, D.; Taylor, C. L.; Machado, C.; Howland, J. C.; Kaiser, C. L.; Heintz, M.; Pontbriand, C.; Suman, S.; O'hara, L.
2013-12-01
The Woods Hole Oceanographic Institution and collaborators from the Johns Hopkins University and the University of New Hampshire are developing for the Polar Science Community a remotely-controlled underwater robotic vehicle capable of tele-operation under ice with remote real-time human supervision. The Nereid Under-Ice (Nereid-UI) vehicle will enable exploration and detailed examination of biological and physical environments at glacial ice-tongues and ice-shelf margins, delivering high-definition video in addition to survey data from on-board acoustic, chemical, and biological sensors. Preliminary propulsion system testing indicates the vehicle will be able to attain standoff distances of up to 20 km from an ice-edge boundary, as dictated by the current maximum tether length. The goal of the Nereid-UI system is to provide scientific access to under-ice and ice-margin environments that is presently impractical or infeasible. FIBER-OPTIC TETHER: The heart of the Nereid-UI system is its expendable fiber optic telemetry system. The telemetry system utilizes many of the same components pioneered for the full-ocean depth capable HROV Nereus vehicle, with the addition of continuous fiber status monitoring, and new float-pack and depressor designs that enable single-body deployment. POWER SYSTEM: Nereid-UI is powered by a pressure-tolerant lithium-ion battery system composed of 30 Ah prismatic pouch cells, arranged on a 90 volt bus and capable of delivering 15 kW. The cells are contained in modules of 8 cells, and groups of 9 modules are housed together in oil-filled plastic boxes. The power distribution system uses pressure-tolerant components extensively, each of which has been individually qualified to 10 kpsi and to operation between -20 C and 40 C. THRUSTERS: Nereid-UI will employ eight identical WHOI-designed thrusters, each with a frameless motor, oil-filled and individually compensated, and designed for low-speed (500 rpm max) direct drive. 
We expect an end-to-end propulsive efficiency of between 0.3 and 0.4 at a transit speed of 1 m/s based on testing conducted at WHOI. CAMERAS: Video imagery is one of the principal products of Nereid-UI. Two fiber-optic telemetry wavelengths deliver 1.5 Gb/s uncompressed HDSDI video to the support vessel in real time, supporting a Kongsberg OE14-522 hyperspherical pan and tilt HD camera and several utility cameras. PROJECT STATUS: The first shallow-water vehicle trials are scheduled for September 2013. The trials are designed to test core vehicle systems particularly the power system, main computer and control system, thrusters, video and telemetry system, and to refine camera, lighting and acoustic sensor placement for piloted and closed-loop control, especially as pertains to working near the underside of ice. Remaining vehicle design tasks include finalizing the single-body deployment concept and depressor, populating the scientific sensing suite, and the software development necessary to implement the planned autonomous return strategy. Final design and fabrication for these remaining components of the vehicle system will proceed through fall 2013, with trials under lake ice in early 2014, and potential polar trials beginning in 2014-15. SUPPORT: NSF OPP (ANT-1126311), WHOI, James Family Foundation, and George Frederick Jewett Foundation East.
NASA Technical Reports Server (NTRS)
Talley, Tom
2003-01-01
Johnson Space Center (JSC) is designing a small, remotely controlled vehicle that will carry two color and one black-and-white video cameras in space. The device will be launched from and retrieved by the Space Vehicle and used for remote viewing. Off-the-shelf cellular technology is being used as the basis for the communication system design. Existing plans include using multiple antennas to make simultaneous estimates of the azimuth of the MiniAERCam from several sites on the Space Station and using triangulation to find the location of the device. Adding range-detection capability to each of the nodes on the Space Vehicle would allow an estimate of the location of the MiniAERCam to be made at each Communication And Telemetry Box (CATBox) independently of all the other communication nodes. This project will investigate the techniques used by the Global Positioning System (GPS) to achieve accurate positioning information and adapt those strategies that are appropriate to the design of the CATBox range determination system.
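A GPS-style position fix from ranges to known nodes can be sketched as linearized least-squares multilateration: subtracting one range equation from the others eliminates the quadratic term in the unknown position. The node layout and target below are hypothetical, not actual CATBox coordinates.

```python
import numpy as np

def locate_from_ranges(nodes, ranges):
    """Least-squares position from node coordinates and measured ranges.

    Linearizes |x - p_i|^2 = r_i^2 by subtracting the first equation,
    as in standard GPS-style multilateration.
    """
    P = np.asarray(nodes, float)
    r = np.asarray(ranges, float)
    A = 2.0 * (P[1:] - P[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(P[1:] ** 2, axis=1) - np.sum(P[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical node positions (metres) and exact ranges to a target at (3, 4, 5)
nodes = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
target = np.array([3.0, 4.0, 5.0])
ranges = [np.linalg.norm(target - np.array(n)) for n in nodes]
est = locate_from_ranges(nodes, ranges)
```

With four or more well-spread nodes the least-squares system is overdetermined, which is what lets each node's range measurement contribute to an independent position estimate.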
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and a store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that help with signal and image analysis through various numerical measures and plotting functions. Signals from a six-degree-of-freedom magnetic motion tracker (MMT) provide a basis for video-game sprite control. The MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand, together with a number of performance metrics, provides insight into the quality, efficiency, and skill of a patient's movements.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
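The intra/inter mode decision described above reduces to minimizing a Lagrangian rate-distortion cost J = D + λR per descriptor. A minimal sketch with hypothetical per-descriptor numbers (the actual distortion and rate models in the paper are more elaborate):

```python
def choose_mode(d_intra, r_intra, d_inter, r_inter, lam):
    """Rate-distortion mode decision: pick the mode minimizing J = D + lambda * R."""
    j_intra = d_intra + lam * r_intra
    j_inter = d_inter + lam * r_inter
    return ("intra", j_intra) if j_intra <= j_inter else ("inter", j_inter)

# Hypothetical costs: inter coding spends far fewer bits when the descriptor
# matches one in the previous frame, at slightly higher distortion.
mode, cost = choose_mode(d_intra=2.0, r_intra=120, d_inter=2.5, r_inter=30, lam=0.05)
```

The Lagrange multiplier λ sets the operating point on the rate-distortion curve: a larger λ penalizes bits more heavily and pushes the decision toward the cheaper inter mode.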
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Password-free network security through joint use of audio and video
NASA Astrophysics Data System (ADS)
Civanlar, Mehmet R.; Chen, Tsuhan
1997-01-01
Remote authentication is vital for many network-based applications. As the number of such applications increases, the user friendliness of the authentication process, particularly as it relates to password management, becomes as important as its reliability. The multimedia capabilities of modern terminal equipment can provide the basis for a dependable and easy-to-use authentication system that does not require the user to memorize passwords. This paper outlines our implementation of an authentication system based on the joint use of the speech and facial video of a user. Our implementation shows that the voice and the video of the associated lip movements, when used together, can be very effective for password-free authentication.
Video-based noncooperative iris image segmentation.
Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig
2011-02-01
In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
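The "direct least squares fitting of ellipses" the paper names is usually the constrained Fitzgibbon solver; the sketch below illustrates the idea with a simpler unconstrained algebraic conic fit, recovering the pupil centre from boundary points. The synthetic boundary and function names are assumptions.

```python
import numpy as np

def fit_conic(xs, ys):
    """Algebraic least-squares conic fit: a x^2 + b xy + c y^2 + d x + e y = 1."""
    D = np.column_stack([xs ** 2, xs * ys, ys ** 2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(xs), rcond=None)
    return coeffs  # (a, b, c, d, e)

def conic_center(coeffs):
    """Centre of the conic: where both partial derivatives vanish."""
    a, b, c, d, e = coeffs
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Synthetic pupil boundary: ellipse centred at (50, 40), semi-axes 12 and 8
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
xs = 50 + 12 * np.cos(t)
ys = 40 + 8 * np.sin(t)
cx, cy = conic_center(fit_conic(xs, ys))
```

The constrained Fitzgibbon formulation adds 4ac - b² = 1 as a generalized-eigenvalue constraint, which guarantees the fitted conic is an ellipse even for noisy, partially occluded boundaries, the noncooperative case the paper targets.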
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, C.S.; af Ekenstam, G.; Sallstrom, M.
1995-07-01
The Swedish Nuclear Power Inspectorate (SKI) and the US Department of Energy (DOE) sponsored work on a Remote Monitoring System (RMS) that was installed in August 1994 at the Barseback Works north of Malmo, Sweden. The RMS was designed to test the front-end detection concept that would be used for unattended remote monitoring activities. Front-end detection reduces the number of video images recorded and provides additional sensor verification of facility operations. The function of any safeguards Containment and Surveillance (C/S) system is to collect information, primarily images, that verifies the operations at a nuclear facility. Barseback is ideal for testing the concept of front-end detection, since the principal activity of safeguards interest is the movement of spent fuel, which occurs once a year. The RMS at Barseback uses a network of nodes to collect data from microwave motion detectors placed to detect the entrance and exit of spent fuel casks through a hatch. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Stockholm, Sweden and Albuquerque, NM, USA. These remote monitoring stations, operated by SKI and SNL respectively, can retrieve data and images from the RMS computer at the Barseback facility. The data and images are encrypted before transmission. This paper presents details of the RMS and test results of this approach to front-end detection of safeguards activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Anthony; Ravi, Ananth
2014-08-15
High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of positional and dwell-time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionality such as determining the source dwell time and transit time. In our system, a video is taken of the brachytherapy source as it is sent out through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and an adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.
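The per-frame position measurement can be sketched as bright-pixel centroid extraction, with consecutive near-stationary frames grouped into dwells. The image analysis algorithm in the paper is proprietary; the frame size, threshold, and synthetic video below are illustrative assumptions only.

```python
import numpy as np

def source_position(frame, thresh=0.5):
    """Intensity-weighted centroid (x, y) of bright pixels in one frame."""
    mask = frame > thresh
    ys, xs = np.nonzero(mask)
    w = frame[mask]
    return np.array([np.sum(xs * w) / np.sum(w), np.sum(ys * w) / np.sum(w)])

def dwell_times(positions, frame_dt, tol=1.0):
    """Group consecutive frames whose centroid moves < tol px into dwells."""
    dwells, start = [], 0
    for i in range(1, len(positions)):
        if np.linalg.norm(positions[i] - positions[i - 1]) >= tol:
            dwells.append((tuple(np.round(positions[start], 1)),
                           (i - start) * frame_dt))
            start = i
    dwells.append((tuple(np.round(positions[start], 1)),
                   (len(positions) - start) * frame_dt))
    return dwells  # list of (position, dwell time in s)

# Synthetic 30 fps video: source dwells at x=10 for 6 frames, then x=40 for 9
def frame_with_source(x):
    f = np.zeros((20, 60))
    f[10, x] = 1.0
    return f

frames = [frame_with_source(10)] * 6 + [frame_with_source(40)] * 9
pos = [source_position(f) for f in frames]
dwells = dwell_times(pos, frame_dt=1 / 30)
```

Transit velocity falls out of the same data: the centroid displacement between the last frame of one dwell and the first frame of the next, divided by the frame interval.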
NASA Astrophysics Data System (ADS)
Carr, Bob; Knowles, John; Warren, Jeremy
2008-10-01
We describe the continuing development of a laser-based, light-scattering detector system capable of detecting and analysing liquid-borne nanoparticles. Using a finely focussed and specially configured laser beam to illuminate a suspension of nanoparticles in a small (250 µl) sample, and videoing the Brownian motion of each and every particle in the detection zone, should allow individual but simultaneous detection and measurement of particle size, scattered light intensity, electrophoretic mobility and, where applicable, shape asymmetry. This real-time, multi-parameter analysis capability offers the prospect of reagentlessly differentiating between different particle types within a complex sample of potentially high and variable background. Employing relatively low-powered (50-100 mW) laser diode modules and low-resolution CCD arrays, each component could be run off battery power, allowing distributed/remote or personal deployment. Voltages needed for electrophoresis measurements would be similarly low (e.g. 20 V, low current), and 30-second videos (exported at mobile/cell phone download speeds) could be analysed remotely. The potential of such low-cost technology as a field-deployable grid of remote, battery-powered, reagentless, multi-parameter sensors for use as trigger devices is discussed.
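Sizing particles from videoed Brownian motion rests on the Stokes-Einstein relation: the diffusion coefficient follows from the mean squared displacement (MSD) per frame of a 2-D track, D = MSD / (4Δt), and the hydrodynamic diameter from d = k_BT / (3πηD). A round-trip sketch, assuming water at 20 °C; the function name and numbers are illustrative.

```python
import math

def hydrodynamic_diameter(msd_per_step_m2, dt_s, temp_k=293.15, viscosity=1.0e-3):
    """Stokes-Einstein sizing from 2-D mean squared displacement per video frame.

    D = MSD / (4 dt);  d = k_B T / (3 pi eta D)
    """
    k_b = 1.380649e-23                       # Boltzmann constant, J/K
    diff = msd_per_step_m2 / (4.0 * dt_s)    # diffusion coefficient, m^2/s
    return k_b * temp_k / (3.0 * math.pi * viscosity * diff)

# A 100 nm sphere in water at 20 C: compute its expected per-frame MSD at
# 30 fps, then recover the diameter from that MSD (a consistency round trip)
d_true = 100e-9
diff_true = 1.380649e-23 * 293.15 / (3 * math.pi * 1.0e-3 * d_true)
msd = 4 * diff_true * (1 / 30)
d_est = hydrodynamic_diameter(msd, 1 / 30)
```

In practice the MSD per step is averaged over many steps of each track, which is why per-particle sizing needs tracks spanning tens of frames.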
Robustness of remote stress detection from visible spectrum recordings
NASA Astrophysics Data System (ADS)
Kaur, Balvinder; Moses, Sophia; Luthra, Megha; Ikonomidou, Vasiliki N.
2016-05-01
In our recent work, we have shown that it is possible to extract high-fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal, which we call the blood wave (BW) signal, is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and of changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. These results will help the remote stress detection community develop requirements for visible-spectrum stress detection systems.
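The HRV metrics used as stress indicators are typically time-domain statistics of the beat-to-beat (RR) intervals. A sketch computing two standard ones, SDNN and RMSSD, from peak times; the beat times below are hypothetical.

```python
import numpy as np

def hrv_metrics(peak_times_s):
    """SDNN and RMSSD (both in ms) from cardiac peak times in seconds."""
    rr = np.diff(np.asarray(peak_times_s, float)) * 1000.0  # RR intervals, ms
    sdnn = float(np.std(rr, ddof=1))                        # overall variability
    rmssd = float(np.sqrt(np.mean(np.diff(rr) ** 2)))       # beat-to-beat variability
    return {"sdnn_ms": sdnn, "rmssd_ms": rmssd}

# Hypothetical beat times: ~1 s intervals with small variability
peaks = [0.00, 1.00, 1.98, 3.00, 4.03, 5.01]
m = hrv_metrics(peaks)
```

This dependence on precise peak timing is why timing jitter in the remotely acquired BW signal propagates directly into the HRV-driven features, the sensitivity the paper studies.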
Multispectral, Fluorescent and Photoplethysmographic Imaging for Remote Skin Assessment
Spigulis, Janis
2017-01-01
Optical tissue imaging has several advantages over the routine clinical imaging methods, including non-invasiveness (it does not change the structure of tissues), remote operation (it avoids infections) and the ability to quantify the tissue condition by means of specific image parameters. Dermatologists and other skin experts need compact (preferably pocket-size), self-sustaining and easy-to-use imaging devices. The operational principles and designs of ten portable in-vivo skin imaging prototypes developed at the Biophotonics Laboratory of Institute of Atomic Physics and Spectroscopy, University of Latvia during the recent five years are presented in this paper. Four groups of imaging devices are considered. Multi-spectral imagers offer possibilities for distant mapping of specific skin parameters, thus facilitating better diagnostics of skin malformations. Autofluorescence intensity and photobleaching rate imagers show a promising potential for skin tumor identification and margin delineation. Photoplethysmography video-imagers ensure remote detection of cutaneous blood pulsations and can provide real-time information on cardiovascular parameters and anesthesia efficiency. Multimodal skin imagers perform several of the abovementioned functions by taking a number of spectral and video images with the same image sensor. Design details of the developed prototypes and results of clinical tests illustrating their functionality are presented and discussed. PMID:28534815
Telepathology in cytopathology: challenges and opportunities.
Collins, Brian T
2013-01-01
Telepathology in cytopathology is becoming more commonly utilized, and newer technologic infrastructures afford the laboratory a variety of options. The options and design of a telepathology system are driven by clinical needs, primarily the provision of rapid on-site evaluation service for fine needle aspiration. The clinical requirements and needs of a system are described. Available tools to design and implement a telepathology system are covered, including methods of image capture, network connectivity, and remote viewing options. The primary telepathology method currently used, and described here, delivers a live video image over a network connection to a remote site, where it is viewed passively in a web browser. By combining the live video with a voice connection to the on-site location, the remote viewer can collect clinical information and direct the view of the slides. Telepathology systems for use in cytopathology can be designed and implemented with commercially available infrastructure. It is necessary for the laboratory to validate the designed system and adhere to applicable regulatory requirements. Telepathology for cytopathology can be reliably utilized by adapting existing technology, and newer advances hold great promise for further applications in the cytopathology laboratory. Copyright © 2013 S. Karger AG, Basel.
Crew Field Notes: A New Tool for Planetary Surface Exploration
NASA Technical Reports Server (NTRS)
Horz, Friedrich; Evans, Cynthia; Eppler, Dean; Gernhardt, Michael; Bluethmann, William; Graf, Jodi; Bleisath, Scott
2011-01-01
The Desert Research and Technology Studies (DRATS) field tests of 2010 focused on the simultaneous operation of two rovers, a historical first. The complexity and data volume of two rovers operating simultaneously presented significant operational challenges for the on-site Mission Control Center, including the real-time science support function. The latter was split into two "tactical" back rooms, one for each rover, that supported the real-time traverse activities; in addition, a "strategic" science team convened overnight to synthesize the day's findings and to conduct the strategic forward planning of the next day or days, as detailed in [1, 2]. Current DRATS simulations and operations differ dramatically from those of Apollo, including the most evolved Apollo 15-17 missions, due to the advent of digital technologies. Modern digital still and video cameras, combined with the capability for real-time transmission of large volumes of data, including multiple video streams, offer the prospect for the ground-based science support room(s) in Mission Control to witness all crew activities in unprecedented detail and in real time. It was not uncommon during DRATS 2010 that each tactical science back room simultaneously received some 4-6 video streams from cameras mounted on the rover or the crews' backpacks. Some of the rover cameras are controllable PTZ (pan, tilt, zoom) devices that can be operated by the crews (during extensive drives) or remotely by the back room (during EVAs). Typically, a dedicated "expert" and professional geologist in the tactical back room(s) controls, monitors and analyzes a single video stream and provides the findings to the team, commonly supported by screen-saved images. It seems obvious that the real-time comprehension and synthesis of the verbal descriptions, extensive imagery, and other information (e.g., navigation data, timelines, etc.) flowing into the science support room(s) constitute a fundamental challenge to future mission operations: how can one analyze, comprehend and synthesize, in real time, the enormous data volume coming to the ground? Real-time understanding of all data is needed for constructive interaction with the surface crews, and it becomes critical for the strategic forward planning process.
Remote operation of the Black Knight unmanned ground combat vehicle
NASA Astrophysics Data System (ADS)
Valois, Jean-Sebastien; Herman, Herman; Bares, John; Rice, David P.
2008-04-01
The Black Knight is a 12-ton, C-130 deployable Unmanned Ground Combat Vehicle (UGCV). It was developed to demonstrate how unmanned vehicles can be integrated into a mechanized military force to increase combat capability while protecting Soldiers in a full spectrum of battlefield scenarios. The Black Knight is used in military operational tests that allow Soldiers to develop the necessary techniques, tactics, and procedures to operate a large unmanned vehicle within a mechanized military force. It can be safely controlled by Soldiers from inside a manned fighting vehicle, such as the Bradley Fighting Vehicle. Black Knight control modes include path tracking, guarded teleoperation, and fully autonomous movement. Its state-of-the-art Autonomous Navigation Module (ANM) includes terrain-mapping sensors for route planning, terrain classification, and obstacle avoidance. In guarded teleoperation mode, the ANM data, together with automotive dials and gauges, are used to generate video overlays that assist the operator during both day and night driving. Remote operation of various sensors also allows Soldiers to perform effective target location and tracking. This document covers Black Knight's system architecture and includes implementation overviews of the various operation modes. We conclude with lessons learned and development goals for the Black Knight UGCV.
Visible-Light-Driven BiOI-Based Janus Micromotor in Pure Water.
Dong, Renfeng; Hu, Yan; Wu, Yefei; Gao, Wei; Ren, Biye; Wang, Qinglong; Cai, Yuepeng
2017-02-08
Light-driven synthetic micro-/nanomotors have attracted considerable attention due to their potential applications and unique capabilities such as remote motion control and adjustable velocity. Utilizing harmless and renewable visible light to supply energy for micro-/nanomotors in water represents a great challenge. In view of the outstanding photocatalytic performance of bismuth oxyiodide (BiOI), visible-light-driven BiOI-based Janus micromotors have been developed, which can be activated by a broad spectrum of light, including blue and green light. Such BiOI-based Janus micromotors can be propelled by photocatalytic reactions in pure water under environmentally friendly visible light without the addition of any other chemical fuels. The remote control of photocatalytic propulsion by modulating the power of visible light is characterized by velocity and mean-square displacement analysis of optical video recordings. In addition, the self-electrophoresis mechanism has been confirmed for such visible-light-driven BiOI-based Janus micromotors by demonstrating the effects of various coated layers (e.g., Al2O3, Pt, and Au) on the velocity of motors. The successful demonstration of visible-light-driven Janus micromotors holds great promise for future biomedical and environmental applications.
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
Fuzzy control system for a remote focusing microscope
NASA Astrophysics Data System (ADS)
Weiss, Jonathan J.; Tran, Luc P.
1992-01-01
Space Station Crew Health Care System procedures require the use of an on-board microscope whose slide images will be transmitted for analysis by ground-based microbiologists. Focusing of microscope slides is low on the list of crew priorities, so NASA is investigating the option of telerobotic focusing controlled by the microbiologist on the ground, using continuous video feedback. However, even at Space Station distances, the transmission time lag may disrupt the focusing process, severely limiting the number of slides that can be analyzed within a given bandwidth allocation. Substantial time could be saved if on-board automation could pre-focus each slide before transmission. The authors demonstrate the feasibility of on-board automatic focusing using a fuzzy logic rule-based system to bring the slide image into focus. The original prototype system was produced in under two months and at low cost. Slide images are captured by a video camera, then digitized by gray-scale value. A software function calculates an index of 'sharpness' based on gray-scale contrasts. The fuzzy logic rule-based system uses feedback to set the microscope's focusing control in an attempt to maximize sharpness. The system as currently implemented performs satisfactorily in focusing a variety of slide types at magnification levels ranging from 10x to 1000x. Although feasibility has been demonstrated, the system's performance and usability could be improved substantially in four ways: by upgrading the quality and resolution of the video imaging system (including the use of full color); by empirically defining and calibrating the index of image sharpness; by letting the overall focusing strategy vary depending on user-specified parameters; and by fine-tuning the fuzzy rules, set definitions, and procedures used.
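The focusing loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the contrast-based sharpness index and the plain search over sampled focus positions stand in for the paper's fuzzy rule base, and `capture` is a hypothetical function returning a gray-scale frame at a given focus setting.

```python
def sharpness(img):
    """Contrast-based sharpness index: sum of absolute differences
    between horizontally and vertically adjacent gray-scale pixels."""
    total = 0.0
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            total += abs(img[r][c] - img[r][c + 1])
            total += abs(img[r][c] - img[r + 1][c])
    return total

def autofocus(capture, lo, hi, steps=20):
    """Sample the focus range and keep the position with the sharpest image.

    `capture(pos)` is assumed to return a 2D gray-scale frame (list of
    lists) taken at focus position `pos`.
    """
    best_pos, best_score = lo, float("-inf")
    for i in range(steps + 1):
        pos = lo + (hi - lo) * i / steps
        score = sharpness(capture(pos))
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```

A fuzzy controller would replace the exhaustive sweep with rules mapping "sharpness rising/falling" to focus adjustments, which matters when each captured frame is expensive.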
Williams, Kristine; Blyler, Diane; Vidoni, Eric D; Shaw, Clarissa; Wurth, JoEllen; Seabold, Denise; Perkhounkova, Yelena; Van Sciver, Angela
2018-06-01
The number of persons with dementia (PWD) in the United States is expected to reach 16 million by 2050. Due to the behavioral and psychological symptoms of dementia, caregivers face challenging in-home care situations that lead to a range of negative health outcomes, such as anxiety and depression for the caregivers and nursing home placement for PWD. Supporting Family Caregivers with Technology for Dementia Home Care (FamTechCare) is a multisite randomized controlled trial evaluating the effects of a telehealth intervention on caregiver well-being and PWD behavioral symptoms. The FamTechCare intervention provides individualized dementia-care strategies to in-home caregivers based on video recordings that the caregiver creates of challenging care situations. A team of dementia care experts reviews the videos submitted by caregivers and provides weekly interventions to improve care for the experimental group. Caregivers in the control group receive feedback for improving care based on a weekly phone call with the interventionist and receive feedback on their videos at the end of the 3-month study. Using linear mixed modeling, we will compare experimental and control group outcomes (PWD behavioral symptoms and caregiver burden) after 1 and 3 months. An exploratory descriptive design will identify a typology of interventions for telehealth support of in-home dementia caregivers. Finally, the cost of FamTechCare will be determined and examined in relation to hypothesized effects on PWD behavioral symptoms, placement rates, and caregiver burden. This research will provide the foundation for future research on telehealth interventions with this population, especially for families in rural or remote locations. © 2018 Wiley Periodicals, Inc.
The Ottawa telehealth project.
Cheung, S T; Davies, R F; Smith, K; Marsh, R; Sherrard, H; Keon, W J
1998-01-01
To examine the telehealth system as a means of improving access to cardiac consultations and specialized health services in remote areas of Ontario. The University of Ottawa Heart Institute has set up a telehealth test program, Healthcare and Education Access for Remote Residents by Telecommunications (HEARRT), in collaboration with industry and the provincial and federal government, as well as several remote clinical test sites. The program makes off-site cardiology consultations possible. History taking and physical examinations are conducted by video and electronic stethoscope. Laboratory results and echocardiograms are transmitted by document camera and VCR. The technology is being tested in both stable outpatient and emergency situations. Various telecommunications bandwidths and encoding systems are being evaluated, including satellite and terrestrial-based asynchronous transfer-mode circuits. Patient satisfaction and cost-effectiveness are also being assessed. Bandwidths from as low as 384 kbps using H.320 encoders to 40 Mbps using digital transport of NTSC video signals have been evaluated. Although lower bandwidths are sufficient for sending echocardiographic and electrocardiogram data, bandwidths with transport speeds of 4 to 6 Mbps appear necessary to capture the nuances of the cardiac physical examination. A preliminary satisfaction survey of 19 patients noted that all felt that they could communicate effectively with the cardiologist by video, and each had confidence in the advice offered. None reported that he or she would rather have traveled to the doctor in person. Initial and projected examination of the costs suggested that telehealth will effectively reduce overall health care spending while decreasing travel expenses for rural patients. Telehealth technology is sufficiently sophisticated to allow off-site cardiology assessments. 
Preliminary results suggest there is a sound business case for the implementation of telehealth technology to meet the needs of remote residents in northern Ontario. Working closely with government and industry, we will develop a marketing and commercialization plan to support the use of this technology throughout Ontario and expand application to patient education and continuing medical education.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
Coordinated traffic incident management using the I-Net embedded sensor architecture
NASA Astrophysics Data System (ADS)
Dudziak, Martin J.
1999-01-01
The I-Net intelligent embedded sensor architecture enables the reconfigurable construction of wide-area remote sensing and data collection networks employing diverse processing and data acquisition modules communicating over thin-server/thin-client protocols. Adapted initially for operation on mobile remotely piloted vehicle (RPV) platforms such as the Hornet and Ascend-I small helicopter robots, the I-Net architecture lends itself to a critical problem in the management of both spontaneous and planned traffic congestion and rerouting over major interstate thoroughfares such as the I-95 Corridor. Pre-programmed flight plans and ad hoc operator-assisted navigation of the lightweight helicopter, using an autopilot and gyroscopic stabilization augmentation units, allow daytime or nighttime over-the-horizon flights of the unit to collect and transmit real-time video imagery that may be stored or transmitted to other locations. With on-board GPS and ground-based pattern recognition capabilities to augment the standard video collection process, this approach enables traffic management and emergency response teams to plan and assist in real time in the adjustment of traffic flows in high-density or congested areas or during dangerous road conditions such as ice, snow, and hurricanes. The I-Net architecture allows for integration of land-based and roadside sensors within a comprehensive automated traffic management system, with communications to and from an airborne or other platform to devices in the network other than human-operated desktop computers, thereby allowing more rapid assimilation and response for critical data. Experiments have been conducted using several modified platforms and standard video and still photographic equipment.
Current research and development is focused upon modification of the modular instrumentation units in order to accommodate faster loading and reloading of equipment onto the RPV, extension of the I-Net architecture to enable RPV-to-RPV signaling and control, and refinement of safety and emergency mechanisms to handle RPV mechanical failure during flight.
System and method for image registration of multiple video streams
Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton
2018-02-06
Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
NASA Astrophysics Data System (ADS)
Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang
2007-12-01
In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test sample changes only slightly while the visual quality of the objects of interest increases evidently.
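ROI-weighted bit allocation of the kind described is commonly realized by lowering the quantization parameter (QP) for macroblocks inside the ROI, since a lower QP means finer quantization and thus more bits. A minimal sketch, assuming a per-macroblock boolean ROI mask; the function name and the offset value are illustrative, not taken from the paper:

```python
def assign_qp(roi_mask, base_qp=30, roi_offset=-6, qp_min=0, qp_max=51):
    """Build a per-macroblock QP map from a boolean ROI mask.

    Macroblocks inside the ROI get `base_qp + roi_offset` (more bits);
    all values are clamped to H.264's legal 0-51 QP range.
    """
    return [[max(qp_min, min(qp_max, base_qp + (roi_offset if in_roi else 0)))
             for in_roi in row]
            for row in roi_mask]
```

A real encoder would feed such a map into its rate control loop so that the overall bitrate target is still met; here the map alone shows the ROI/non-ROI quality split.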
Integrated instrumentation & computation environment for GRACE
NASA Astrophysics Data System (ADS)
Dhekne, P. S.
2002-03-01
The project GRACE (Gamma Ray Astrophysics with Coordinated Experiments) aims at setting up a state-of-the-art gamma-ray observatory at Mt. Abu, Rajasthan for undertaking comprehensive scientific exploration over a wide spectral window (10's of keV to 100's of TeV) from a single location through 4 coordinated experiments. The cumulative data collection rate of all the telescopes is expected to be about 1 GB/hr, necessitating innovations in the data management environment. The real-time data acquisition and control as well as the off-line data processing, analysis and visualization environments of these systems are based on the use of cutting-edge and affordable technologies in the field of computers, communications and the Internet. We propose to provide a single, unified environment by seamless integration of instrumentation and computation, taking advantage of recent advancements in Web-based technologies. This new environment will allow researchers better access to facilities, improve resource utilization and enhance collaborations by having identical environments for online as well as offline usage of this facility from any location. We present here a proposed implementation strategy for a platform-independent web-based system that supplements automated functions with video-guided interactive and collaborative remote viewing, remote control through a virtual instrumentation console, remote acquisition of telescope data, data analysis, data visualization and an active imaging system. This end-to-end web-based solution will enhance collaboration among researchers at the national and international level for undertaking scientific studies using the telescope systems of the GRACE project.
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip™, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera operates within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multitasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 W. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent applications in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.
Video-based eye tracking for neuropsychiatric assessment.
Adhikari, Sam; Stark, David E
2017-01-01
This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
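The radial and tangential variance features mentioned above can be computed by decomposing each gaze sample's error relative to the moving target on the circular visual-tracking paradigm. The sketch below is illustrative only: the pairing of gaze samples with target angles, the population-variance formula, and the arc-length convention for tangential error are assumptions, and the paper's exact feature definitions may differ.

```python
import math

def _pvariance(values):
    # population variance (no Bessel correction)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def radial_tangential_variance(samples, center, radius):
    """Radial and tangential variance of gaze error on a circular-tracking task.

    `samples` is a list of ((gaze_x, gaze_y), target_angle) pairs collected
    while the subject tracks a dot moving on a circle of the given
    center and radius.
    """
    radial, tangential = [], []
    for (x, y), target_angle in samples:
        dx, dy = x - center[0], y - center[1]
        # error along the radius: distance of the gaze from the circle
        radial.append(math.hypot(dx, dy) - radius)
        # error along the circle: angular lead/lag, expressed as arc length
        diff = math.atan2(dy, dx) - target_angle
        diff = math.atan2(math.sin(diff), math.cos(diff))  # wrap to [-pi, pi]
        tangential.append(radius * diff)
    return _pvariance(radial), _pvariance(tangential)
```

Perfect tracking yields zero variance on both axes; jitter toward/away from the center inflates the radial term, while lead/lag along the path inflates the tangential term.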
Color infrared video mapping of upland and wetland communities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackey, H.E. Jr.; Jensen, J.R.; Hodgson, M.E.
1987-01-01
Color infrared images were obtained using a video remote sensing system at 3000 and 5000 feet over a variety of terrestrial and wetland sites on the Savannah River Plant near Aiken, SC. The terrestrial sites ranged from secondary successional old field areas to even-aged pine stands treated with varying levels of sewage sludge. The wetland sites ranged from marsh and macrophyte areas to mature cypress-tupelo swamp forests. The video data were collected in three spectral channels, 0.5-0.6 μm, 0.6-0.7 μm, and 0.7-1.1 μm, at a 12.5 mm focal length. The data were converted to digital form and processed with standard techniques. Comparisons of the video images were made with aircraft multispectral scanner (MSS) data collected previously from the same sites. The analyses of the video data indicated that this technique may present a low-cost alternative for evaluation of vegetation and landcover types for environmental monitoring and assessment.
Tele-Assessment of the Berg Balance Scale: Effects of Transmission Characteristics.
Venkataraman, Kavita; Morgan, Michelle; Amis, Kristopher A; Landerman, Lawrence R; Koh, Gerald C; Caves, Kevin; Hoenig, Helen
2017-04-01
To compare Berg Balance Scale (BBS) rating using videos with differing transmission characteristics with direct in-person rating. Repeated-measures study for the assessment of the BBS in 8 configurations: in person, high-definition video with slow motion review, and standard-definition videos with varying bandwidths and frame rates (768 kilobits per second [kbps] videos at 8, 15, and 30 frames per second [fps]; 30 fps videos at 128, 384, and 768 kbps). Medical center. Patients with limitations (N=45) in ≥1 of 3 specific aspects of motor function: fine motor coordination, gross motor coordination, and gait and balance. Not applicable. Ability to rate the BBS in person and using videos with differing bandwidths and frame rates in frontal and lateral views. Compared with in-person rating (7%), 18% (P=.29) of high-definition videos and 37% (P=.03) of standard-definition videos could not be rated. Interrater reliability for the high-definition videos was .96 (95% confidence interval, .94-.97). Rating failure proportions increased from 20% in videos with the highest bandwidth to 60% (P<.001) in videos with the lowest bandwidth, with no significant differences in proportions across frame rate categories. Both frontal and lateral views were critical for successful rating using videos, with 60% to 70% (P<.001) of videos unable to be rated on a single view. Although there is some loss of information when using videos to rate the BBS compared with in-person rating, it is feasible to reliably rate the BBS remotely in standard clinical spaces. However, optimal video rating requires frontal and lateral views for each assessment, high-definition video with high bandwidth, and the ability to carry out slow motion review. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
SIRSALE: integrated video database management tools
NASA Astrophysics Data System (ADS)
Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.
2002-07-01
Video databases became an active field of research during the last decade. The main objective in such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time-sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be manually annotated. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology. We also present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news video and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.
SRNL Tagging and Tracking Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
SRNL has developed a next-generation satellite-based tracking system. The tagging and tracking system can work in remote wilderness areas, inside buildings, underground, and in other areas not well served by traditional GPS. It is a direct response to customer needs and market demand.
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
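The "simple analytic geometry" that turns a touchscreen click into a 3D position can be illustrated with a pinhole back-projection onto the floor plane, assuming the camera's focal lengths, principal point, mounting height, and downward tilt are known from calibration. This is a sketch of the general single-camera technique, not the paper's exact derivation; the axis conventions and parameter names are assumptions.

```python
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, tilt_deg):
    """Back-project pixel (u, v) onto the floor plane.

    Camera frame convention: x right, y down, z forward; the camera is
    tilted down by `tilt_deg` and mounted `cam_height` above the floor.
    Returns (lateral, forward) floor coordinates in the same units as
    `cam_height`, or None if the ray does not intersect the floor.
    """
    # normalized ray through the pixel (pinhole model)
    rx = (u - cx) / fx
    ry = (v - cy) / fy
    t = math.radians(tilt_deg)
    # rotate the ray into world coordinates (tilt about the x-axis)
    dy = ry * math.cos(t) + math.sin(t)      # downward component
    dz = -ry * math.sin(t) + math.cos(t)     # forward component
    if dy <= 0:
        return None                          # ray points at or above the horizon
    s = cam_height / dy                      # scale factor to reach the floor
    return (s * rx, s * dz)
```

With the camera tilted 45° and mounted 1 m high, a click at the principal point lands on the floor 1 m ahead of the camera, which matches the geometry: the optical axis meets the floor at forward distance height/tan(tilt).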
Two-dimensional thermal video analysis of offshore bird and bat flight
Matzner, Shari; Cullinan, Valerie I.; Duberstein, Corey A.
2015-09-11
Thermal infrared video can provide essential information about bird and bat presence and activity for risk assessment studies, but the analysis of recorded video can be time-consuming and may not extract all of the available information. Automated processing makes continuous monitoring over extended periods of time feasible, and maximizes the information provided by video. This is especially important for collecting data in remote locations that are difficult for human observers to access, such as proposed offshore wind turbine sites. We present guidelines for selecting an appropriate thermal camera based on environmental conditions and the physical characteristics of the target animals. We developed new video image processing algorithms that automate the extraction of bird and bat flight tracks from thermal video, and that characterize the extracted tracks to support animal identification and behavior inference. The algorithms use a video peak store process followed by background masking and perceptual grouping to extract flight tracks. The extracted tracks are automatically quantified in terms that could then be used to infer animal type and possibly behavior. The developed automated processing generates results that are reproducible and verifiable, and reduces the total amount of video data that must be retained and reviewed by human experts. Finally, we suggest models for interpreting thermal imaging information.
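The video-peak-store step followed by background masking can be sketched as follows: the per-pixel maximum over a clip preserves the full track a warm target traces across the frame, and comparing it against a background estimate yields a binary track mask. The per-pixel minimum used as the background model here is an assumption for illustration; the authors' actual background model and perceptual-grouping stage are not reproduced.

```python
def peak_store(frames):
    """Per-pixel maximum across frames: a warm moving target leaves its track."""
    peak = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for r, row in enumerate(frame):
            for c, v in enumerate(row):
                if v > peak[r][c]:
                    peak[r][c] = v
    return peak

def background(frames):
    """Per-pixel minimum as a crude background estimate (illustrative choice)."""
    bg = [row[:] for row in frames[0]]
    for frame in frames[1:]:
        for r, row in enumerate(frame):
            for c, v in enumerate(row):
                if v < bg[r][c]:
                    bg[r][c] = v
    return bg

def track_mask(frames, margin=5):
    """Pixels whose peak exceeds the background by more than `margin`
    are flagged as belonging to a flight track."""
    peak, bg = peak_store(frames), background(frames)
    return [[p - b > margin for p, b in zip(prow, brow)]
            for prow, brow in zip(peak, bg)]
```

A subsequent grouping pass would then connect the masked pixels into individual tracks for quantification.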
Cost effective Internet access and video conferencing for a community cancer network.
London, J. W.; Morton, D. E.; Marinucci, D.; Catalano, R.; Comis, R. L.
1995-01-01
Utilizing the ubiquitous personal computer as a platform, and Integrated Services Digital Network (ISDN) communications, cost effective medical information access and consultation can be provided for physicians at geographically remote sites. Two modes of access are provided: information retrieval via the Internet, and medical consultation video conferencing. Internet access provides general medical information such as current treatment options, literature citations, and active clinical trials. During video consultations, radiographic and pathology images, and medical text reports (e.g., history and physical, pathology, radiology, clinical laboratory reports), may be viewed and simultaneously annotated by either video conference participant. Both information access modes have been employed by physicians at community hospitals which are members of the Jefferson Cancer Network, and oncologists at Thomas Jefferson University Hospital. This project has demonstrated the potential cost effectiveness and benefits of this technology. PMID:8563397
A highly sensitive underwater video system for use in turbid aquaculture ponds.
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C
2016-08-24
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
PsychVACS: a system for asynchronous telepsychiatry.
Odor, Alberto; Yellowlees, Peter; Hilty, Donald; Parish, Michelle Burke; Nafiz, Najia; Iosif, Ana-Maria
2011-05-01
To describe the technical development of an asynchronous telepsychiatry application, the Psychiatric Video Archiving and Communication System. A client-server application was developed in Visual Basic.Net with a Microsoft® SQL database as the backend. It includes the capability of storing video-recorded psychiatric interviews and manages the workflow of the system with automated messaging. Psychiatric Video Archiving and Communication System has been used to conduct the first-ever series of asynchronous telepsychiatry consultations worldwide. A review of the software application and the process as part of this project has led to a number of improvements that are being implemented in the next version, which is being written in Java. This is the first description of the use of video recorded data in an asynchronous telemedicine application. Primary care providers and consulting psychiatrists have found it easy to work with and a valuable resource to increase the availability of psychiatric consultation in remote rural locations.
ATLAS Live: Collaborative Information Streams
NASA Astrophysics Data System (ADS)
Goldfarb, Steven; ATLAS Collaboration
2011-12-01
I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.
Rapid Characterization of Shorelines using a Georeferenced Video Mapping System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Michael G.; Judd, Chaeli; Marcoe, K.
Increased understanding of shoreline conditions is needed, yet current approaches are limited in ability to characterize remote areas or document features at a finer resolution. Documentation using video mapping may provide a rapid and repeatable method for assessing the current state of the environment and determining changes to the shoreline over time. In this study, we compare two studies using boat-based, georeferenced video mapping in coastal Washington and the Columbia River Estuary to map and characterize coastal stressors and functional data. In both areas, mapping multiple features along the shoreline required approximation of the coastline. However, characterization of vertically oriented features such as shoreline armoring and small features such as pilings and large woody debris was possible. In addition, end users noted that geovideo provides a permanent record to allow a user to examine recorded video anywhere along a transect or at discrete points.
Virtual displays for 360-degree video
NASA Astrophysics Data System (ADS)
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
2012-03-01
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
Wireless Augmented Reality Communication System
NASA Technical Reports Server (NTRS)
Agan, Martin (Inventor); Devereaux, Ann (Inventor); Jedrey, Thomas (Inventor)
2015-01-01
A portable unit is for video communication to select a user name in a user name network. A transceiver wirelessly accesses a communication network through a wireless connection to a general purpose node coupled to the communication network. A user interface can receive user input to log on to a user name network through the communication network. The user name network has a plurality of user names, at least one of the plurality of user names is associated with a remote portable unit, logged on to the user name network and available for video communication.
Videotex and Education: A Review of British Developments.
ERIC Educational Resources Information Center
Real, Michael R.
Defining videotex, viewdata, teletext, and their cognates as systems that transmit computerized pages of information for remote display (on a television screen, variously integrating computers and video, broadcasting, telephone, typewriter, and related technologies), this report explores educational and related applications of videotex…
Eye Can See for Miles and Miles.
ERIC Educational Resources Information Center
School Planning & Management, 2002
2002-01-01
Describes how a New Hampshire school system eliminated internal school vandalism and bomb threats, and reduced the number of false alarms, by using video security software (WebEyeAlert security solution) that is accessible via a variety of methods from remote locations. (Author/EV)
SRNL Tagging and Tracking Video
None
2018-01-16
SRNL has developed a next-generation, satellite-based tracking system. The tagging and tracking system can work in remote wilderness areas, inside buildings, underground, and in other areas not well served by traditional GPS. It's a perfect response to customer needs and market demand.
Development of a short course in transportation planning for electronic delivery to DOTD.
DOT National Transportation Integrated Search
2000-12-01
This report describes the preparation, delivery, and evaluation of a short course developed for delivery through compressed video to remote sites. The course covered basics of travel demand models and involved approximately 28 hours of classroom cont...
NASA Technical Reports Server (NTRS)
Robbins, Woodrow E. (Editor); Fisher, Scott S. (Editor)
1989-01-01
Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed was a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems.
DAVE: A plug and play model for distributed multimedia application development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mines, R.F.; Friesen, J.A.; Yang, C.L.
1994-07-01
This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple "plug and play" programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.
Overview of the Telescience Testbed Program
NASA Technical Reports Server (NTRS)
Rasmussen, Daryl N.; Mian, Arshad; Leiner, Barry M.
1991-01-01
The NASA's Telescience Testbed Program (TTP) conducted by the Ames Research Center is described with particular attention to the objectives, the approach used to achieve these objectives, and the expected benefits of the program. The goal of the TTP is to gain operational experience for the Space Station Freedom and the Earth Observing System programs, using ground testbeds, and to define the information and communication systems requirements for the development and operation of these programs. The results of TTP are expected to include the requirements for the remote coaching, command and control, monitoring and maintenance, payload design, and operations management. In addition, requirements for technologies such as workstations, software, video, automation, data management, and networking will be defined.
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
Classical Mechanics Experiments using Wiimotes
NASA Astrophysics Data System (ADS)
Lopez, Alexander; Ochoa, Romulo
2010-02-01
The Wii, a video game console, is a very popular device. Although computationally it is not a powerful machine by today's standards, to a physics educator the controllers are its most important components. The Wiimote (or remote) controller contains a three-axis accelerometer, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, such as GlovePie, any PC or laptop with Bluetooth capability can detect the information sent out by the Wiimote. We present experiments that use two or three Wiimotes simultaneously to measure the variable accelerations in two mass systems interacting via springs. Normal modes are determined from the data obtained. Masses and spring constants are varied to analyze their impact on the accelerations of the systems. We present the results of our experiments and compare them with those predicted using Lagrangian mechanics.
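The Lagrangian prediction for a coupled two-mass system reduces to an eigenvalue problem; a minimal numerical sketch for the wall-spring-mass-spring-mass-spring-wall configuration (an assumed geometry, since the abstract does not specify the apparatus) is:

```python
import numpy as np

def normal_mode_frequencies(m1, m2, k1, k2, k3):
    """Angular frequencies of the two normal modes of a
    wall--k1--m1--k2--m2--k3--wall system.

    The equations of motion M x'' = -K x give mode frequencies
    omega = sqrt(eigenvalues of M^-1 K)."""
    M = np.diag([m1, m2])
    K = np.array([[k1 + k2, -k2],
                  [-k2, k2 + k3]], dtype=float)
    evals = np.linalg.eigvals(np.linalg.inv(M) @ K)
    return np.sort(np.sqrt(evals.real))
```

For equal masses and springs (m = k = 1 in consistent units), this yields the textbook in-phase and out-of-phase modes at ω = 1 and ω = √3.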
Mobile tele-echography: user interface design.
Cañero, Cristina; Thomos, Nikolaos; Triantafyllidis, George A; Litos, George C; Strintzis, Michael Gerassimos
2005-03-01
Ultrasound imaging allows the evaluation of the degree of emergency of a patient. However, in some instances, a well-trained sonographer is unavailable to perform such echography. To cope with this issue, the Mobile Tele-Echography Using an Ultralight Robot (OTELO) project aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight remote-controlled robot for population groups that are not served locally by medical experts. This paper focuses on the user interface of the OTELO system, consisting of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference to communicate with the paramedical assistant and with the patient, and a virtual-reality environment, providing visual and haptic feedback to the expert, while capturing the expert's hand movements. These movements are reproduced by the robot at the patient site while holding the ultrasound probe against the patient skin. In addition, the user interface includes an image processing facility for enhancing the received images and the possibility to include them into a database.
eComLab: remote laboratory platform
NASA Astrophysics Data System (ADS)
Pontual, Murillo; Melkonyan, Arsen; Gampe, Andreas; Huang, Grant; Akopian, David
2011-06-01
Hands-on experiments with electronic devices have been recognized as an important element in the field of engineering, helping students become familiar with theoretical concepts and practical tasks. The continuing increase in student numbers, costly laboratory equipment, and laboratory maintenance reduce the efficiency of physical labs. As information technology continues to evolve, the Internet has become a common medium in modern education. An Internet-based remote laboratory can remove many of these restrictions, providing hands-on training that is flexible in time and allows the same equipment to be shared among different students. This article describes an on-going remote hands-on experimental radio modulation, network, and mobile applications lab project, "eComLab". Its main component is a remote laboratory infrastructure and server management system featuring various online media familiar to modern students, such as chat rooms and video streaming.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target.
Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
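The frame-differencing, centroid, and disparity steps described in the patent abstract can be sketched as follows; the function signature, the intensity-weighted centroid, and the stereo-style triangulation formula are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def laser_spot_range(frame_off, frame_on, ref_x, focal_px, baseline):
    """Estimate range from the image disparity of a laser spot.

    frame_off / frame_on: grayscale frames without and with the laser on.
    ref_x: x-coordinate (pixels) where the spot would fall at infinite range.
    focal_px: focal length in pixels; baseline: camera-laser offset (m).
    """
    # Subtract the unlit frame, leaving (ideally) only the laser spot
    diff = frame_on.astype(np.int32) - frame_off.astype(np.int32)
    diff[diff < 0] = 0
    ys, xs = np.nonzero(diff)
    # Intensity-weighted centroid of the residual spot
    cx = np.average(xs, weights=diff[ys, xs])
    disparity = abs(cx - ref_x)
    # Triangulation, as in a stereo pair with one camera replaced by a laser
    return focal_px * baseline / disparity
```

With a 500-pixel focal length and a 10 cm camera-laser baseline, a 5-pixel disparity corresponds to a 10 m range.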
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head mounted displays is one likely implementation; the use of 3D projection technologies is another under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
NASA Astrophysics Data System (ADS)
Choe, Giseok; Nang, Jongho
The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using some shared applications whose outputs are displayed on a large-scale and high-resolution tiled-display, which is controlled by a cluster of PC's, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real-time. This paper presents a capturing and delivering mechanism of all activities on the tiled-display system to remote participants in real-time. In the proposed mechanism, the screen images of all PC's are periodically captured and delivered to the Merging Server that maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screen shot of the whole tiled-display, clips a Region of Interest (ROI), compresses and streams it to remote participants in real-time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging so that the tile images displayed at the same time on the tiled-display can be properly merged together. This paper presents three selection algorithms; a sequential selection algorithm, a capturing time based algorithm, and a capturing time and visual consistency based algorithm. It also proposes a mechanism of providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROI's from the same merged tiled-display images, and delivering them after compressing with video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can check the activities on the tiled-display effectively.
Experiments on a 3 × 2 tiled-display system show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views on the tiled-display system to multiple remote participants in real-time.
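The capturing-time-based selection named above can be sketched in a few lines: from each tile's buffer of timestamped frames, pick the frame captured closest to a common target time, so that tiles shown together were grabbed together. The data layout here is an assumption, since the paper's buffer structure is not given in the abstract:

```python
def select_tiles(buffers, target_time):
    """Capturing-time-based selection for a tiled display.

    buffers: one list per tile of (timestamp, image) pairs.
    Returns one image per tile, each the frame whose capture time
    is nearest to target_time."""
    return [min(buf, key=lambda ti: abs(ti[0] - target_time))[1]
            for buf in buffers]
```

The visual-consistency variant mentioned in the abstract would additionally compare image content across tile seams before committing to a selection.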
A streaming-based solution for remote visualization of 3D graphics on mobile devices.
Lamberti, Fabrizio; Sanna, Andrea
2007-01-01
Mobile devices such as Personal Digital Assistants, Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications are now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, Personal Digital Assistants (PDAs), and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients computing a video stream for each one; resolution and quality of each stream is tailored according to screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed.
A Comparative Survey of Methods for Remote Heart Rate Detection From Frontal Face Videos
Wang, Chen; Pun, Thierry; Chanel, Guillaume
2018-01-01
Remotely measuring physiological activity can provide substantial benefits for both medical and affective computing applications. Recent research has proposed different methodologies for the unobtrusive detection of heart rate (HR) using human face recordings. These methods are based on subtle color changes or motions of the face due to cardiovascular activity, which are invisible to human eyes but can be captured by digital cameras. Several approaches have been proposed, based on signal processing and machine learning. However, these methods have been evaluated on different datasets, and there is consequently no consensus on method performance. In this article, we describe and evaluate several methods defined in the literature, from 2008 until the present day, for the remote detection of HR using human face recordings. The general HR processing pipeline is divided into three stages: face video processing, face blood volume pulse (BVP) signal extraction, and HR computation. Approaches presented in the paper are classified and grouped according to each stage. At each stage, algorithms are analyzed and compared based on their performance using the public database MAHNOB-HCI. Results found in this article are limited to the MAHNOB-HCI dataset. Results show that the extracted face skin area contains more BVP information. Blind source separation and peak detection methods are more robust to head motions when estimating HR. PMID:29765940
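A minimal version of the three-stage pipeline can be sketched for the simplest color-based case: average the green channel over the face region per frame, then take the dominant frequency in the physiological band. This is a toy baseline for illustration, not one of the surveyed methods; real systems add skin segmentation, detrending, and blind source separation:

```python
import numpy as np

def heart_rate_from_green(green_means, fps):
    """Estimate heart rate (bpm) from the per-frame mean green-channel
    value of a face region: remove the mean, then pick the dominant
    frequency in the 0.7-4.0 Hz band (42-240 bpm) via FFT."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return freqs[band][np.argmax(power[band])] * 60.0
```

On a clean synthetic 1.2 Hz pulse sampled at 30 fps, this recovers 72 bpm; the frequency resolution is fps divided by the number of frames, so longer windows give finer HR estimates.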
Telehealth: Implications for Social Work.
ERIC Educational Resources Information Center
McCarty, Dawn; Clancy, Catherine
2002-01-01
The use of modern information technology to deliver health services to remote locations presents both opportunities and problems for social workers. This article examines how communication technology such as e-mail and video conferencing affect social work practice. Issues are raised about the ethical, legal, and client relationship…
The Videoconferencing Classroom: What Do Students Think?
ERIC Educational Resources Information Center
Doggett, A. Mark
2007-01-01
The advantages of video conferencing in educational institutions are well documented. Scholarly literature has indicated that videoconferencing technology reduces time and costs between remote locations, fills gaps in teaching services, increases training productivity, enables meetings that would not be possible due to prohibitive travel costs, and…
Virtual Reality Calibration for Telerobotic Servicing
NASA Technical Reports Server (NTRS)
Kim, W.
1994-01-01
A virtual reality calibration technique of matching a virtual environment of simulated graphics models in 3-D geometry and perspective with actual camera views of the remote site task environment has been developed to enable high-fidelity preview/predictive displays with calibrated graphics overlay on live video.
Autonomous stair-climbing with miniature jumping robots.
Stoeter, Sascha A; Papanikolopoulos, Nikolaos
2005-04-01
The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.
NASA Astrophysics Data System (ADS)
Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an
2017-09-01
High-speed cameras provide full field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, and this method provides insights into the study of remote sound surveillance and material analysis. An efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera is initially applied to capture a video of the vibrating objects stimulated by sound fluctuations. Then, subimages collected from a small region on the captured video are reshaped into vectors and reconstructed to form a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of the matrix; available vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test is initiated to validate the effectiveness and efficiency of the proposed method. Two experiments are conducted to demonstrate the potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
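The SVD stage described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, the choice of which orthonormal image basis carries the vibration is an assumption (here the dominant one after mean removal).

```python
import numpy as np

def vibration_signal(frames, basis_index=0):
    """Recover a 1-D motion signal from a stack of small sub-images.

    frames: array of shape (n_frames, h, w) cropped from a fixed region
    of the high-speed video. Each frame is flattened into a vector; the
    SVD of the resulting matrix yields orthonormal image bases (OIBs),
    and projecting every frame onto one basis gives the signal.
    """
    n, h, w = frames.shape
    A = frames.reshape(n, h * w).astype(float)
    A -= A.mean(axis=0)  # remove the static background component
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Rows of Vt are the orthonormal image bases; project onto one.
    oib = Vt[basis_index]
    return A @ oib
```

In the paper's setting the OIB is chosen from an initial segment of the video and subsequent sub-images are projected onto it; here the whole stack is decomposed at once for brevity.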
Computer-aided video exposure monitoring.
Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J
2000-01-01
A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, portable video cassette recorder, radio-telemetry transmitter/receiver, and handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments--monitoring tetrachloroethylene in the air and in breath; printing works--monitoring white spirit type solvent; tire manufacturing factory--monitoring rubber fume; and a slate quarry--monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure by engineering or by modifying work practice were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to test ventilation configurations quickly to see their effect on the worker's personal exposure.
Stephenson, Rob; Metheny, Nicholas; Sharma, Akshay; Sullivan, Stephen; Riley, Erin
2017-11-28
Transgender and gender nonconforming people experience some of the highest human immunodeficiency virus (HIV) rates in the United States, and experience many structural and behavioral barriers that may limit their engagement in HIV testing, prevention, and care. Evidence suggests that transgender and gender nonconforming youth (TY) are especially vulnerable to acquiring HIV, yet there is little research on TY and few services are targeted towards HIV testing, prevention, and care for this population. Telehealth presents an opportunity to mitigate some structural barriers that TY experience in accessing HIV testing, allowing TY to engage in HIV testing and counseling in a safe and nonjudgmental space of their choosing. Project Moxie is an HIV prevention intervention that pairs the use of HIV self-testing with remote video-based counseling and support from a trained, gender-affirming counselor. This study aims to offer a more positive HIV testing and counseling experience, with the goal of improving HIV testing frequency. Project Moxie involves a pilot randomized controlled trial (RCT) of 200 TY aged 15-24 years, who are randomized on a 1:1 basis to control or intervention arms. The aim is to examine whether the addition of counseling provided via telehealth, coupled with home-based HIV testing, can create gains in routine HIV testing among TY over a six-month follow-up period. This study implements a prospective pilot RCT of 200 TY recruited online. Participants in the control arm will receive one HIV self-testing kit and will be asked to report their results via the study's website. Participants in the experimental arm will receive one HIV self-testing kit and will test with a remotely-located counselor during a prescheduled video-counseling session. Participants are assessed at baseline, and at three and six months posttesting. Project Moxie was launched in June 2017 and recruitment is ongoing. 
As of August 21, 2017, the study had enrolled 130 eligible participants. Combining home-based HIV testing and video-based counseling allows TY, an often stigmatized and marginalized population, to test for HIV in a safe and nonjudgmental setting of their choosing. This approach creates an opportunity to reduce the high rate of HIV among TY through engagement in care, support, and linkage to the HIV treatment cascade for those who test positive. ClinicalTrials.gov NCT03185975; https://clinicaltrials.gov/ct2/show/NCT03185975 (Archived by WebCite at http://www.webcitation.org/6vIjHJ93s).
Remote programming of cochlear implants: a telecommunications model.
McElveen, John T; Blackburn, Erin L; Green, J Douglas; McLear, Patrick W; Thimsen, Donald J; Wilson, Blake S
2010-09-01
Evaluate the effectiveness of remote programming for cochlear implants. Retrospective review of cochlear implant performance for patients who had undergone mapping and programming of their cochlear implant via remote connection through the Internet. Postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for 7 patients who had undergone remote mapping and programming of their cochlear implant were compared with the mean scores of 7 patients who had been programmed by the same audiologist over a 12-month period. Times required for remote and direct programming were also compared. The quality of the Internet connection was assessed using standardized measures. Remote programming was performed via a virtual private network, with a separate software program used for the video and audio link. All 7 patients were programmed successfully via remote connectivity. No untoward patient experiences were encountered. No statistically significant differences could be found in comparing postoperative Hearing in Noise Test and Consonant/Nucleus/Consonant word scores for patients who had undergone remote programming versus a similar group of patients who had their cochlear implant programmed directly. Remote programming did not require a significantly longer programming time from the audiologist for these 7 patients. Remote programming of a cochlear implant can be performed safely without any deterioration in the quality of the programming. This ability to program cochlear implants remotely offers the potential to extend cochlear implantation to underserved areas in the United States and elsewhere.
Remote Classroom Observations with Preservice Teachers
ERIC Educational Resources Information Center
Wash, Pamela D.; Bradley, Gary; Beck, Judy
2014-01-01
According to O'Brien, Aguinaga, Hines, and Hartsborne (2011), "Delivery of course content via various distance education technologies (e.g., interactive video, asynchronous and/or synchronous online delivery) is becoming an accepted and expected component of many teacher preparation programs" (p. 3). With the infusion of technology in…
ERIC Educational Resources Information Center
Hough, Roger W.
Results of a recent study of potential demand for electronic information transfer services in 1970-1990 are presented. Projections are made for new services such as electronic mail, remote library browsing, checkless society transactions, video telephone and others, as well as conventional services such as telephone, telegraph and network program…
The Role of Motivational Values in the Construction of Change Messages
ERIC Educational Resources Information Center
Cardon, Peter W.; Philadelphia, Marion
2015-01-01
We examined how 106 early-career professionals constructed video change messages involving a ban on remote working. These professionals constructed three types of statements: vision statements, direct change statements, and indirect change statements. Professionals with higher assertive-directing motivational values tended to first construct…
High-resolution streaming video integrated with UGS systems
NASA Astrophysics Data System (ADS)
Rohrer, Matthew
2010-04-01
Imagery has proven to be a valuable complement to Unattended Ground Sensor (UGS) systems. It provides ultimate verification of the nature of detected targets. However, due to the power, bandwidth, and technological limitations inherent to UGS, sacrifices have been made to the imagery portion of such systems. The result is that these systems produce lower-resolution images in small quantities. Currently, a high-resolution wireless imaging system is being developed to bring megapixel streaming video to remote locations and operate in concert with UGS. This paper provides an overview of how Wi-Fi radios, new image-based digital signal processors (DSPs) running advanced target detection algorithms, and high-resolution cameras give the user an opportunity to take high-powered video imagers to areas where power conservation is a necessity.
Performance of a scanning laser line striper in outdoor lighting
NASA Astrophysics Data System (ADS)
Mertz, Christoph
2013-05-01
For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the task using the 3D scanner with using a traditional video camera.
MED31/437: A Web-based Diabetes Management System: DiabNet
Zhao, N; Roudsari, A; Carson, E
1999-01-01
Introduction: A web-based system (DiabNet) was developed to provide instant access to the Electronic Diabetes Records (EDR) for end-users, and real-time information for healthcare professionals to facilitate their decision-making. It integrates a portable glucometer, handheld computer, mobile phone and Internet access as a combined telecommunication and mobile computing solution for diabetes management. Methods: Active Server Pages (ASP) embedded with advanced ActiveX controls and VBScript were developed to allow remote data upload, retrieval and interpretation. Advisory and Internet-based learning features, together with a video teleconferencing component, make the DiabNet web site an informative platform for Web-consultation. Results: The system is being evaluated among several UK Internet diabetes discussion groups and the Diabetes Day Centre at Guy's & St. Thomas' Hospital. Much positive feedback has been received from the web site, demonstrating that DiabNet is an advanced web-based diabetes management system that can help patients keep closer control of self-monitored blood glucose remotely, and an integrated diabetes information resource that offers telemedicine knowledge in diabetes management. Discussion: In summary, DiabNet introduces innovative online diabetes management concepts, such as online appointment and consultation, enabling users to access diabetes management information without limitations of time or location and without security concerns.
NASA Astrophysics Data System (ADS)
Linder, C. A.; Wilbert, M.; Holmes, R. M.
2010-12-01
Multimedia video presentations, which integrate still photographs with video clips, audio interviews, ambient sounds, and music, are an effective and engaging way to tell science stories. In July 2009, Linder joined professors and undergraduates on an expedition to the Kolyma River in northeastern Siberia. This IPY science project, called The Polaris Project (http://www.thepolarisproject.org), is an undergraduate research experience where students and faculty work together to increase our understanding of climate change impacts, including thawing permafrost, in this remote corner of the world. During the summer field season, Linder conducted dozens of interviews, captured over 20,000 still photographs and hours of ambient audio and video clips. Following the 2009 expedition, Linder blended this massive archive of visual and audio information into a 10-minute overview video and five student vignettes. In 2010, Linder again traveled to Siberia as part of the Polaris Project, this time mentoring an environmental journalism student who will lead the production of a video about the 2010 field season. Using examples from the Polaris productions, we will present tips, tools, and techniques for creating compelling multimedia science stories.
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With users' increasing demand for the extraction of remote sensing image information, there is an urgent need to enhance the whole system's imaging quality and imaging ability through an integrated design that achieves a compact structure, light weight and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit are distributed across different devices. The volume, weight and power consumption of these two units are relatively large, which makes them unable to meet the requirements of a high-mobility remote sensing camera. Based on the technical requirements of such a camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on several technologies: high-speed, high-density analog-digital mixed PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on high-mobility remote sensing cameras.
Remote experimental site concept development
NASA Astrophysics Data System (ADS)
Casper, Thomas A.; Meyer, William; Butner, David
1995-01-01
Scientific research is now often conducted on large and expensive experiments that utilize collaborative efforts on a national or international scale to explore physics and engineering issues. This is particularly true for the current US magnetic fusion energy program where collaboration on existing facilities has increased in importance and will form the basis for future efforts. As fusion energy research approaches reactor conditions, the trend is towards fewer large and expensive experimental facilities, leaving many major institutions without local experiments. Since the expertise of various groups is a valuable resource, it is important to integrate these teams into an overall scientific program. To sustain continued involvement in experiments, scientists are now often required to travel frequently, or to move their families, to the new large facilities. This problem is common to many other different fields of scientific research. The next-generation tokamaks, such as the Tokamak Physics Experiment (TPX) or the International Thermonuclear Experimental Reactor (ITER), will operate in steady-state or long pulse mode and produce fluxes of fusion reaction products sufficient to activate the surrounding structures. As a direct consequence, remote operation requiring robotics and video monitoring will become necessary, with only brief and limited access to the vessel area allowed. Even the on-site control room, data acquisition facilities, and work areas will be remotely located from the experiment, isolated by large biological barriers, and connected with fiber-optics. Current planning for the ITER experiment includes a network of control room facilities to be located in the countries of the four major international partners; USA, Russian Federation, Japan, and the European Community.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also in everyday life for purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if the drones operate in heterogeneous areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
Experiences in teleoperation of land vehicles
NASA Technical Reports Server (NTRS)
Mcgovern, Douglas E.
1989-01-01
Teleoperation of land vehicles allows the removal of the operator from the vehicle to a remote location. This can greatly increase operator safety and comfort in applications such as security patrol or military combat. The cost includes system complexity and reduced system performance. All feedback on vehicle performance and on environmental conditions must pass through sensors, a communications channel, and displays. In particular, this requires vision to be transmitted by closed-circuit television with a consequent degradation of information content. Vehicular teleoperation, as a result, places severe demands on the operator. Teleoperated land vehicles have been built and tested by many organizations, including Sandia National Laboratories (SNL). The SNL fleet presently includes eight vehicles of varying capability. These vehicles have been operated using different types of controls, displays, and visual systems. Experiments studying the effects of vision system characteristics on off-road remote driving were performed for conditions of fixed camera versus steering-coupled camera and of color versus black-and-white video display. Additionally, much experience was gained through system demonstrations and hardware development trials. The preliminary experimental findings and the results of the accumulated operational experience are discussed.
A New Remote Health-Care System Based on Moving Robot Intended for the Elderly at Home
Zhou, Bing; Wu, Kaige; Wang, Jing; Chen, Gang; Ji, Bo; Liu, Siying
2018-01-01
Nowadays, due to the growing need for remote care and the constantly increasing popularity of mobile devices, a large number of mobile applications for remote care support have been developed. Although mobile phones are very suitable for young people, there are still many problems related to remote health care of the elderly. Due to hearing loss or limited movement, it is difficult for the elderly to contact their families or doctors via real-time video call. In this paper, we introduce a new remote health-care system based on moving robots intended for the elderly at home. Since the proposed system is an online system, the elderly can contact their families and doctors quickly, anytime and anywhere. Besides calls, our system includes accurate indoor object detection algorithms and automatic health data collection, which are not included in existing remote care systems. Therefore, the proposed system solves some challenging problems related to elderly care. Experiments have shown that the proposed care system achieves excellent performance and provides a good user experience. PMID:29599949
Automated Visual Event Detection, Tracking, and Data Management System for Cabled-Observatory Video
NASA Astrophysics Data System (ADS)
Edgington, D. R.; Cline, D. E.; Schlining, B.; Raymond, E.
2008-12-01
Ocean observatories and underwater video surveys have the potential to unlock important discoveries with new and existing camera systems. Yet the burden of video management and analysis often requires reducing the amount of video recorded through time-lapse video or similar methods. It is unknown how many digitized video data sets exist in the oceanographic community, but we suspect that many remain under-analyzed due to a lack of good tools or human resources to analyze the video. To help address this problem, the Automated Visual Event Detection (AVED) software and the Video Annotation and Reference System (VARS) have been under development at MBARI. The AVED software, developed over the last 5 years, detects interesting events in the video. AVED is based on a neuromorphic selective-attention algorithm, modeled on the human vision system. Frames are decomposed into specific feature maps that are combined into a unique saliency map. This saliency map is then scanned to determine the most salient locations. The candidate salient locations are then segmented from the scene using algorithms suitable for the low, non-uniform light and marine snow typical of deep underwater video. For managing the AVED descriptions of the video, the VARS system provides an interface and database for describing, viewing, and cataloging the video. VARS was developed by MBARI for annotating deep-sea video data and is currently being used to describe over 3000 dives by our remotely operated vehicles (ROVs), making it well suited to this deepwater observatory application with only a few modifications. To meet the compute- and data-intensive job of video processing, a distributed heterogeneous network of computers is managed using the Condor workload management system. This system manages data storage, video transcoding, and AVED processing.
Looking to the future, we see high-speed networks and Grid technology as an important element in addressing the problem of processing and accessing large video data sets.
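As a toy illustration of the saliency-map idea behind AVED: a location is salient where fine-scale image structure differs from its coarser surround. This sketch uses a single intensity feature and invented function names, whereas AVED combines many feature maps into one saliency map.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(gray):
    """Toy center-surround saliency: the absolute difference between a
    fine-scale and a coarse-scale Gaussian blur of a grayscale image,
    normalized to [0, 1]."""
    fine = gaussian_filter(gray.astype(float), sigma=1)
    coarse = gaussian_filter(gray.astype(float), sigma=8)
    sal = np.abs(fine - coarse)
    return sal / (sal.max() + 1e-9)

def most_salient_location(gray):
    """Return the (row, col) of the saliency map's peak, i.e. the first
    candidate location that would be handed to the segmentation stage."""
    sal = saliency_map(gray)
    return np.unravel_index(np.argmax(sal), sal.shape)
```

Real underwater frames would need the marine-snow-tolerant segmentation described above rather than a simple argmax.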
Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M
2011-01-01
The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using conventional two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the human eyes; this is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately provides force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench-top experiments using the interfacing system have also been conducted.
A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive force feedback through the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.
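The depth cue that a two-camera rig like this exploits follows the classic pinhole stereo relation Z = f·B/d. A minimal sketch, with illustrative parameter names (a real rig also needs calibration and image rectification before disparities are meaningful):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: depth Z = f * B / d.

    disparity_px: horizontal pixel offset of a feature between the
    left and right camera images; focal_px: focal length in pixels;
    baseline_m: separation between the two cameras in metres.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

The inverse relationship explains the design choice of a "calculated angle and distance apart": a wider baseline yields larger disparities and finer depth resolution at surgical working distances.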
Scheduling for anesthesia at geographic locations remote from the operating room.
Dexter, Franklin; Wachtel, Ruth E
2014-08-01
Providing general anesthesia at locations away from the operating room, called remote locations, poses many medical and scheduling challenges. This review discusses how to schedule procedures at remote locations to maximize anesthesia productivity (see Video, Supplemental Digital Content 1). Anesthesia labour productivity can be maximized by assigning one or more 8-h or 10-h periods of allocated time every 2 weeks dedicated specifically to each remote specialty that has enough cases to fill those periods. Remote specialties can then schedule their cases themselves into their own allocated time. Periods of allocated time (called open, unblocked or first come first served time) can be used by remote locations that do not have their own allocated time. Unless cases are scheduled sequentially into allocated time, there will be substantial extra underutilized time (time during which procedures are not being performed and personnel sit idle even though staffing has been planned) and a concomitant reduction in percent productivity. Allocated time should be calculated on the basis of usage. Remote locations with sufficient hours of cases should be allocated time reserved especially for them in which to schedule their cases, with a maximum waiting time of 2 weeks, to achieve an average wait of 1 week.
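The productivity arithmetic behind these recommendations can be sketched as follows. This is a toy model, not the authors' formulation: it ignores turnover times and assumes cases never run past the allocated period.

```python
def underutilized_hours(allocated_hours, case_hours):
    """Underutilized time: staffed hours in an allocated period that
    are not filled by scheduled cases."""
    used = min(sum(case_hours), allocated_hours)
    return allocated_hours - used

def percent_productivity(allocated_hours, case_hours):
    """Share of the allocated (staffed) time actually filled by cases."""
    used = min(sum(case_hours), allocated_hours)
    return 100.0 * used / allocated_hours
```

For example, an 8-h allocated period holding two cases of 2.5 h and 3.0 h leaves 2.5 underutilized hours; scheduling cases sequentially into dedicated allocated time is what keeps this remainder small.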
Enhancing Online Education Using Collaboration Solutions
ERIC Educational Resources Information Center
Ge, Shuzhi Sam; Tok, Meng Yong
2003-01-01
With the advances in Internet technologies, online education is fast gaining ground as an extension to traditional education. Webcast allows lectures conducted on campus to be viewed by students located at remote sites by streaming the audio and video content over Internet Protocol (IP) networks. However when used alone, webcast does not provide…
Artists concept of the salvage operations offshore of KSC after STS 51-L
1986-04-01
S86-30088 (March 1986) --- Salvage operations offshore of Kennedy Space Center, are depicted in this artist’s concept showing a grapple and recovery fixture (left) being directed through the use of a remote video system suspended from the recovery ship. Photo credit: NASA
ERIC Educational Resources Information Center
Greene, Carolyn J.; Morland, Leslie A.; Macdonald, Alexandra; Frueh, B. Christopher; Grubbs, Kathleen M.; Rosen, Craig S.
2010-01-01
Objective: Video teleconferencing (VTC) is used for mental health treatment delivery to geographically remote, underserved populations. However, few studies have examined how VTC affects individual or group psychotherapy processes. This study compares process variables such as therapeutic alliance and attrition among participants receiving anger…
Virtual Labs vs. Remote Labs: Between Myth & Reality.
ERIC Educational Resources Information Center
Alhalabi, Bassem; Hamza, M. Khalid; Hsu, Sam; Romance, Nancy
Many United States institutions of higher education have established Web-based educational environments that provide higher education curricula via the Internet and diverse modalities. Success has been limited primarily to virtual classrooms (real audio/video transmission) and/or test taking (online form filing). An extensive survey was carried…
Man-Machine Communication Through a Teletypewriter.
ERIC Educational Resources Information Center
Rubinoff, Morris
A ten-year research study designed a mechanized information system for the information processing field. Special attention was paid to the implementation criteria for on-line retrieval through man-machine dialog from a remote typewriter or video terminal, and four major areas were investigated: search strategies, machine-stored indexer aids,…
Teletoxicology: Patient Assessment Using Wearable Audiovisual Streaming Technology.
Skolnik, Aaron B; Chai, Peter R; Dameff, Christian; Gerkin, Richard; Monas, Jessica; Padilla-Jones, Angela; Curry, Steven
2016-12-01
Audiovisual streaming technologies allow detailed remote patient assessment and have been suggested to change management and enhance triage. The advent of wearable, head-mounted devices (HMDs) permits advanced teletoxicology at a relatively low cost. A previously published pilot study supports the feasibility of using the HMD Google Glass® (Google Inc.; Mountain View, CA) for teletoxicology consultation. This study examines the reliability, accuracy, and precision of the poisoned patient assessment when performed remotely via Google Glass®. A prospective observational cohort study was performed on 50 patients admitted to a tertiary care center inpatient toxicology service. Toxicology fellows wore Google Glass® and transmitted secure, real-time video and audio of the initial physical examination to a remote investigator not involved in the subject's care. High-resolution still photos of electrocardiograms (ECGs) were transmitted to the remote investigator. On-site and remote investigators recorded physical examination findings and ECG interpretation. Both investigators completed a brief survey about the acceptability and reliability of the streaming technology for each encounter. Kappa scores and simple agreement were calculated for each examination finding and electrocardiogram parameter. Reliability scores and reliability difference were calculated and compared for each encounter. Data were available for analysis of 17 categories of examination and ECG findings. Simple agreement between on-site and remote investigators ranged from 68 to 100 % (median = 94 %, IQR = 10.5). Kappa scores could be calculated for 11/17 parameters and demonstrated slight to fair agreement for two parameters and moderate to almost perfect agreement for nine parameters (median = 0.653; substantial agreement). The lowest Kappa scores were for pupil size and response to light. 
On a 100-mm visual analog scale (VAS), mean comfort level was 93 and mean reliability rating was 89 for on-site investigators. For remote users, the mean comfort and reliability ratings were 99 and 86, respectively. The average difference in reliability scores between on-site and remote investigators was 2.6, with the difference increasing as reliability scores decreased. Remote evaluation of poisoned patients via Google Glass® is possible with a high degree of agreement on examination findings and ECG interpretation. Evaluation of pupil size and response to light is limited, likely by the quality of streaming video. Users of Google Glass® for teletoxicology reported high levels of comfort with the technology and found it reliable, though as reported reliability decreased, remote users were most affected. Further study should compare patient-centered outcomes when using HMDs for consultation to those resulting from telephone consultation.
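The agreement statistics reported above are standard: simple agreement is the fraction of matching ratings, and Cohen's kappa corrects that fraction for agreement expected by chance. A minimal sketch for one binary examination finding, using invented ratings rather than the study's data:

```python
# Illustrative computation of simple agreement and Cohen's kappa between
# the on-site and remote investigators for one binary finding.
# The ratings below are made-up example data, not the study's.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    # chance agreement from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return p_o, (p_o - p_e) / (1 - p_e)

onsite = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
remote = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
agreement, kappa = cohens_kappa(onsite, remote)
print(agreement, round(kappa, 3))   # -> 0.8 0.583
```

Here 80 % simple agreement shrinks to a kappa of about 0.58 ("moderate" on the usual scale) once chance matches are discounted, which is why the study reports both numbers.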
Video Guidance Sensors Using Remotely Activated Targets
NASA Technical Reports Server (NTRS)
Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
2004-01-01
Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). 
In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
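The image step shared by these designs, subtracting a background frame from an illuminated-target frame so only the flashing targets remain, can be sketched with synthetic frames. Array sizes, intensities, and the threshold below are illustrative assumptions, not the VGS's actual parameters:

```python
# Minimal sketch of the illuminated-target / background subtraction step:
# subtract the dark frame from the lit frame so only the flashing targets
# remain, then report their pixel locations. Frames here are synthetic.

import numpy as np

def extract_targets(lit_frame, dark_frame, threshold=50):
    diff = lit_frame.astype(int) - dark_frame.astype(int)
    mask = diff > threshold          # pixels that brightened when targets lit
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))

background = np.full((8, 8), 20, dtype=np.uint8)   # constant scene glow
lit = background.copy()
lit[2, 3] = 255                                     # two active targets
lit[5, 6] = 255
print(extract_targets(lit, background))             # -> [(3, 2), (6, 5)]
```

Everything that did not change between the two frames (sunlight glints, scene clutter) cancels in the subtraction, which is why synchronizing the target flashes with frame capture matters so much in these designs.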
NASA Technical Reports Server (NTRS)
Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.
2007-01-01
An innovative approach is presented: the surgeon operates through a simulator running in real time, augmented with an intelligent controller component to enhance the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor, and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to achieve the objectives: simulator as predictor, image processing, and intelligent control. Each is equally necessary to the success of the project, and each involves a significant intelligent component. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed. Included will be the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth, a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video-to-graphical conversion, texture extraction, geometric processing, image compression and image generation at the surgeon station.
Intelligent control: Since the approach we propose is in a sense predictor based, albeit a very sophisticated predictor, a controller, which not only optimizes end effector trajectory but also avoids error, is essential. We propose to investigate two different approaches to the controller design. One approach employs an optimal controller based on modern control theory; the other one involves soft computing techniques, i.e. fuzzy logic, neural networks, genetic algorithms and hybrids of these.
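The simulator-as-predictor idea can be illustrated with a toy model: extrapolate the current remote state from delayed truth samples, so the operator sees an estimate of "now" rather than an image that is several steps old. The constant-velocity model below is a stand-in for the real-time surgical simulator, not the authors' system:

```python
# Hedged sketch of the "simulator as predictor" idea: the local display is
# driven by an extrapolated state so the surgeon never waits on the link,
# and each (delayed) truth sample re-anchors the extrapolation.

def predict_current_state(truth_stream, delay, t):
    """Extrapolate the state at step t from samples that are `delay` steps old."""
    older, newer = truth_stream[t - delay - 1], truth_stream[t - delay]
    velocity = newer - older          # rate estimated from delayed truth data
    return newer + velocity * delay   # dead-reckon forward across the delay

truth = [0.1 * step for step in range(100)]   # remote state, drifting linearly
estimate = predict_current_state(truth, delay=3, t=50)
print(round(estimate, 6))                      # -> 5.0, matching truth[50]
```

For linear motion the prediction is exact; the research problem the abstract describes is making such prediction (and its periodic correction with truth data) work for the complex, nonlinear dynamics of tissue and instruments.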
Ditchburn, Jae-Llane; Marshall, Alison
2017-09-01
The Lancashire Teaching Hospitals NHS Trust in the UK has been providing renal care through video-as-a-service (VAAS) to patients since 2013, with support from the North West NHS Shared Infrastructure Service, a collaborative team that supports information and communication technology use in the UK National Health Service. Renal telemedicine offered remotely to patients on home dialysis supports renal care through the provision of a live high-quality video link directly to unsupported patients undergoing haemodialysis at home. Home haemodialysis is known to provide benefits to patients, particularly in making them more independent. The use of a telemedicine video-link in Lancashire and South Cumbria, UK, further reduces patient dependence on the professional team. The purpose of this paper is to present the perspectives of the renal care team members using the renal telemedicine service to understand the perceived benefits and issues with the service. Ten semi-structured interviews with members of the renal care team (two renal specialists, one matron, two renal nurses, one business manager, one renal technical services manager, two IT technicians and one hardware maintenance technician) were conducted. Thematic analysis was undertaken to analyse the qualitative data. A range of incremental benefits to the renal team members were reported, including more efficient use of staff time, reduced travel, peace of mind and a strong sense of job satisfaction. Healthcare staff believed that remote renal care through video was useful, encouraged concordance and could nurture confidence in patients. Key technological issues and adjustments which would improve the renal telemedicine service were also identified. The impact of renal telemedicine was positive on the renal team members. The use of telemedicine has been demonstrated to make home dialysis delivery more efficient and safe. The learning from staff feedback could inform development of services elsewhere. 
© 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Geometric database maintenance using CCTV cameras and overlay graphics
NASA Astrophysics Data System (ADS)
Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin
1988-01-01
An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
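Superimposing a wireframe on a CCTV image amounts to projecting the object's known 3D points through a camera model into pixel coordinates, then drawing lines between them. A minimal pinhole-projection sketch, with an assumed focal length and object pose (not GE's actual system):

```python
# Sketch of overlaying a wireframe on a camera image: project an object's
# known 3D corner points through an assumed pinhole model to get the 2D
# pixel positions where the wireframe lines should be drawn.

import numpy as np

def project(points_3d, focal_px, center):
    """Pinhole projection: (X, Y, Z) camera coordinates -> (u, v) pixels."""
    pts = np.asarray(points_3d, dtype=float)
    u = focal_px * pts[:, 0] / pts[:, 2] + center[0]
    v = focal_px * pts[:, 1] / pts[:, 2] + center[1]
    return np.stack([u, v], axis=1)

# unit cube 4 m in front of the camera, 512x512 image, 800 px focal length
corners = [(x, y, 4.0 + z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
pixels = project(corners, focal_px=800.0, center=(256.0, 256.0))
print(pixels[0])   # corner (0, 0, 4) lands at the image center
```

In the testbed described above, the operator effectively solves the inverse problem: the object's dimensions are known, so adjusting its pose until the projected wireframe matches the live video fixes its location for the geometric database.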
Nekton Interaction Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-03-15
The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).
Integrated Launch Operations Applications Remote Display Developer
NASA Technical Reports Server (NTRS)
Flemming, Cedric M., II
2014-01-01
This internship provides the opportunity to support the creation and use of Firing Room Displays and Firing Room Applications that use an abstraction layer called the Application Control Language (ACL). Required training included video watching, reading assignments, face-to-face instruction and job shadowing other Firing Room software developers as they completed their daily duties. During the training period various computer and access rights needed for creating the applications were obtained. The specific ground subsystems supported are the Cryogenics Subsystems, Liquid Hydrogen (LH2) and Liquid Oxygen (LO2). The cryogenics team is given the task of finding the best way to handle these very volatile liquids that are used to fuel the Space Launch System (SLS) and the Orion flight vehicles safely.
[Self-made "electric chair" used for the sexually motivated abuse of children].
Rothschild, Markus A; Vendura, Klaus; Kell, Gerald
2007-01-01
A 52-year-old man had altered a wooden folding chair by placing two electrodes and a circuit underneath the seat. Using a remote control, he was able to give electric shocks to a person sitting on the chair. He used this device on more than 50 children, video-taping their reactions for his own pleasure. There are no reports that any of the children suffered lasting damage to their health. The construction as well as the function and the electrical parameters of the chair were examined by forensic specialists. According to their expertise, the construction was not able to cause a potentially life-threatening condition when used with healthy children. The perpetrator was convicted of bodily harm, among other charges.
Controlling QoS in a collaborative multimedia environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfano, M.; Sigle, R.
1996-12-31
A collaborative multimedia environment allows users to work remotely on common projects by sharing applications (e.g., CAD tools, text editors, white boards) and simultaneously communicate audiovisually. Several dedicated applications (e.g., MBone tools) exist for transmitting video, audio and data between users. Because they have been developed for the Internet, which does not provide any Quality of Service (QoS) guarantee, these applications do not or only partially support specification of QoS requirements by the user. In addition, they all come with different user interfaces. In this paper we first discuss the problems that we experienced both at the host and network levels when executing a multimedia application and varying its resource requirements. We then present the architectural details of a collaborative multimedia environment (CME) that we have been developing in order to help a user to set up and control a collaborative multimedia session.
Robots Save Soldiers' Lives Overseas (MarcBot)
NASA Technical Reports Server (NTRS)
2009-01-01
Marshall Space Flight Center's mobile communications platform designs for future lunar missions led to improvements to fleets of tactical robots now being deployed by the U.S. Army. The Multi-function Agile Remote Control Robot (MARCbot) helps soldiers search out and identify improvised explosive devices. NASA used the MARCbots to test its mobile communications platform and, in working with them, made the robot faster while adding capabilities: upgrading to a digital camera, encrypting the controllers and video transmission, and increasing the range and adding communications abilities. They also simplified the design, providing more plug-and-play sensors and replacing some of the complex electronics with more trouble-free, low-cost components. Applied Geo Technology, a tribally owned corporation in Choctaw, Mississippi, was given the task of manufacturing the modified robots. The company is now producing 40 units per month, 300 of which have already been deployed overseas.
Yoshino, A; Shigemura, J; Kobayashi, Y; Nomura, S; Shishikura, K; Den, R; Wakisaka, H; Kamata, S; Ashida, H
2001-09-01
We assessed the reliability of remote video psychiatric interviews conducted via the internet using narrow and broad bandwidths. Televideo psychiatric interviews conducted with 42 in-patients with chronic schizophrenia using two bandwidths (narrow, 128 kilobits/s; broad, 2 megabits/s) were assessed in terms of agreement with face-to-face interviews in a test-retest fashion. As a control, agreement was assessed between face-to-face interviews. Psychiatric symptoms were rated using the Oxford version of the Brief Psychiatric Rating Scale (BPRS), and agreement between interviews was estimated as the intraclass correlation coefficient (ICC). The ICC was significantly lower in the narrow bandwidth than in the broad bandwidth and the control for both positive symptoms score and total score. While reliability of televideo psychiatric interviews is insufficient using the present narrow-band internet infrastructure, the next generation of infrastructure (broad-band) may permit reliable diagnostic interviews.
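The intraclass correlation coefficient used here measures test-retest agreement between two interviews of the same patients. A one-way random-effects ICC(1,1) sketch on synthetic BPRS-like totals (the data are invented, not the study's; the formula is the standard one-way ICC with k = 2 ratings per subject):

```python
# Illustrative ICC(1,1) computation for test-retest agreement between two
# interviews of the same subjects (k = 2 ratings per subject).
# The paired scores below are synthetic example data.

def icc_oneway(ratings):
    """ratings: list of (score_1, score_2) pairs, one per subject."""
    n, k = len(ratings), 2
    grand = sum(sum(r) for r in ratings) / (n * k)
    subject_means = [sum(r) / k for r in ratings]
    ms_between = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    ms_within = sum((x - m) ** 2 for r, m in zip(ratings, subject_means)
                    for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

pairs = [(30, 32), (45, 44), (28, 27), (50, 53), (36, 35), (41, 40)]
print(round(icc_oneway(pairs), 3))   # -> 0.982
```

Small within-subject differences relative to the spread between subjects give an ICC near 1; a noisy narrow-band video link inflates the within-subject term and pulls the ICC down, which is the effect the study measured.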
Introductory Physics Experiments Using the Wiimote
NASA Astrophysics Data System (ADS)
Somers, William; Rooney, Frank; Ochoa, Romulo
2009-03-01
The Wii, a video game console, is a very popular device, with millions of units sold worldwide over the past two years. Although computationally it is not a powerful machine, to a physics educator its most important components can be its controllers. The Wiimote (or remote) controller contains three accelerometers, an infrared detector, and Bluetooth connectivity at a relatively low price. Thanks to available open source code, any PC with Bluetooth capability can detect the information sent out by the Wiimote. We have designed several experiments for introductory physics courses that make use of the accelerometers and Bluetooth connectivity. We have adapted the Wiimote to measure the variable acceleration in simple harmonic motion, the centripetal and tangential accelerations in circular motion, and the accelerations generated when students lift weights. We present the results of our experiments and compare them with those obtained when using motion and/or force sensors.
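For the simple-harmonic-motion experiment, the accelerometer trace can be checked against theory: for displacement x(t) = A sin(wt), the acceleration is a(t) = -A w^2 sin(wt), peaking at the turning points. A quick sketch with illustrative amplitude and frequency values for a mass on a spring:

```python
# Expected accelerometer reading for simple harmonic motion: a = -w^2 x.
# Amplitude and frequency are illustrative values, not measured data.

import math

A = 0.15                 # amplitude, m
f = 1.2                  # oscillation frequency, Hz
w = 2 * math.pi * f      # angular frequency, rad/s

def accel(t):
    return -A * w**2 * math.sin(w * t)   # acceleration at time t, m/s^2

peak = A * w**2                          # maximum |a|, at the turning points
print(round(peak, 2))                    # -> 8.53 m/s^2
```

Comparing a curve like this against the Wiimote's logged accelerometer samples is exactly the kind of theory-versus-sensor comparison the experiments above are built around.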
[Present and prospects of telepathology].
Takahashi, M; Mernyei, M; Shibuya, C; Toshima, S
1999-01-01
Nearly ten years have passed since telepathology was introduced and real-time pathology consultations were conducted. Long-distance consultations in pathology, cytology, computed tomography and magnetic resonance imaging, which are referred to as telemedicine, clearly enhance the level of medical care in remote hospitals where no full-time specialists are employed. To transmit intraoperative frozen section images, we developed a unique hybrid system, "Hi-SPEED". The imaging view through the CCD camera is controlled by a camera controller that provides NTSC composite video output for low-resolution motion pictures and high-resolution digital output for final interpretation on a computer display. The results of intraoperative frozen section diagnosis between Gihoku General Hospital, 410 km from SRL, showed a sensitivity of 97.6% for 82 cases of breast carcinoma and a false positive rate of 1.2%. This system can be used for second opinions as well as for consultations between cytologists and cytotechnologists.
Compact Microscope Imaging System with Intelligent Controls
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability: tasks that, heretofore, have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: The translation stage is used to move the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
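The FFT-based focus criterion described above can be sketched directly: score each candidate focus position by how much spectral energy sits at high spatial frequencies, and pick the position with the highest score. The images below are synthetic (a random texture and a smoothed copy standing in for in-focus and out-of-focus frames); the cutoff radius is an illustrative assumption:

```python
# Sketch of the FFT autofocus criterion: an in-focus image has more energy
# at high spatial frequencies than a defocused (blurred) one.

import numpy as np

def high_frequency_score(image, cutoff=0.25):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))   # normalized frequency
    return spectrum[radius > cutoff].sum() / spectrum.sum()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                        # detail-rich "in focus" frame
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, -1, 0) + np.roll(sharp, -1, 1)) / 5  # low-passed copy
print(high_frequency_score(sharp) > high_frequency_score(blurred))  # True
```

In the CMIS loop, this score would be evaluated at each coarse, medium, and fine step of the translation stage, and the stage position maximizing it taken as the focal position.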
Schittek Janda, M; Tani Botticelli, A; Mattheos, N; Nebel, D; Wagner, A; Nattestad, A; Attström, R
2005-05-01
Video-based instructions for clinical procedures have been used frequently during the preceding decades. The aim was to investigate, in a randomised controlled trial, the learning effectiveness of fragmented videos vs. the complete sequential video, and to analyse the attitudes of users towards video as a learning aid. An instructional video on surgical hand wash was produced. The video was available in two different forms on two separate web pages: one as a sequential video and one fragmented into eight short clips. Twenty-eight dental students in the second semester were randomised into an experimental (n = 15) and a control group (n = 13). The experimental group used the fragmented form of the video and the control group watched the complete one. The use of the videos was logged, and the students were videotaped whilst undertaking a test hand wash. The videos were analysed systematically and blindly by two independent clinicians. The students also performed a written test concerning learning outcome from the videos and answered an attitude questionnaire. The students in the experimental group watched the video significantly longer than the control group. There were no significant differences between the groups with regard to the ratings and scores when performing the hand wash. The experimental group had significantly better results in the written test compared with those of the control group. There was no significant difference between the groups with regard to attitudes towards the use of video for learning, as measured by the Visual Analogue Scales. Most students in both groups expressed satisfaction with the use of video for learning. The students demonstrated positive attitudes and acceptable learning outcome from viewing CAL videos as a part of their pre-clinical training.
Videos that are part of computer-based learning settings would ideally be presented to the students both as a segmented and as a whole video to give the students the option to choose the form of video which suits the individual student's learning style.
Web-based interactive drone control using hand gesture
NASA Astrophysics Data System (ADS)
Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng
2018-01-01
This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted by WiFi communication, and all the information exchange is realized on the web. The control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global positioning system denied environment or by handlers without professional drone control knowledge, since it is easy to get started.
Diffusion in coastal and harbour zones, effects of Waves,Wind and Currents
NASA Astrophysics Data System (ADS)
Diez, M.; Redondo, J. M.
2009-04-01
As there are multiple processes at different scales that produce turbulent mixing in the ocean, thus giving a large variation of horizontal eddy diffusivities, we use a direct method to evaluate the influence of different ambient parameters such as wave height and wind on coastal dispersion. Measurements of the diffusivity are made by digital processing of images taken from video recordings of the sea surface near the coast. The use of image analysis allows estimation of both spatial and temporal characteristics of wave fields, surface circulation and mixing in the surf zone, near wave breakers and inside harbours. The study of near-shore dispersion [1], with the added complexity of the interaction between wave fields, longshore currents, turbulence and beach morphology, needs detailed measurements of simple mixing processes to compare the respective influences of forcings at different scales. The measurements include simultaneous time series of waves, currents and wind velocities from the studied area. Quantitative information from the video images is obtained using the DigImage video processing system [3] and a frame grabber. The video may be controlled by the computer, allowing remote control of the processing. Spectral analysis of the images has also been used in order to estimate dominant wave periods as well as the dispersion relations of dominant instabilities. The measurements presented here consist mostly of the comparison of diffusion coefficients measured by evaluating the spread of blobs of dye (milk) as well as by measuring the separation between different buoys released at the same time. We have used techniques developed by Bahia (1997), Diez (1998) and Bezerra (2000) [1-3] to study turbulent diffusion by means of digital processing of images taken from remote sensing and video recordings of the sea surface.
The use of image analysis makes it possible to measure variations of several decades in horizontal diffusivity values; the comparison of diffusivities between different sites is not direct, and a good understanding of the dominant mixing processes is needed. There is an increase of diffusivity with wave height, but only for large wave Reynolds numbers. Other important factors are wind speed and tidal currents. The horizontal diffusivity shows a marked anisotropy as a function of wave height and distance from the coast. The measurements were performed under a variety of weather conditions; conditional sampling has been used to identify the different influences of the environmental agents on the actual effective horizontal diffusion [4]. [1] Bahia, E. (1998) "Un estudio numerico experimental de la dispersion de contaminantes en aguas costeras", PhD thesis, UPC, Barcelona. [2] Bezerra, M.O. (2000) "Diffusion de contaminantes en la costa", PhD thesis, Univ. de Barcelona, Barcelona. [3] Diez, M. (1998) "Estudio de la Hidrodinamica de la zona de rompientes mediante el analisis digital de imagenes", Master thesis, UPC, Barcelona. [4] Artale, V., Boffetta, G., Celani, A., Cencini, M. and Vulpiani, A. (1997) "Dispersion of passive tracers in closed basins: beyond the diffusion coefficient", Physics of Fluids, vol. 9, p. 3162.
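The direct method described above reduces to tracking how fast a dye blob's variance grows: for diffusion along one horizontal axis, sigma^2(t) = 2 D t, so D is half the slope of a straight-line fit to the measured variances. A sketch on synthetic, noise-free data (the diffusivity value and times are illustrative assumptions):

```python
# Estimate a horizontal eddy diffusivity from the growth of a dye blob's
# variance over time: sigma^2(t) = 2 D t per axis, so D = slope / 2.
# The variance series below is synthetic, noise-free example data.

def estimate_diffusivity(times, variances):
    """Least-squares slope of variance vs. time, divided by 2."""
    n = len(times)
    t_mean = sum(times) / n
    v_mean = sum(variances) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(times, variances))
             / sum((t - t_mean) ** 2 for t in times))
    return slope / 2.0

true_d = 0.8                                   # m^2/s, assumed value
times = [10, 20, 30, 40, 50]                   # s after release
variances = [2 * true_d * t for t in times]    # ideal blob spread, m^2
print(estimate_diffusivity(times, variances))  # -> 0.8
```

With real video-derived blob sizes the variance series is noisy and often anisotropic, which is why the abstract fits diffusivities separately as a function of wave height and distance from the coast.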
Baldwin, Andrew C W; Mallidi, Hari R; Baldwin, John C; Sandoval, Elena; Cohn, William E; Frazier, O H; Singh, Steve K
2016-01-01
In the setting of increasingly complex medical therapies and limited physician resources, the recent emergence of 'smart' technology offers tremendous potential for improved logistics, efficiency, and communication between medical team members. In an effort to harness these capabilities, we sought to evaluate the utility of this technology in surgical practice through the employment of a wearable camera device during cardiothoracic organ recovery. A single procurement surgeon was trained to use an Explorer Edition Google Glass (Google Inc., Mountain View, CA) during the recovery process. A live video feed of each procedure was securely broadcast to allow members of the home transplant team to remotely participate in organ assessment. Primary outcomes involved demonstration of technological feasibility and validation of quality assurance through group assessment. The device was employed for the recovery of four organs: a right single lung, a left single lung, and two bilateral lung harvests. Live video of the visualization process was remotely accessed by the home transplant team and supplemented final verification of organ quality. In each case, the organs were accepted for transplant without disruption of standard procurement protocols. Media files generated during the procedures were stored on a secure drive for future documentation, evaluation, and education purposes, without preservation of patient identifiers. Live video streaming can improve quality assurance measures by allowing off-site members of the transplant team to participate in the final assessment of donor organ quality. While further studies are needed, this project suggests that the application of mobile 'smart' technology offers not just immediate value, but the potential to transform our approach to the practice of medicine.
Image-based tracking and sensor resource management for UAVs in an urban environment
NASA Astrophysics Data System (ADS)
Samant, Ashwin; Chang, K. C.
2010-04-01
Coordination and deployment of multiple unmanned air vehicles (UAVs) requires substantial human resources to carry out a successful mission. The complexity of such a surveillance mission is significantly increased in an urban environment, where targets can easily escape from a UAV's field of view (FOV) due to intervening buildings and line-of-sight obstructions. In the proposed methodology, we focus on the control and coordination of multiple UAVs, each carrying a gimbaled video sensor, for tracking multiple targets in an urban environment. We developed optimal path planning algorithms with emphasis on dynamic target prioritization and persistent target updates. The command center is responsible for target prioritization and autonomous control of multiple UAVs, enabling a single operator to monitor and control a team of UAVs from a remote location. The results are obtained using extensive 3D simulations in Google Earth using Tangent plus Lyapunov vector field guidance for target tracking.
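The Lyapunov vector field guidance mentioned above can be sketched as follows. This is the standard standoff-tracking vector field from the guidance literature, not the authors' exact implementation; the speed and standoff radius values are illustrative.

```python
import math

def lyapunov_guidance(x, y, v0=20.0, rd=100.0):
    """Desired velocity (vx, vy) for a UAV at position (x, y) relative to the
    target, converging to a circular standoff orbit of radius rd at speed v0."""
    r2 = x * x + y * y
    r = math.sqrt(r2)
    c = -v0 / (r * (r2 + rd * rd))
    vx = c * (x * (r2 - rd * rd) + 2.0 * y * r * rd)
    vy = c * (y * (r2 - rd * rd) - 2.0 * x * r * rd)
    return vx, vy

# On the standoff circle the field is purely tangential with speed v0;
# far from it, the field points inward while keeping |v| = v0.
vx, vy = lyapunov_guidance(100.0, 0.0)
print(round(vx, 6), round(vy, 6))
```

A useful property of this field is that the commanded speed is constant everywhere, which suits fixed-wing UAVs that cannot hover.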
Packet spacing: an enabling mechanism for delivering multimedia content in computational grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, A. C.; Feng, W. C.; Belford, Geneva G.
2001-01-01
Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating congestion-control mechanism, while most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980s due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counterintuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss by over 50% without adversely affecting delivered throughput. Keywords: network protocol, multimedia, packet spacing, streaming, TCP, UDP, rate-adjusting congestion control, computational grid, Access Grid.
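The core idea of inter-packet spacing is to insert a uniform gap between packet transmissions so the stream matches a target rate instead of bursting; a feedback loop can then lower that rate when the receiver reports loss. A minimal sketch of the sender-side pacing (our own simplification, not the paper's protocol) follows:

```python
import time

def paced_send(packets, rate_bps, send=lambda p: None):
    """Send packets with uniform inter-packet gaps so the stream's average
    bit rate matches rate_bps; `send` stands in for a UDP socket send."""
    for pkt in packets:
        start = time.monotonic()
        send(pkt)
        gap = len(pkt) * 8 / rate_bps          # seconds between packet starts
        remaining = gap - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)              # spacing instead of bursting

# 5 packets of 125 bytes at 100 kbit/s -> one packet every 10 ms
paced_send([b"x" * 125] * 5, 100000.0)
```

In a rate-adjusting variant, receiver loss reports would shrink `rate_bps` (widening the gaps) much as TCP's congestion control shrinks its window.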
Compression of stereoscopic video using MPEG-2
NASA Astrophysics Data System (ADS)
Puri, A.; Kollarits, Richard V.; Haskell, Barry G.
1995-10-01
Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems, which employ two views of a scene imaged under the constraints imposed by the human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo, including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 temporal scalability concepts. Two different types of prediction structures become potentially possible for compatible coding: disparity-compensated prediction, and combined disparity- and motion-compensated prediction. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming the stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported, comparing the performance of the two prediction structures with the simulcast solution. It is found that combined disparity- and motion-compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss multi-viewpoint video, a generalization of stereoscopic video.
Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as, the promise of MPEG-4 in addressing coding of multi-viewpoint video.
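Disparity-compensated prediction begins with disparity estimation between the two views. A toy block-matching search over a single image row, using the sum of absolute differences (SAD), illustrates the idea; the window size, search range, and data are illustrative only, not the codec's actual estimator.

```python
def best_disparity(left_row, right_row, x, block=3, max_d=4):
    """Return the horizontal shift d (left pixel x matches right pixel x - d)
    that minimizes the sum of absolute differences over a small window."""
    half = block // 2
    best, best_sad = 0, float("inf")
    for d in range(max_d + 1):
        if x - half - d < 0 or x + half >= len(left_row):
            continue  # window or shifted window would fall off the row
        sad = sum(abs(left_row[x + k] - right_row[x + k - d])
                  for k in range(-half, half + 1))
        if sad < best_sad:
            best, best_sad = d, sad
    return best

# A bright feature (9, 8, 7) appears 2 pixels further left in the right view:
left  = [0, 0, 0, 9, 8, 7, 0, 0]
right = [0, 9, 8, 7, 0, 0, 0, 0]
print(best_disparity(left, right, 4))  # 2
```

The predictor then uses the matched block from the other view as the prediction, coding only the residual, just as motion compensation does across time.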
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of depth perception by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of the two camera lenses.
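The reported trade-offs (larger intercamera distance and focal length improving depth rendition) follow from standard stereo triangulation; the small sketch below uses textbook geometry, not the report's exact model, to show depth from disparity, Z = f*b/d, and the resulting depth resolution, dZ ~ Z^2/(f*b) per pixel of disparity.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a stereo pair: Z = f*b/d."""
    return focal_px * baseline_m / disparity_px

def depth_resolution(focal_px, baseline_m, z_m, ddisp_px=1.0):
    """Smallest resolvable depth step at range z: dZ ~= (Z^2 / (f*b)) * dd.
    Larger baseline b or focal length f shrinks this step."""
    return z_m * z_m / (focal_px * baseline_m) * ddisp_px

print(depth_from_disparity(1000.0, 0.25, 100.0))  # 2.5 (m)
print(depth_resolution(1000.0, 0.25, 2.5))        # 0.025 (m per pixel)
```

Doubling the baseline halves the depth step at a given range, consistent with the report's conclusion that wider camera separation sharpens depth without extra distortion cost when configured properly.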
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Telearch - Integrated visual simulation environment for collaborative virtual archaeology.
NASA Astrophysics Data System (ADS)
Kurillo, Gregorij; Forte, Maurizio
Archaeologists collect vast amounts of digital data around the world; however, they lack tools for integration and for collaborative interaction to support the reconstruction and interpretation process. The TeleArch software aims to integrate different data sources and provide real-time interaction tools for remote collaboration of geographically distributed scholars inside a shared virtual environment. The framework also includes audio and 2D and 3D video streaming technology to facilitate the remote presence of users. In this paper, we present several experimental case studies to demonstrate integration and interaction with 3D models and geographical information system (GIS) data in this collaborative environment.
Video requirements for remote medical diagnosis
NASA Technical Reports Server (NTRS)
Davis, J. G.
1974-01-01
Minimal television system requirements for medical telediagnosis were studied. The experiment was conducted with the aid of a simulated telemedicine system. The first step involved making high quality videotape recordings of actual medical examinations conducted by a skilled nurse under the direction of a physician watching on closed circuit television. These recordings formed the baseline for the study. Next, these videotape recordings were electronically degraded to simulate television systems of less than broadcast quality. Finally, the baseline and degraded video recordings were shown (via a statistically randomized procedure) to a large number of physicians who attempted to reach a correct medical diagnosis and to visually recognize key physical signs for each patient. By careful scoring and analysis of the results of these viewings, the pictorial and diagnostic limitations as a function of technical video characteristics were to be defined.
Design of a Wireless Sensor Network Platform for Tele-Homecare
Chung, Yu-Fang; Liu, Chia-Hui
2013-01-01
The problem of an ageing population has become serious in the past few years, as the degeneration of various physiological functions has resulted in distinct chronic diseases in the elderly. Most elderly people are not willing to leave home for healthcare centers, but caring for patients at home consumes caregiver resources and can overwhelm patients' families. In addition, many chronic disease symptoms cause the elderly to visit hospitals frequently; repeated examinations not only exhaust medical resources but also waste patients' time and effort. To make matters worse, this healthcare model does not appear to be as effective as expected. In response to these problems, a wireless remote home care system is designed in this study, in which ZigBee is used to set up a wireless network so that users can take measurements anytime and anywhere. Using suitable measuring devices, users' physiological signals are measured, and their daily conditions are monitored by various sensors. The vital signs transferred over the ZigBee network are analyzed by computers, which deliver distinct alerts to remind the users and their families of possible emergencies. The system can be further combined with electrical appliances to remotely control the users' environmental conditions. When emergencies occur, the environmental monitoring function can be activated to transmit real-time dynamic images of the person being cared for to medical personnel through the video function. Meanwhile, in consideration of privacy, the video camera is turned on only when necessary: the caregiver can adjust the camera to a proper angle and observe the current situation when a sensor on the person or the environmental monitoring system detects an exception. All physiological data are stored in the database for family enquiries or accurate diagnoses by medical personnel. PMID:24351630
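The alert logic described, checking each sensed vital sign against limits and escalating to caregivers (optionally enabling the video channel), might look like the following sketch. The signal names and thresholds here are illustrative assumptions, not values from the paper.

```python
# Hypothetical per-signal limits; a real deployment would tune these per patient.
LIMITS = {
    "heart_rate":  (50, 110),   # beats/min
    "systolic_bp": (90, 160),   # mmHg
    "spo2":        (92, 100),   # percent
}

def classify_reading(signal, value):
    """Classify one ZigBee sensor reading against its configured limits."""
    lo, hi = LIMITS[signal]
    if value < lo or value > hi:
        return "alert"    # notify family/medical staff; may enable the camera
    return "normal"

print(classify_reading("heart_rate", 128))  # alert
print(classify_reading("spo2", 97))         # normal
```

Keeping the camera off until a reading classifies as "alert" is one simple way to encode the privacy rule the paper describes.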
Image acquisition system for traffic monitoring applications
NASA Astrophysics Data System (ADS)
Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben
1995-03-01
An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling up to 160 km/hr. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g. cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The image capture of high resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed.
The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatically classifying vehicles and recording vehicle number plates with a success rate of around 90 percent over a 24-hour period.
Playing Action Video Games Improves Visuomotor Control.
Li, Li; Chen, Rongrong; Chen, Jing
2016-08-01
Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.
Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission
NASA Astrophysics Data System (ADS)
Bieszczad, Grzegorz
2015-05-01
This article describes an external digital interface designed for a thermographic camera built at the Military University of Technology. The aim is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The elaborated image-transmission link is built using an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol; the transmission protocol engine was described in VHDL and tested in FPGA hardware. The link is able to transmit 1280x1024@60Hz 24-bit video data using one signal pair, and was tested by transmitting the thermal camera picture to a remote monitor. Constructing a dedicated video link reduces power consumption compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or the packet-based DisplayPort, while simultaneously reducing the wiring needed to a single pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, interfacing the transceiver module, and dynamic clock generation for video standard conversion.
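A packet protocol of the kind described typically frames each video line with a start marker, sequence fields, and an error check so the receiver can resynchronize and drop corrupted lines. The sketch below is in Python rather than VHDL, with an invented header layout and CRC32 standing in for whatever checksum the camera's protocol actually uses; it only demonstrates the framing/parsing round trip.

```python
import struct
import zlib

def frame_packet(line_no, payload):
    """Prepend a start marker, line number and payload length; append CRC32."""
    header = struct.pack(">HHH", 0xA5A5, line_no, len(payload))
    return header + payload + struct.pack(">I", zlib.crc32(header + payload))

def parse_packet(pkt):
    """Validate marker and CRC, then return (line_no, payload)."""
    marker, line_no, length = struct.unpack(">HHH", pkt[:6])
    assert marker == 0xA5A5, "lost synchronization"
    payload = pkt[6:6 + length]
    (crc,) = struct.unpack(">I", pkt[6 + length:])
    assert crc == zlib.crc32(pkt[:6 + length]), "corrupted packet"
    return line_no, payload

pkt = frame_packet(7, b"\xde\xad\xbe")
print(parse_packet(pkt))  # (7, b'\xde\xad\xbe')
```

In the FPGA the same roles are played by 8b/10b-style control characters for the marker and a hardware CRC generator, but the packet anatomy is analogous.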
Remote Adaptive Motor Resistance Training Exercise Apparatus and Method of Use Thereof
NASA Technical Reports Server (NTRS)
Reich, Alton (Inventor); Shaw, James (Inventor)
2017-01-01
The invention comprises a method and/or an apparatus using a computer configured exercise system equipped with an electric motor to provide physical resistance to user motion in conjunction with means for sharing exercise system related data and/or user performance data with a secondary user, such as a medical professional, a physical therapist, a trainer, a computer generated competitor, and/or a human competitor. For example, the exercise system is used with a remote trainer to enhance exercise performance, with a remote medical professional for rehabilitation, and/or with a competitor in a competition, such as in a power/weightlifting competition or in a video game. The exercise system is optionally configured with an intelligent software assistant and knowledge navigator functioning as a personal assistant application.
Assessing the impact of telestration on surgical telementoring: A randomized controlled trial.
Budrionis, Andrius; Hasvold, Per; Hartvigsen, Gunnar; Bellika, Johan Gustav
2016-01-01
Using graphical annotations in surgical telementoring promises vast improvements in both clinical and educational outcomes. However, these assumptions do not consider the potential patient safety risks resulting from this feature. Major differences in regulations regarding the implementation of telestration encourage an assessment of the utility of this feature on the outcomes of telementoring sessions. Eight students participated in a randomized controlled trial, comparing verbal with annotation-supplemented telementoring via video conferencing. A remote mentor guided the participants through four localization exercises, identifying the features in a still laparoscopic surgery scene using a laparoscopic simulator. Clinical and educational outcomes were assessed; the time consumption and quality of mentoring were determined. The study revealed no significant difference in localizing the intervention between the studied methods, while educational outcomes favoured verbal mentoring. Telestration-supplemented guidance was considerably faster and resulted in fewer miscommunications between the mentor and mentee. The initial hypothesis of major clinical and educational benefits of telestration in telementoring was not supported. A potential 33% decrease in the duration of the mentored episodes is expected due to the ability to annotate live video content. However, the impact of this time saving on the outcome of the procedure remains unclear. Regardless of the quantitative measures, most of the participants and the mentor agreed that graphical annotations provide advantages over verbal guidance. © The Author(s) 2015.
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer the algorithms to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars Yard, with the algorithms tracking rocks as waypoints to generate coordinates for calculating relative motion and visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. Since each landmark is tracked independently, however, transferring the system to appropriate parallel hardware would allow targets to be added without significantly diminishing system speed.
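Dead reckoning from the onboard encoders can be illustrated with a differential-drive odometry update. Note this is a generic textbook formulation with invented distances and track width: the RC car is actually steered, and the real system also fuses gyroscope and accelerometer data.

```python
import math

def dead_reckon(pose, d_left, d_right, track_width):
    """One odometry step from left/right wheel-encoder distances (metres).
    pose is (x, y, theta); returns the updated pose."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0                 # distance of the centre point
    dtheta = (d_right - d_left) / track_width    # heading change
    x += d * math.cos(theta + dtheta / 2.0)      # midpoint heading for the arc
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# Four equal arcs turning left (right wheel travels farther than the left):
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = dead_reckon(pose, 0.9, 1.1, 0.5)
print(pose)
```

Because encoder error accumulates without bound, the landmark tracking described above is what periodically re-anchors the estimate to the world.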
The Cam Shell: An Innovative Design With Materials and Manufacturing
NASA Technical Reports Server (NTRS)
Chung, W. Richard; Larsen, Frank M.; Kornienko, Rob
2003-01-01
Most of the personal audio and video recording devices currently sold on the open market require hands to operate; little consideration has been given to designing a hands-free unit. Such a system, once designed and made available to the public, could greatly benefit mobile police officers, bicyclists, adventurers, street and dirt motorcyclists, horseback riders and many others. With a few design changes, water sports and skiing activities could be another large area of application. The cam shell is an innovative design in which an audio and video recording device (such as a palm camcorder) is housed in a body-mounted protection system. This system is based on the concept of viewing and recording at the same time: a view cam attached to a helmet is wired to a recording unit encased in a transparent body-mounted protection system. The helmet can also be controlled by remote. The operator has full control of the recording, yet the recording unit operates completely hands-free. This project addresses the design considerations and their effects on material selection and manufacturing. It enhances understanding of the structure of materials, how that structure affects the behavior of the material, and the role that processing plays in linking structure and properties. A systematic approach to the design feasibility study, cost analysis and problem solving is also discussed.
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng
2016-01-01
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first attempt to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
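The first stage, accumulating frame differences into a motion heat map, can be sketched as follows. The threshold, frame size, and pure-Python grid representation are illustrative only; the paper's pipeline additionally accumulates segmented trajectories rather than raw differences.

```python
def motion_heat_map(frames, thresh=10):
    """Count, per pixel, how often consecutive frames differ by more than
    `thresh`; frequently-changing pixels form the 'hot' (traffic) regions."""
    h, w = len(frames[0]), len(frames[0][0])
    heat = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                if abs(cur[i][j] - prev[i][j]) > thresh:
                    heat[i][j] += 1
    return heat

# A "vehicle" (bright pixel) moving along the top row of a 3x4 frame:
f0 = [[100, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
f1 = [[0, 100, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
f2 = [[0, 0, 100, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
heat = motion_heat_map([f0, f1, f2])
print(heat[0])  # [1, 2, 1, 0]
```

Detection then only needs to run inside pixels with high counts, which is how the heat map narrows the search for few-pixel vehicles.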
Baker, Katharine S; Georgiou-Karistianis, Nellie; Lampit, Amit; Valenzuela, Michael; Gibson, Stephen J; Giummarra, Melita J
2018-04-01
Chronic pain is associated with reduced efficiency of cognitive performance, and few studies have investigated methods of remediation. We trialled a computerised cognitive training protocol to determine whether it could attenuate cognitive difficulties in a chronic pain sample. Thirty-nine adults with chronic pain (mean age = 43.3, 61.5% females) were randomised to an 8-week online course (3 sessions/week from home) of game-like cognitive training exercises, or an active control involving watching documentary videos. Participants received weekly supervision by video call. Primary outcomes were a global neurocognitive composite (tests of attention, speed, and executive function) and self-reported cognition. Secondary outcomes were pain (intensity; interference), mood symptoms (depression; anxiety), and coping with pain (catastrophising; self-efficacy). Thirty participants (15 training and 15 control) completed the trial. Mixed model intention-to-treat analyses revealed significant effects of training on the global neurocognitive composite (net effect size [ES] = 0.43, P = 0.017), driven by improved executive function performance (attention switching and working memory). The control group reported improvement in pain intensity (net ES = 0.65, P = 0.022). Both groups reported subjective improvements in cognition (ES = 0.28, P = 0.033) and catastrophising (ES = 0.55, P = 0.006). Depression, anxiety, self-efficacy, and pain interference showed no change in either group. This study provides preliminary evidence that supervised cognitive training may be a viable method for enhancing cognitive skills in persons with chronic pain, but transfer to functional and clinical outcomes remains to be demonstrated. Active control results suggest that activities perceived as relaxing or enjoyable contribute to improved perception of well-being. Weekly contact was pivotal to successful program completion.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-27
... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-770] In the Matter of Certain Video Game Systems... importation of certain video game systems and wireless controllers and components thereof by reason of... sale within the United States after importation of certain video game systems and wireless controllers...
ERIC Educational Resources Information Center
Reading, Chris; Auh, Myung-Sook; Pegg, John; Cybula, Peter
2013-01-01
The need for Australian school students to develop a strong understanding of Asian culture has been recognised in the cross-curriculum priority, "Asia and Australia's Engagement with Asia," of the Australian Curriculum. School students in rural and remote Australia have limited opportunities to engage with Asians and learn about their…
Steps to Offering Low Vision Rehabilitation Services through Clinical Video Telehealth
ERIC Educational Resources Information Center
Ihirig, Carolyn
2016-01-01
Telehealth clinical applications, which allow medical professionals to use telecommunications technologies to provide services to individuals remotely, continue to expand in areas such as low vision rehabilitation, where evaluations are provided to patients who live in rural areas. As with face-to-face low vision rehabilitation, the goal of…
Streaming Media Seminar--Effective Development and Distribution of Streaming Multimedia in Education
ERIC Educational Resources Information Center
Mainhart, Robert; Gerraughty, James; Anderson, Kristine M.
2004-01-01
Concisely defined, "streaming media" is moving video and/or audio transmitted over the Internet for immediate viewing/listening by an end user. However, at Saint Francis University's Center of Excellence for Remote and Medically Under-Served Areas (CERMUSA), streaming media is approached from a broader perspective. The working definition includes…
Framework for Processing Videos in the Presence of Spatially Varying Motion Blur
2016-02-10
Photogrammetric Engineering and Remote Sensing, vol. 71, no. 11, pp. 1285–1294, 2005. [14] Le Yu, Dengrong Zhang, and Eun-Jung Holden, “A fast and...Xiaoyang Wang, Qiang Ji, Kishore K. Reddy, Mubarak Shah, Carl Vondrick, Hamed Pirsiavash, Deva Ramanan, Jenny Yuen, Antonio Torralba, Bi Song, Anesco
Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management
ERIC Educational Resources Information Center
Osawa, Noritaka; Asai, Kikuo
2008-01-01
A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…
Fire behavior sensor package remote trigger design
Dan Jimenez; Jason Forthofer; James Reardon; Bret Butler
2007-01-01
Fire behavior characteristics (such as temperature, radiant and total heat flux, 2- and 3-dimensional velocities, and air flow) are extremely difficult to measure in situ. Although in situ sensor packages are capable of such measurements in real time, it is also essential to acquire video documentation as a means of better understanding the fire behavior data recorded by...
Code of Federal Regulations, 2012 CFR
2012-07-01
...-INSPECTION PROVISIONS § 75.1 Definitions. (a) Terms used in this part shall have the meanings set forth in 18... other commercial interest in the sexually explicit material, printing, and video duplication; (ii... forth in 18 U.S.C. 2510(15). (j) Remote computing service has the meaning set forth in 18 U.S.C. 2711(2...
Code of Federal Regulations, 2013 CFR
2013-07-01
...-INSPECTION PROVISIONS § 75.1 Definitions. (a) Terms used in this part shall have the meanings set forth in 18... other commercial interest in the sexually explicit material, printing, and video duplication; (ii... forth in 18 U.S.C. 2510(15). (j) Remote computing service has the meaning set forth in 18 U.S.C. 2711(2...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-INSPECTION PROVISIONS § 75.1 Definitions. (a) Terms used in this part shall have the meanings set forth in 18... other commercial interest in the sexually explicit material, printing, and video duplication; (ii... forth in 18 U.S.C. 2510(15). (j) Remote computing service has the meaning set forth in 18 U.S.C. 2711(2...
Code of Federal Regulations, 2014 CFR
2014-07-01
...-INSPECTION PROVISIONS § 75.1 Definitions. (a) Terms used in this part shall have the meanings set forth in 18... other commercial interest in the sexually explicit material, printing, and video duplication; (ii... forth in 18 U.S.C. 2510(15). (j) Remote computing service has the meaning set forth in 18 U.S.C. 2711(2...
Code of Federal Regulations, 2010 CFR
2010-07-01
...-INSPECTION PROVISIONS § 75.1 Definitions. (a) Terms used in this part shall have the meanings set forth in 18... other commercial interest in the sexually explicit material, printing, and video duplication; (ii... forth in 18 U.S.C. 2510(15). (j) Remote computing service has the meaning set forth in 18 U.S.C. 2711(2...
Roberts, Louise; Pérez-Domínguez, Rafael; Elliott, Michael
2016-11-15
Free-ranging individual fish were observed using a baited remote underwater video (BRUV) system during sound playback experiments. This paper reports on test trials exploring BRUV design parameters, image analysis and practical experimental designs. Three marine species were exposed to playback noise, provided as examples of behavioural responses to impulsive sound at 163–171 dB re 1 μPa (peak-to-peak SPL) and continuous sound of 142.7 dB re 1 μPa (RMS SPL), exhibiting directional changes and accelerations. The methods described here indicate the efficacy of BRUV for examining the behaviour of free-ranging species during noise playback, rather than using confinement. Given the increasing concern about the effects of water-borne noise, for example its inclusion within the EU Marine Strategy Framework Directive, and the lack of empirical evidence for setting thresholds, this paper discusses the use of BRUV, and short-term behavioural changes, in supporting population-level marine noise management. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fiber optic video monitoring system for remote CT/MR scanners clinically accepted
NASA Astrophysics Data System (ADS)
Tecotzky, Raymond H.; Bazzill, Todd M.; Eldredge, Sandra L.; Tagawa, James; Sayre, James W.
1992-07-01
With the proliferation of CT and MR scanners, radiologists must travel to distant scanners to review images before their patients can be released. We designed a fiber-optic broadband video system to transmit images from seven scanner consoles to fourteen remote monitoring stations in real time. This system has been used clinically by radiologists for over one year. We designed and conducted a user survey to categorize the levels of system use by section (Chest, GI, GU, Bone, Neuro, Peds, etc.), to measure operational utilization and acceptance of the system in the clinical environment, to clarify the system's importance as a clinical tool for saving radiologists travel time to distant CT scanners, and to assess the system's performance and limitations as a diagnostic tool. The survey was administered directly to radiologists using a printed form. The compiled survey data show a high percentage of system usage by a wide spectrum of radiologists. Clearly, this system has been accepted into the clinical environment as a highly valued diagnostic tool in terms of time savings and functional flexibility.
Video game performances are preserved in ADHD children compared with controls.
Bioulac, Stéphanie; Lallemand, Stéphanie; Fabrigoule, Colette; Thoumy, Anne-Laure; Philip, Pierre; Bouvard, Manuel Pierre
2014-08-01
Although ADHD and excessive video game playing have received some attention, few studies have explored the performances of ADHD children when playing video games. The authors hypothesized that performances of ADHD children would be as good as those of control children in motivating video game tasks but not in the Continuous Performance Test II (CPT II). The sample consisted of 26 ADHD children and 16 control children. Performances of ADHD and control children were compared on three commercially available games, on the repetition of every game, and on the CPT II. ADHD children had lower performances on the CPT II than did controls, but they exhibited performances equivalent to those of controls when playing video games at both sessions and on all three games. When playing video games, ADHD children present no difference in inhibitory performances compared with control children. This demonstrates that cognitive difficulties in ADHD are task-dependent. © 2012 SAGE Publications.
Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M
2016-01-26
Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
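The velocity-threshold classification step described above can be sketched in code. This is a minimal illustration under stated assumptions (a single fixed eye position, gaze samples already mapped into the transverse plane, and illustrative threshold values); it is not the authors' implementation, and all names are hypothetical:

```python
import numpy as np

def classify_gaze(gaze_xy, depth_z, eye_pos, dt,
                  saccade_thresh=30.0, pursuit_thresh=5.0):
    """Label gaze samples as fixation / pursuit / saccade from angular velocity.

    gaze_xy : (N, 2) gaze points in the transverse plane (metres)
    depth_z : (N,) distance from the eye to each gaze point (metres)
    eye_pos : (3,) assumed eye position in the same coordinate frame
    dt      : sampling interval (seconds)
    Thresholds are in degrees/second (illustrative values only).
    """
    # Build 3-D gaze vectors from the eye to each on-plane gaze point,
    # so depth (vergence-related) changes are reflected in the angles.
    points = np.column_stack([gaze_xy, depth_z])
    vecs = points - eye_pos
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

    # Angular velocity between successive unit gaze vectors (deg/s).
    cosang = np.clip(np.sum(vecs[:-1] * vecs[1:], axis=1), -1.0, 1.0)
    ang_vel = np.degrees(np.arccos(cosang)) / dt

    labels = np.full(len(ang_vel), "fixation", dtype=object)
    labels[ang_vel > pursuit_thresh] = "pursuit"
    labels[ang_vel > saccade_thresh] = "saccade"
    return ang_vel, labels
```

Computing angles on full 3-D gaze vectors, rather than assuming a constant-depth frontal plane, is what lets a velocity threshold remain meaningful when stimuli move in depth across the transverse plane.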
Remote-Sensing Time Series Analysis, a Vegetation Monitoring Tool
NASA Technical Reports Server (NTRS)
McKellip, Rodney; Prados, Donald; Ryan, Robert; Ross, Kenton; Spruce, Joseph; Gasser, Gerald; Greer, Randall
2008-01-01
The Time Series Product Tool (TSPT) is software, developed in MATLAB, that creates and displays high signal-to-noise Vegetation Indices imagery and other higher-level products derived from remotely sensed data. This tool enables automated, rapid, large-scale regional surveillance of crops, forests, and other vegetation. TSPT temporally processes high-revisit-rate satellite imagery produced by the Moderate Resolution Imaging Spectroradiometer (MODIS) and by other remote-sensing systems. Although MODIS imagery is acquired daily, cloudiness and other sources of noise can greatly reduce the effective temporal resolution. To improve cloud statistics, the TSPT combines MODIS data from multiple satellites (Aqua and Terra). The TSPT produces MODIS products as single time-frame and multitemporal change images, as time-series plots at a selected location, or as temporally processed image videos. The TSPT uses MODIS metadata to remove and/or correct bad and suspect data. Bad-pixel removal, multiple-satellite data fusion, and temporal processing techniques create high-quality plots and animated image video sequences that depict changes in vegetation greenness. This tool provides several temporal processing options not found in other comparable imaging software tools. Because the framework to generate and use other algorithms is established, small modifications to this tool will enable the use of a large range of remotely sensed data types. An effective remote-sensing crop monitoring system must be able to detect subtle changes in plant health in the earliest stages, before the effects of a disease outbreak or other adverse environmental conditions can become widespread and devastating.
The integration of the time series analysis tool with ground-based information, soil types, crop types, meteorological data, and crop growth models in a Geographic Information System, could provide the foundation for a large-area crop-surveillance system that could identify a variety of plant phenomena and improve monitoring capabilities.
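The core compositing idea behind tools like TSPT (metadata-based bad-pixel removal followed by temporal aggregation across acquisitions) can be sketched as follows. This is an illustrative sketch, not TSPT code: the function name, array layout, and the maximum-value compositing rule are assumptions:

```python
import numpy as np

def composite_ndvi(series, qa_good, window=3):
    """Maximum-value composite of an NDVI time series with bad-pixel removal.

    series  : (T, H, W) NDVI values from successive acquisitions
    qa_good : (T, H, W) boolean mask, True where QA metadata marks the
              pixel as cloud-free/valid
    window  : number of consecutive acquisitions per composite period
    Returns (T // window, H, W) composited imagery.
    """
    t, h, w = series.shape
    # Flagged pixels are set to -inf so they can never win the maximum.
    masked = np.where(qa_good, series, -np.inf)
    n = t // window
    comp = masked[: n * window].reshape(n, window, h, w).max(axis=1)
    # Periods with no valid observation at all become NaN.
    comp[np.isinf(comp)] = np.nan
    return comp
```

Taking the per-period maximum favours clear-sky observations (clouds depress NDVI), which is why maximum-value compositing is a common way to raise the signal-to-noise ratio of daily imagery at the cost of temporal resolution.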