An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information supporting tasks such as navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. To provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor: an infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator, side-by-side visual and infrared. However, dual images might overwhelm the operator with information and degrade robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images; they are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
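A minimal sketch of such an enhancement-oriented fusion, assuming co-registered grayscale arrays and an invented weighting parameter alpha (the abstract does not specify the system's actual fusion rule):

```python
import numpy as np

def fuse_visual_ir(visual, ir, alpha=0.3):
    """Blend a visual image with a co-registered infrared image.

    A fixed-weight blend is only an illustrative baseline, not the
    system's actual algorithm; alpha sets how much the IR band may
    enhance the visual image.
    """
    v = visual.astype(np.float32)
    r = ir.astype(np.float32)
    # Normalize each band to [0, 1] so neither dominates by dynamic range.
    v = (v - v.min()) / max(float(np.ptp(v)), 1e-6)
    r = (r - r.min()) / max(float(np.ptp(r)), 1e-6)
    fused = (1.0 - alpha) * v + alpha * r  # keep the visual band dominant
    return (255.0 * fused).astype(np.uint8)
```

Keeping alpha well below 0.5 reflects the requirement that infrared enhance, rather than obscure, the visual image.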
NASA GES DISC Level 2 Aerosol Analysis and Visualization Services
NASA Technical Reports Server (NTRS)
Wei, Jennifer; Petrenko, Maksym; Ichoku, Charles; Yang, Wenli; Johnson, James; Zhao, Peisheng; Kempler, Steve
2015-01-01
Overview of NASA GES DISC Level 2 aerosol analysis and visualization services: DQViz (Data Quality Visualization), MAPSS (Multi-sensor Aerosol Products Sampling System), and MAPSS_Explorer (Multi-sensor Aerosol Products Sampling System Explorer).
Costa, Daniel G.; Collotta, Mario; Pau, Giovanni; Duran-Faundez, Cristian
2017-01-01
The advance of technologies in several areas has allowed the development of smart city applications, which can improve the way of life in modern cities. When employing visual sensors in that scenario, still images and video streams may be retrieved from monitored areas, potentially providing valuable data for many applications. In practice, visual sensor networks may need to be highly dynamic, reflecting the changing parameters of smart cities. In this context, the characteristics of visual sensors and the conditions of the monitored environment, as well as the status of other concurrent monitoring systems, may affect how visual sensors collect, encode and transmit information. This paper proposes a fuzzy-based approach to dynamically configure the way visual sensors operate with respect to sensing, coding and transmission patterns, exploiting different types of reference parameters. This approach can serve as the basis for multi-system smart city applications based on visual monitoring, potentially bringing significant results to this research field. PMID:28067777
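The configuration idea can be illustrated with a zero-order Sugeno-style fuzzy sketch; the rule base, membership functions, and the battery/priority inputs below are invented for illustration and are not the paper's actual parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def frame_rate(battery_pct, priority):
    """Map battery level [0, 100] and event priority [0, 1] to fps."""
    lo_b, hi_b = tri(battery_pct, -1, 0, 60), tri(battery_pct, 40, 100, 101)
    lo_p, hi_p = tri(priority, -0.01, 0, 0.6), tri(priority, 0.4, 1, 1.01)
    # Rule weights (AND = min) paired with crisp output frame rates.
    rules = [(min(lo_b, lo_p), 1.0), (min(lo_b, hi_p), 10.0),
             (min(hi_b, lo_p), 5.0), (min(hi_b, hi_p), 25.0)]
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 5.0
```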
Human Mobility Monitoring in Very Low Resolution Visual Sensor Network
Bo Bo, Nyan; Deboeverie, Francis; Eldib, Mohamed; Guan, Junzhi; Xie, Xingzhe; Niño, Jorge; Van Haerenborgh, Dirk; Slembrouck, Maarten; Van de Velde, Samuel; Steendam, Heidi; Veelaert, Peter; Kleihorst, Richard; Aghajan, Hamid; Philips, Wilfried
2014-01-01
This paper proposes an automated system for monitoring mobility patterns using a network of very low resolution visual sensors (30 × 30 pixels). The use of very low resolution sensors reduces privacy concerns, cost, computation requirements and power consumption. The core of our proposed system is a robust people tracker that uses the low resolution videos provided by the visual sensor network. The distributed processing architecture of our tracking system allows all image processing tasks to be done on the digital signal controller in each visual sensor. In this paper, we show experimentally that reliable tracking of people is possible using very low resolution imagery. We also compare the performance of our tracker against a state-of-the-art tracking method and show that our method outperforms it. Moreover, mobility statistics such as total distance traveled and average speed derived from the trajectories are compared with those derived from ground truth given by Ultra-Wide Band sensors. The results of this comparison show that the trajectories from our system are accurate enough to obtain useful mobility statistics. PMID:25375754
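The mobility statistics compared against the UWB ground truth can be derived directly from a trajectory; a sketch, assuming floor coordinates in metres and an assumed (not reported) frame rate:

```python
import math

def mobility_stats(track, fps=20.0):
    """Return total distance (m) and average speed (m/s) of a track.

    track: list of (x, y) floor positions, one per frame; fps is an
    assumption for illustration, not a figure from the paper.
    """
    dist = sum(math.dist(p, q) for p, q in zip(track, track[1:]))
    duration = (len(track) - 1) / fps
    return dist, (dist / duration if duration > 0 else 0.0)
```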
Design of smart home sensor visualizations for older adults.
Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George
2014-01-01
Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience in a way that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded in participatory design, drawing on older adult interviews conducted during a six-month pilot sensor study. Through a secondary analysis of the interviews, we identified the visualization needs of older adults. We combined these needs with cognitive-perceptual visualization guidelines and Norman's emotional design principles to develop sensor visualizations. We present a sensor visualization design that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to inspect activity on a specific day. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but are also a valuable resource to promote engagement in care.
Simulation and animation of sensor-driven robots.
Chen, C; Trivedi, M M; Bidlack, C R
1994-10-01
Most simulation and animation systems utilized in robotics are concerned with simulating the robot and its environment without simulating its sensors. These systems have difficulty handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system helps users visualize the motion and reaction of a sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.
Behavior analysis for elderly care using a network of low-resolution visual sensors
NASA Astrophysics Data System (ADS)
Eldib, Mohamed; Deboeverie, Francis; Philips, Wilfried; Aghajan, Hamid
2016-07-01
Recent advancements in visual sensor technologies have made behavior analysis practical for in-home monitoring systems. Current in-home monitoring systems face several challenges: (1) visual sensor calibration is a difficult task and impractical in real life because recalibration is needed whenever the visual sensors are moved accidentally by a caregiver or the senior citizen, (2) privacy concerns, and (3) high hardware installation cost. We propose to use a network of cheap low-resolution visual sensors (30×30 pixels) for long-term behavior analysis. The behavior analysis starts with visual feature selection based on foreground/background detection to track the motion level at each visual sensor. A hidden Markov model (HMM) is then used to estimate the user's locations without calibration. Finally, an activity discovery approach is proposed using spatial and temporal contexts. We performed experiments on 10 months of real-life data and show that the HMM approach outperforms a k-nearest neighbor classifier against 30 days of ground truth. Our framework is able to discover 13 activities of daily living (ADLs). More specifically, we analyze mobility patterns and some of the key ADL parameters to detect improving or declining health conditions.
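One standard way to realize calibration-free location estimation with an HMM is Viterbi decoding over a discrete set of rooms; the sketch below assumes per-frame log-likelihoods derived from the sensors' motion levels and is a generic decoder, not the paper's exact model:

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_prior):
    """Most likely room sequence for a hidden Markov model.

    obs_loglik: (T, K) array, log P(motion features at t | room k);
    log_trans:  (K, K) log transition probabilities between rooms;
    log_prior:  (K,) log initial room distribution.
    """
    T, K = obs_loglik.shape
    delta = log_prior + obs_loglik[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # (from_room, to_room)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_loglik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```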
Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite
2016-09-01
aerial platform for subsequent visual sensor integration. Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD), integer linear program (ILP), inertial-navigation system (INS), Global-Positioning System (GPS), ground control station.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
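The integration of gyroscope and visual-target information for position reckoning can be sketched as a discrete-time dead-reckoning step with a complementary correction; the paper implements this with hybrid image-processing circuits, so the gain k and the interfaces below are assumptions:

```python
import math

def dead_reckon(pose, speed, gyro_rate, dt, bearing_fix=None, k=0.1):
    """One update of (x, y, heading) for a floor-running robot.

    speed: forward speed (m/s); gyro_rate: yaw rate (rad/s);
    bearing_fix: absolute heading (rad) from the ceiling target, if seen.
    """
    x, y, th = pose
    th += gyro_rate * dt                      # integrate the gyroscope
    if bearing_fix is not None:               # blend in the visual fix
        err = math.atan2(math.sin(bearing_fix - th),
                         math.cos(bearing_fix - th))
        th += k * err
    x += speed * dt * math.cos(th)
    y += speed * dt * math.sin(th)
    return (x, y, th)
```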
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando
2009-01-01
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Visualization of Heart Sounds and Motion Using Multichannel Sensor
NASA Astrophysics Data System (ADS)
Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko
2010-06-01
As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist both doctors and patients in understanding the heartbeat. Auscultatory sounds were first visualized using FFT and wavelet analysis. Next, to show global and simultaneous heart motion, a new visualization technique was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch, adhered to the chest surface. One cycle of heart motion was visualized at a sampling frequency of 3 kHz and a quantization of 12 bits. The visualized results showed the typical waveform motion of the strong pressure shock due to the closing of the tricuspid and mitral valves at the cardiac apex (first sound), followed by the closing of the aortic and pulmonic valves (second sound). To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to digital database management of auscultation examinations in medicine.
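The FFT-based visualization step amounts to a short-time Fourier transform of one channel; the 3 kHz sampling rate matches the paper, while the window and hop sizes below are assumptions:

```python
import numpy as np

def spectrogram(x, fs=3000, win=256, hop=128):
    """Magnitude STFT of one heart-sound channel.

    Returns frame times (s), frequencies (Hz), and an
    (n_frames, win // 2 + 1) magnitude array.
    """
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    t = np.arange(len(frames)) * hop / fs
    f = np.fft.rfftfreq(win, d=1.0 / fs)
    return t, f, spec
```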
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen at many nodes of a network simultaneously, but the number of control personnel is always limited, and the attention of human operators may simply be drawn to particular network nodes while a more dangerous threat goes unnoticed at other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. Human vision achieves this with a rough but wide peripheral system that tracks motion and regions of interest; a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest; and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired network-symbolic models convert image information into an 'understandable' network-symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.
A 3D particle visualization system for temperature management
NASA Astrophysics Data System (ADS)
Lange, B.; Rodriguez, N.; Puech, W.; Rey, H.; Vasques, X.
2011-01-01
This paper deals with a 3D visualization technique proposed to analyze and manage the energy efficiency of a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure different quantities such as hygrometry, pressure and temperature. We want to visualize, in real time, the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. In order to solve performance problems, a level-of-detail (LOD) solution has been developed. These methods are based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work and subsequently explain the different simplification methods applied to improve our solution.
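A minimal sketch of a Clark-style level-of-detail policy for such a particle system, with invented distance thresholds and particle budgets:

```python
def particle_budget(distance, near=1.0, far=50.0, n_max=10000, n_min=100):
    """Spend fewer particles on sensor clusters far from the viewer.

    The linear falloff and all constants are illustrative only.
    """
    if distance <= near:
        return n_max
    if distance >= far:
        return n_min
    t = (distance - near) / (far - near)   # 0 at `near`, 1 at `far`
    return int(n_max + t * (n_min - n_max))
```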
NASA Astrophysics Data System (ADS)
McGuire, M. P.; Welty, C.; Gangopadhyay, A.; Karabatis, G.; Chen, Z.
2006-05-01
The urban environment is formed by complex interactions between natural and human-dominated systems, the study of which requires the collection and analysis of very large datasets that span many disciplines. Recent advances in sensor technology and automated data collection have improved the ability to monitor urban environmental systems and are making the idea of an urban environmental observatory a reality. This in turn has created a number of challenges in data management and analysis. We present the design of an end-to-end system to store, analyze, and visualize data from a prototype urban environmental observatory based at the Baltimore Ecosystem Study, a National Science Foundation Long Term Ecological Research site (BES LTER). We first present an object-relational design for an operational database that stores high-resolution spatial datasets, data from sensor networks, archived data from the BES LTER, data from external sources such as USGS NWIS and EPA STORET, and metadata. The second component of the system is a spatiotemporal data warehouse, consisting of a data staging plan and a multidimensional data model designed for the spatiotemporal analysis of monitoring data. On top of the data warehouse sit applications for multi-resolution exploratory data analysis, multi-resolution data mining, and spatiotemporal visualization. The design also includes interfaces to water quality models such as HSPF, SWMM, and SWAT, along with applications for real-time sensor network visualization, data discovery, data download, QA/QC, and backup and recovery, all based on the operational database, with both internet and workstation-based interfaces. Finally, we present the design of a laboratory for spatiotemporal analysis and visualization and for real-time monitoring of the sensor network.
Updates to SCORPION persistent surveillance system with universal gateway
NASA Astrophysics Data System (ADS)
Coster, Michael; Chambers, Jon; Winters, Michael; Brunck, Al
2008-10-01
This paper addresses benefits derived from the universal gateway utilized in Northrop Grumman Systems Corporation's (NGSC) SCORPION, a persistent surveillance and target recognition system produced by the Xetron campus in Cincinnati, Ohio. SCORPION is currently deployed in Operations Iraqi Freedom (OIF) and Enduring Freedom (OEF). The SCORPION universal gateway is a flexible, field programmable system that provides integration of over forty Unattended Ground Sensor (UGS) types from a variety of manufacturers, multiple visible and thermal electro-optical (EO) imagers, and numerous long haul satellite and terrestrial communications links, including the Army Research Lab (ARL) Blue Radio. Xetron has been integrating best in class sensors with this universal gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control since 1998. In addition to being fed to COP systems, SCORPION data can be visualized in the Common sensor Status (CStat) graphical user interface that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This user friendly visualization enables a large amount of sensor data and imagery to be used as actionable intelligence by a minimum number of analysts.
Bock, Christian; Demiris, George; Choi, Yong; Le, Thai; Thompson, Hilaire J; Samuel, Arjmand; Huang, Danny
2016-03-11
The use of smart home sensor systems is growing, primarily due to the appeal of unobtrusively monitoring older adult health and wellness. However, integrating large-scale sensor systems within residential settings can be challenging when deployment takes place across multiple environments, requiring customization of applications, connection across various devices, and effective visualization of complex longitudinal data. The objective of the study was to demonstrate the implementation of a smart home system using an open, extensible platform in a real-world setting and to develop an application to visualize the data in real time. We deployed the open source Lab of Things platform in a house of 11 residents as a demonstration of feasibility over the course of 3 months. The system consisted of Aeon Labs Z-Wave door/window sensors and an Aeon Labs multi-sensor that collected data on motion, temperature, luminosity, and humidity. We applied a Rapid Iterative Testing and Evaluation approach to designing a visualization interface, engaging gerontological experts. We then conducted a survey with 19 older adult and caregiver stakeholders to inform further design revisions. Our initial visualization mockups consisted of a bar chart representing activity level over time. Family members felt comfortable using the application. Older adults, however, indicated it would be difficult to learn to use the application and had trouble identifying its utility. A key requirement for older adults was ensuring that the collected data could be utilized by their family members, physicians, or caregivers. The approach described in this work generalizes to future smart home deployments and can be a valuable guide for researchers scaling a study across multiple homes and connected devices and creating personalized interfaces for end users.
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
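The lag-lead description of the compensatory eye response can be written in the generic transfer-function form below; the gain and time constants are placeholders, since the abstract does not report them:

```latex
G(s) = K \, \frac{T_1 s + 1}{T_2 s + 1}
```

with lead behavior for $T_1 > T_2$ and lag behavior for $T_1 < T_2$.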
Visualization Component of Vehicle Health Decision Support System
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Turmon, Michael; Stough, Timothy; Siegel, Herbert; Walter, Patrick; Kurt, Cindy
2008-01-01
The visualization front-end of this Decision Support System (DSS) is coupled to an analysis engine linked to vehicle telemetry and to a database of learned models for known behaviors. Because the display is graphical rather than text-based, the summarization it provides has a greater information density on one screen for evaluation by a flight controller. This tool provides a system-level visualization of the state of a vehicle, with drill-down capability for more details and interfaces to separate analysis algorithms and sensor data streams. The system-level view is a 3D rendering of the vehicle, with sensors represented as icons tied to appropriate positions within the vehicle body and colored to indicate sensor state (e.g., normal, warning, or anomalous). The sensor data are received via an Information Sharing Protocol (ISP) client that connects to an external server for real-time telemetry. Users can interactively pan, zoom, and rotate this 3D view, as well as select sensors for a detail plot of the associated time series data. Subsets of the plotted data can be selected and sent to an external analysis engine, either to search for a similar time series in a historical database or to detect anomalous events. The system overview and plotting capabilities are completely general in that they can be applied to any vehicle instrumented with a collection of sensors. This visualization component can interface with the ISP data streams used by NASA's Mission Control Center at Johnson Space Center. In addition, it can connect to, and display results from, separate analysis engine components that identify anomalies or search for past instances of similar behavior. This software supports NASA's Software, Intelligent Systems, and Modeling element in the Exploration Systems Research and Technology Program by augmenting the capability of human flight controllers to make correct decisions, thus increasing safety and reliability. It was designed specifically as a tool for NASA's flight controllers to monitor the International Space Station and a future Crew Exploration Vehicle.
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
33 CFR 127.201 - Sensing and alarm systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
... systems. (a) Fixed sensors must have audio and visual alarms in the control room and audio alarms nearby. (b) Fixed sensors that continuously monitor for LNG vapors must— (1) Be in each enclosed area where vapor or gas may accumulate; and (2) Meet Section 9-4 of NFPA 59A. (c) Fixed sensors that continuously...
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. We develop a visual sensor system for robotic applications that is inherently equipped with two types of sensor: an active trinocular vision and a passive stereo vision. Unlike conventional active vision systems, which use a large number of images with varying projected patterns for dense range map acquisition, or conventional passive vision systems, which work well only in specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts: one in which the passive stereo vision helps the active vision, and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique based on dynamic programming, in which image regions between laser patterns are matched pixel by pixel with the help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms work in real applications, the sensor system is implemented on a robotic system and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environment. The performance of the sensor system is discussed in detail.
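The pixel-by-pixel matching between laser anchor points can be illustrated with a plain sum-of-absolute-differences disparity search along a rectified scanline; the paper's dynamic-programming formulation adds ordering and smoothness constraints that this sketch omits:

```python
import numpy as np

def disparity_at(left, right, x, win=5, d_max=64):
    """SAD disparity for pixel x on a rectified scanline pair.

    left, right: 1-D grayscale scanlines (float arrays); assumes x is
    at least `win` pixels from both borders.
    """
    lo, hi = x - win, x + win + 1
    patch = left[lo:hi]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(d_max, lo) + 1):   # keep window on the line
        cost = np.abs(patch - right[lo - d:hi - d]).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```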
An intelligent surveillance platform for large metropolitan areas with dense sensor deployment.
Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A; Smilansky, Zeev
2013-06-07
This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors, designed and developed within the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and the distribution of alarms and video streams towards the emergency teams. The resulting surveillance system is highly suitable for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.
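Reducing a video signal to a handful of motion parameters can be sketched with simple frame differencing; the descriptor below (active-pixel fraction plus motion centroid) is an invented stand-in for the HuSIMS sensors' actual parameter set:

```python
import numpy as np

def motion_parameters(prev, curr, thresh=25):
    """Compact motion descriptor from two consecutive grayscale frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    n = int(diff.sum())
    if n == 0:
        return {"active_fraction": 0.0, "centroid": None}
    ys, xs = np.nonzero(diff)
    return {"active_fraction": n / diff.size,
            "centroid": (float(xs.mean()), float(ys.mean()))}
```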
2D-Visualization of metabolic activity with planar optical chemical sensors (optodes)
NASA Astrophysics Data System (ADS)
Meier, R. J.; Liebsch, G.
2015-12-01
Microbial life plays an outstandingly important role in many hydrologic compartments, such as the benthic community in sediments or biologically active microorganisms in the capillary fringe, in ground water, or in soil. Oxygen, pH, and CO2 are key factors and indicators for microbial activity. They can be measured using optical chemical sensors, which record the changing fluorescence properties of specific indicator dyes. The signals can be measured in a non-contact mode, even through transparent walls, which is important for many lab experiments. These sensors can measure in closed (transparent) systems without sampling or intruding into the sample; they do not consume the analytes while measuring, are fully reversible, and are able to measure in non-stirred solutions. They can be applied as high-precision fiber-optic sensors (for profiling), as robust sensor spots, or as planar sensors for 2D visualization (imaging). Imaging enables detecting thousands of measurement spots at the same time and generating 2D analyte maps over a region of interest. It allows for comparing different regions within one recorded image, visualizing spatial analyte gradients, or, more importantly, identifying hot spots of metabolic activity. We present ready-to-use portable imaging systems for the analytes oxygen, pH, and CO2. They consist of a detector unit, planar sensor foils, and software for easy data recording and evaluation. Sensor foils for various analytes and measurement ranges enable visualizing metabolic activity or analyte changes in the desired range. Dynamics of metabolic activity can be detected in one shot or over long time periods. We demonstrate the potential of this analytical technique by presenting experiments on benthic disturbance-recovery dynamics in sediments and on the microbial degradation of organic material in the capillary fringe. We consider this technique a new tool for further understanding how microbial and geochemical processes are linked in (not solely) hydrologic systems.
Neuromorphic audio-visual sensor fusion on a sound-localizing robot.
Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André
2012-01-01
This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
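An ITD-based localizer can be sketched with a cross-correlation delay estimate and a simple head model; the parameters below are generic assumptions, not the silicon-cochlea pipeline or the learned mapping from the paper:

```python
import numpy as np

def itd_azimuth(left, right, fs=48000, head_width=0.18, c=343.0):
    """Estimate source azimuth (deg) from two microphone signals."""
    corr = np.correlate(left, right, mode="full")
    lag = int(corr.argmax()) - (len(right) - 1)   # delay in samples
    itd = lag / fs                                # interaural time diff (s)
    s = np.clip(itd * c / head_width, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```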
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor
Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.
2015-01-01
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
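The stop/left/right guidance decision can be sketched from the fused corner-and-depth measurements; the thresholds and the input format are assumptions for illustration:

```python
def steer(corner_depths, safe_m=1.5):
    """Suggest a direction from detected corners tagged with depths.

    corner_depths: list of (x_norm, depth_m) pairs, x_norm in [0, 1]
    from image left to right.
    """
    left = [d for x, d in corner_depths if x < 0.5]
    right = [d for x, d in corner_depths if x >= 0.5]
    left_clear = min(left, default=float("inf")) > safe_m
    right_clear = min(right, default=float("inf")) > safe_m
    if left_clear and right_clear:
        return "forward"
    if left_clear:
        return "move left"
    if right_clear:
        return "move right"
    return "stop"
```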
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the spatio-temporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back-projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back-projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
NASA Technical Reports Server (NTRS)
Hasell, P. G., Jr.
1974-01-01
The development and characteristics of a multispectral band scanner for an airborne mapping system are discussed. The sensor operates in the ultraviolet, visible, and infrared bands. Any twelve of the bands may be selected for simultaneous, optically registered recording on a 14-track analog tape recorder. Multispectral imagery recorded on magnetic tape in the aircraft can be reproduced in the laboratory on film strips for visual analysis, or optionally machine processed in analog and/or digital computers before display. The airborne system performance is analyzed.
NASA Technical Reports Server (NTRS)
Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.
1992-01-01
This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.
Chen, Zhi; Chen, Jiayun; Pan, Dong; Li, Hongwei; Yao, Yunhui; Lyu, Zu; Yang, Liting; Ma, Li-Jun
2017-03-01
A new rhodamine B-based "reactive" optical sensor (1) for Hg2+ was synthesized. Sensor 1 shows a unique colorimetric and fluorescent "turn-on" selectivity for Hg2+ over 14 other metal ions with hypersensitivity (detection limits of 27.6 nM (5.5 ppb) and 6.9 nM (1.4 ppb), respectively) in neutral buffer solution. To test its applicability in the environment, sensor 1 was applied to quantify and visualize low levels of Hg2+ in tap water and river water samples. The results indicate that sensor 1 is a highly sensitive fluorescent sensor for Hg2+, with a detection limit of 1.7 ppb in tap water and river water. Moreover, sensor 1 is a convenient visualizing sensor for low levels of Hg2+ (0.1 ppm) in the water environment (from colorless to light pink). In addition, sensor 1 shows good potential as a fluorescent visualizing sensor for Hg2+ in fetal bovine serum and in living 293T cells. The results indicate that sensor 1 shows good potential as a highly sensitive sensor for the detection of Hg2+ in environmental and biological samples.
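The reported nM and ppb detection limits are mutually consistent under a simple molar-mass conversion (M(Hg) ≈ 200.59 g/mol); a worked check:

```python
def nM_to_ppb(conc_nM, molar_mass=200.59):
    """Convert nmol/L to ppb (µg/L) in dilute aqueous solution."""
    return conc_nM * 1e-9 * molar_mass * 1e6  # mol/L -> g/L -> µg/L

print(round(nM_to_ppb(27.6), 1))  # 5.5 ppb
print(round(nM_to_ppb(6.9), 1))   # 1.4 ppb
```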
NASA Astrophysics Data System (ADS)
Brady, J. J.; Tweedie, C. E.; Escapita, I. J.
2009-12-01
There is a fundamental need to improve capacities for monitoring environmental change using remote sensing technologies. Recently, researchers have begun using Unmanned Aerial Vehicles (UAVs) to expand and improve upon remote sensing capabilities. Limitations of most non-military and relatively small-scale Unmanned Aircraft Systems (UASs) include the need for more reliable communications between the ground and the aircraft, tools to optimize flight control, real-time data processing, and a way to visually ascertain the quantity of data collected while in the air. Here we present a prototype software system that enhances communication between the ground and the vehicle, synthesizes near-real-time data acquired from on-board sensors, logs operational data during flights, and visually presents the amount and quality of data for a sampling area. This software has the capacity to greatly improve the utilization of UASs in the environmental sciences. The software system is being designed for use on a paraglider UAV carrying a suite of sensors suitable for characterizing the footprints of eddy covariance towers situated in the Chihuahuan Desert and in the Arctic. Sensors on board relay operational flight data (airspeed, ground speed, latitude, longitude, pitch, yaw, roll, acceleration, and video) as well as data from a suite of customized sensors. Additional sensors can be added to an on-board laptop or a CR1000 data logger, allowing data from these sensors to be visualized in the prototype software. This poster describes the development, use and customization of our UAS, and multimedia will be available during AGU to illustrate the system in use. (Figures: UAV on workbench in the lab; UAV in flight.)
Learning receptor positions from imperfectly known motions
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1990-01-01
An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.
High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment
NASA Astrophysics Data System (ADS)
Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.
2006-12-01
The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth science data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Coastal Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high-definition (HD) animations of SCCOOS sensor instruments (e.g., REMUS, drifters, the Spray glider, the nearshore mooring, the OCSD/USGS mooring and the CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at serving as an educational resource for informal education settings and increasing public awareness, and at aiding researchers' proposals and presentations. They are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist of using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing control of the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, ...) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. The selected visual features determine particular properties of the system's behavior with respect to stability, robustness to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as recent technical advances obtained in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
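The canonical control law behind these schemes, standard throughout the visual-servoing literature, relates camera velocity to the feature error:

```latex
v_c = -\lambda \, \widehat{L_s}^{+} \, (s - s^{*})
```

where $s$ gathers the selected visual features, $s^{*}$ their desired values, $\widehat{L_s}^{+}$ the Moore-Penrose pseudo-inverse of an approximation of the interaction matrix, and $\lambda > 0$ a gain.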
New Hypervelocity Terminal Intercept Guidance Systems for Deflecting/Disrupting Hazardous Asteroids
NASA Astrophysics Data System (ADS)
Lyzhoft, Joshua Richard
Computational modeling and simulations of visual and infrared (IR) sensors are investigated for a new hypervelocity terminal guidance system of intercepting small asteroids (50 to 150 meters in diameter). Computational software tools for signal-to-noise ratio estimation of visual and IR sensors, estimation of minimum and maximum ranges of target detection, and GPU (Graphics Processing Units)-accelerated simulations of the IR-based terminal intercept guidance systems are developed. Scaled polyhedron models of known objects, such as the Rosetta mission's Comet 67P/C-G, NASA's OSIRIS-REx Bennu, and asteroid 433 Eros, are utilized in developing a GPU-based simulation tool for the IR-based terminal intercept guidance systems. A parallelized-ray tracing algorithm for simulating realistic surface-to-surface shadowing of irregular-shaped asteroids or comets is developed. Polyhedron solid-angle approximation is also considered. Using these computational models, digital image processing is investigated to determine single or multiple impact locations to assess the technical feasibility of new planetary defense mission concepts of utilizing a Hypervelocity Asteroid Intercept Vehicle (HAIV) or a Multiple Kinetic-energy Interceptor Vehicle (MKIV). Study results indicate that the IR-based guidance system outperforms the visual-based system in asteroid detection and tracking. When using an IR sensor, predicting impact locations from filtered images resulted in less jittery spacecraft control accelerations than conducting missions with a visual sensor. Infrared sensors have also the possibility to detect asteroids at greater distances, and if properly used, can aid in terminal phase guidance for proper impact location determination for the MKIV system. Emerging new topics of the Minimum Orbit Intersection Distance (MOID) estimation and the Full-Two-Body Problem (F2BP) formulation are also investigated to assess a potential near-Earth object collision risk and the proximity gravity effects of an irregular-shaped binary-asteroid target on a standoff nuclear explosion mission.
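For point-target detection, a crude maximum-range bound follows from requiring a minimum signal-to-noise ratio under inverse-square falloff; this is a deliberate simplification of the dissertation's radiometric model, with an invented interface:

```python
import math

def max_detection_range_km(signal_at_1km, noise_floor, snr_required=5.0):
    """Largest range (km) at which SNR stays above the threshold.

    signal_at_1km: detector signal the target would produce at 1 km;
    noise_floor: total noise in the same units.  Solves
    (signal_at_1km / R**2) / noise_floor >= snr_required for R.
    """
    return math.sqrt(signal_at_1km / (noise_floor * snr_required))
```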
An Interoperable Architecture for Air Pollution Early Warning System Based on Sensor Web
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Zahmatkesh, H.; Saber, M.; Ghazi khanlou, H. J.
2013-09-01
Environmental monitoring systems deal with time-sensitive issues which require quick responses in emergency situations. Handling sensor observations in near real time and extracting valuable information from them are challenging issues in these systems from both a technical and a scientific point of view. Ever-increasing population growth in urban areas has caused problems in developing countries that have a direct or indirect impact on human life. One applicable solution for controlling and managing air quality in mega cities is to use real-time, up-to-date air quality information gathered by spatially distributed sensors, employing sensor web technology to develop monitoring and early warning systems. Urban air quality monitoring systems use the functionality of geospatial information systems as a platform for analyzing, processing, and visualizing data, in combination with the Sensor Web, to support decision support systems in disaster management and emergency situations. This system uses the Sensor Web Enablement (SWE) framework of the Open Geospatial Consortium (OGC), which offers a standard framework that allows the integration of sensors and sensor data into spatial data infrastructures. The SWE framework introduces standards for services to access sensor data and discover events from sensor data streams, as well as a set of standards for the description of sensors and the encoding of measurements. The presented system provides capabilities to collect, transfer, share, and process air quality sensor data and to disseminate air quality status in real time; interoperability challenges are overcome by using this standard framework. In a routine scenario, air quality data measured by in-situ sensors are communicated to a central station where the data are analyzed and processed. The extracted air quality status is examined to discover emergency situations, and if necessary air quality reports are sent to the authorities. This research proposes an architecture showing how to integrate air quality sensor data streams into a geospatial data infrastructure, yielding an interoperable air quality monitoring system that supports disaster management with real-time information. The developed system was tested on Tehran air pollution sensors, calculating the Air Quality Index (AQI) for the CO pollutant and notifying registered users in emergency cases by sending warning e-mails. An air quality monitoring portal is used to retrieve and visualize sensor observations through the interoperable framework. The system provides capabilities to retrieve SOS observations using WPS in a cascaded service-chaining pattern for monitoring trends in timely sensor observations.
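The AQI computation for CO follows the standard piecewise-linear breakpoint interpolation; a sketch using the US EPA 8-hour CO breakpoints (the system's exact index tables may differ):

```python
def aqi_co(c_ppm):
    """AQI for an 8-h average CO concentration in ppm (EPA breakpoints)."""
    bp = [(0.0, 4.4, 0, 50), (4.5, 9.4, 51, 100), (9.5, 12.4, 101, 150),
          (12.5, 15.4, 151, 200), (15.5, 30.4, 201, 300),
          (30.5, 40.4, 301, 400), (40.5, 50.4, 401, 500)]
    for c_lo, c_hi, i_lo, i_hi in bp:
        if c_lo <= c_ppm <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c_ppm - c_lo) + i_lo)
    return None  # concentration outside the table range
```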
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)
1996-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard (Inventor)
1994-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
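Both records describe the same synchronization idea: the target's flash controller and the tracker's sensor run on independent clocks that are re-aligned by simultaneous GPS pulses, so the camera exposures stay in phase with the flashes. A minimal numeric sketch of why that works, with hypothetical drift rates (the patents do not specify values):

```python
# Two free-running clocks with different fractional frequency errors are
# re-aligned by a shared GPS pulse once per second (drift values hypothetical).
drift_target, drift_tracker = 20e-6, -35e-6   # 20 ppm and -35 ppm
pps_interval = 1.0                            # seconds between sync pulses

def skew(t_since_pulse):
    """Relative clock offset grows linearly after each pulse, then resets."""
    return (drift_target - drift_tracker) * t_since_pulse

worst = skew(pps_interval)
print(f"max skew between pulses: {worst*1e6:.0f} us")
# ~55 us of skew is negligible against, e.g., a 10 Hz flash period (100 ms),
# so the tracker's sampling stays locked to the flashing target.
```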
Real-time 3D visualization of volumetric video motion sensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.; Stansfield, S.; Shawver, D.
1996-11-01
This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration
NASA Astrophysics Data System (ADS)
Zhao, Ming; Han, Baoling
2016-11-01
The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that integrates a binocular stereo visual sensor and a self-built 3D lidar with modified ant colony optimization path planning to reconstruct the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unreliable when feature points are few and lighting is poor. Therefore, the system combines the Bumblebee2 stereo vision sensor and the lidar sensor to detect 3D point cloud information of environmental obstacles, and sensor information fusion is used to rebuild the environment map. Obstacles are first detected separately from the lidar data and the visual data, and the two resulting obstacle distributions are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyses in depth the advantages and disadvantages of ant colony optimization and their causes, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements and integrations overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed and poor search results. The experiment processes images and drives the motors under the Matlab and Visual Studio compiling environments and establishes a visual 2.5D grid map. Finally, a global path for the mobile robot is planned according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed with ROS and a Linux simulation platform.
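To make the path-planning step concrete, here is a deliberately minimal ant colony optimization sketch on a 4-connected grid. It is illustrative only: the thesis's modified ACO uses different update rules, and the grid, gains and ant counts below are invented for the example.

```python
import random

# Minimal ACO path planning on a small occupancy grid ('#' = obstacle).
GRID = ["....#....",
        "....#....",
        "....#....",
        ".........",
        "....#...."]
H, W = len(GRID), len(GRID[0])
START, GOAL = (0, 0), (4, 8)
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0  # pheromone/heuristic weights, evaporation, deposit
pher = {(r, c): 1.0 for r in range(H) for c in range(W) if GRID[r][c] == '.'}

def neighbors(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if (r + dr, c + dc) in pher:
            yield (r + dr, c + dc)

def heuristic(cell):   # inverse Manhattan distance to the goal
    return 1.0 / (abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1]) + 1)

def walk():
    """One ant walks from START without revisits until GOAL or a dead end."""
    path, seen = [START], {START}
    while path[-1] != GOAL and len(path) < H * W:
        choices = [n for n in neighbors(path[-1]) if n not in seen]
        if not choices:
            return None
        weights = [pher[n]**ALPHA * heuristic(n)**BETA for n in choices]
        nxt = random.choices(choices, weights)[0]
        path.append(nxt); seen.add(nxt)
    return path if path[-1] == GOAL else None

best = None
for _ in range(200):
    paths = [p for p in (walk() for _ in range(10)) if p]    # 10 ants/iteration
    for cell in pher:
        pher[cell] *= (1 - RHO)                              # evaporation
    for p in paths:
        for cell in p:
            pher[cell] += Q / len(p)                         # shorter = stronger
        if best is None or len(p) < len(best):
            best = p
print("best path length:", len(best) if best else None)
```

The evaporation/deposit balance is what the improved algorithm in the thesis tunes to avoid premature convergence to local optima.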
Real-time distributed video coding for 1K-pixel visual sensor networks
NASA Astrophysics Data System (ADS)
Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian
2016-07-01
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
A navigation system for the visually impaired an intelligent white cane.
Fukasawa, A Jin; Magatani, Kazusihge
2012-01-01
In this paper, we describe a navigation system that supports the independent walking of the visually impaired in indoor spaces. The developed instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor: a color sensor installed on the tip of the white cane senses the color of the navigation line, and the system informs the visually impaired user by vibration that he/she is walking along the navigation line. This color recognition system is controlled by a one-chip microprocessor. RFID tags and a receiver for these tags are used in the map information system. RFID tags are set on the colored navigation line, and an antenna for the RFID tags and a tag receiver are also installed on the white cane. The receiver receives the area information as a tag number and reports map information to the user by mp3-formatted pre-recorded voice. We have also developed a direction identification technique that detects the user's walking direction using a triaxial acceleration sensor. Three normal subjects who were blindfolded with an eye mask were tested with our developed navigation system. All of them were able to walk along the navigation line perfectly. We think that the performance of the system is good; therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
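The control loop the abstract describes (sense the line color, vibrate while the cane is on the line) reduces to a few lines of logic. The sketch below uses hypothetical sensor/actuator stubs, since the one-chip microprocessor firmware is not published:

```python
import time

def read_rgb():              # stand-in for the tip-mounted color sensor
    return (200, 40, 40)     # e.g., a red navigation line

def set_vibration(on: bool): # stand-in for the vibration motor driver
    pass

TARGET, TOL = (200, 40, 40), 40   # line color and per-channel tolerance

def on_line(rgb):
    return all(abs(m - t) <= TOL for m, t in zip(rgb, TARGET))

for _ in range(200):                     # main loop (bounded for illustration)
    set_vibration(on_line(read_rgb()))   # vibrate while the cane is on the line
    time.sleep(0.05)                     # ~20 Hz polling
```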
Yap, Florence G H; Yen, Hong-Hsu
2014-02-20
Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data are much bigger and more complicated, so intelligent schemes are required to capture/process/transmit visual data in resource-limited (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities on topics that include visual sensor hardware, image and multimedia capture and processing, and wireless communication and networking. In this paper, we survey existing research efforts on visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early age and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.
Yap, Florence G. H.; Yen, Hong-Hsu
2014-01-01
Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data are much bigger and more complicated, so intelligent schemes are required to capture/process/transmit visual data in resource-limited (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities on topics that include visual sensor hardware, image and multimedia capture and processing, and wireless communication and networking. In this paper, we survey existing research efforts on visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early age and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs. PMID:24561401
NASA Astrophysics Data System (ADS)
Duffy, C.
2008-12-01
The future of environmental observing systems will utilize embedded sensor networks with continuous real-time measurement of hydrologic, atmospheric, biogeochemical, and ecological variables across diverse terrestrial environments. Embedded environmental sensors, benefitting from advances in information sciences, networking technology, materials science, computing capacity, and data synthesis methods, are undergoing revolutionary change. It is now possible to field spatially-distributed, multi-node sensor networks that provide density and spatial coverage previously accessible only via numerical simulation. At the same time, computational tools are advancing rapidly to the point where it is now possible to simulate the physical processes controlling individual parcels of water and solutes through the complete terrestrial water cycle. Our goal for the Penn State Critical Zone Observatory is to apply environmental sensor arrays, integrated hydrologic models, and state-of-the-art visualization deployed and coordinated at a testbed within the Penn State Experimental Forest. The Shale Hills Hydro_Sensorium prototype proposed here is designed to observe land-atmosphere interactions in four dimensions (space and time). The term Hydro_Sensorium implies the totality of physical sensors, models and visualization tools that allow us to perceive the detailed space and time complexities of the water and energy cycle for a watershed or river basin for all physical states and fluxes (groundwater, soil moisture, temperature, streamflow, latent heat, snowmelt, chemistry, isotopes, etc.). This research will ultimately catalyze the study of complex interactions between the land surface, subsurface, biological and atmospheric systems over a broad range of scales. The sensor array would be real-time and fully controllable by remote users for "computational steering" and data fusion. Fully-coupled physical models are presently being developed that link the atmosphere-land-vegetation-subsurface system into a fully-coupled distributed system. During the last 5 years the Penn State Integrated Hydrologic Modeling System (PIHM) has been under development as an open-source community modeling project funded by NSF EAR/GEO and NSF CBET/ENG. PIHM represents a strategy for the formulation and solution of fully-coupled process equations at the watershed and river basin scales, and includes a tightly coupled GIS tool for data handling, domain decomposition, optimal unstructured grid generation, and model parameterization. The sensor and simulation system has the following elements: 1) extensive, spatially-distributed, non-invasive, smart sensor networks to gather massive geologic, hydrologic, and geochemical data; 2) stochastic information fusion methods; 3) spatially-explicit multiphysics models/solutions of the land-vegetation-atmosphere system; 4) asynchronous, parallel/distributed, adaptive algorithms for rapidly simulating the states of a basin at high resolution; 5) signal processing tools for data mining and parameter estimation; and 6) visualization tools. The proposed sensor array and simulation system will offer a coherent new approach to environmental predictions with a fully integrated observing system design. We expect that the Shale Hills Hydro_Sensorium may provide the needed synthesis of information and conceptualization necessary to advance predictive understanding in complex hydrologic systems.
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the medial superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
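The computational model of cortical motion sensors referenced above is typically a correlation-type (Reichardt) or motion-energy detector. A minimal Reichardt sketch, with invented sampling and delay constants:

```python
import numpy as np

# Two photoreceptors sample a drifting grating; each half-detector multiplies
# one input with a delayed copy of the other, and the opponent stage subtracts
# the two halves. The sign of the averaged output gives motion direction.
fs, f_t = 1000, 4.0              # sample rate (Hz), temporal frequency (Hz)
t = np.arange(0, 2.0, 1/fs)
phase_lag = np.pi / 4            # spatial offset between the two receptors

def detector(direction):
    s1 = np.sin(2*np.pi*f_t*t)                         # receptor 1
    s2 = np.sin(2*np.pi*f_t*t - direction*phase_lag)   # receptor 2
    delay = int(0.02 * fs)                             # 20 ms delay line
    d1, d2 = np.roll(s1, delay), np.roll(s2, delay)
    return np.mean(d1*s2 - d2*s1)                      # opponent stage

print(detector(+1) > 0, detector(-1) < 0)   # True True: sign tracks direction
```

A single detector of this kind is direction-selective but still ambiguous about speed, which is why, as the review notes, the visual system must pool many sensors tuned to different directions and speeds.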
Studies of human dynamic space orientation using techniques of control theory
NASA Technical Reports Server (NTRS)
Young, L. R.
1974-01-01
Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.
A Novel Distributed Privacy Paradigm for Visual Sensor Networks Based on Sharing Dynamical Systems
NASA Astrophysics Data System (ADS)
Luh, William; Kundur, Deepa; Zourntos, Takis
2006-12-01
Visual sensor networks (VSNs) provide surveillance images/video which must be protected from eavesdropping and tampering en route to the base station. In the spirit of sensor networks, we propose a novel paradigm for securing privacy and confidentiality in a distributed manner. Our paradigm is based on the control of dynamical systems, which we show is well suited for VSNs due to its low complexity in terms of processing and communication, while achieving robustness to both unintentional noise and intentional attacks as long as only a small subset of nodes is affected. We also present a low complexity algorithm called TANGRAM to demonstrate the feasibility of applying our novel paradigm to VSNs. We present and discuss simulation results of TANGRAM.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
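The magnitude/phase detection step amounts to digital I/Q demodulation (a lock-in) of each pixel's intermediate-frequency tone. A per-pixel sketch with illustrative sample rates (the actual system runs at a 5 kHz frame rate with DSP hardware):

```python
import numpy as np

# One pixel carries an IF tone whose amplitude/phase encode the RF field.
fs, f_if = 100_000, 5_000          # sample rate and IF (Hz), illustrative
t = np.arange(0, 0.01, 1/fs)       # 10 ms of one pixel's samples
x = 0.8*np.cos(2*np.pi*f_if*t + 0.6) + 0.05*np.random.randn(t.size)

i = 2 * np.mean(x *  np.cos(2*np.pi*f_if*t))   # in-phase component
q = 2 * np.mean(x * -np.sin(2*np.pi*f_if*t))   # quadrature component
magnitude, phase = np.hypot(i, q), np.arctan2(q, i)
print(f"magnitude = {magnitude:.2f}, phase = {phase:.2f} rad")  # ~0.80, ~0.60
```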
Feng, Guohu; Wu, Wenqi; Wang, Jinling
2012-01-01
A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that the observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions. PMID:23012523
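The observability rank criterion based on Lie derivatives can be mechanized symbolically. The toy system below (a two-state oscillator with a scalar output, not the paper's visual/inertial/magnetic model) shows the recipe: stack the gradients of h, L_f h, ... and check whether they span the state space.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
state = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1])      # dynamics    x' = f(x)
h = sp.Matrix([x1])           # measurement y = h(x)

def lie_derivative(scalar, field, xs):
    """L_f h = (dh/dx) f."""
    return (sp.Matrix([scalar]).jacobian(xs) * field)[0, 0]

rows, L = [], h[0]
for _ in range(len(state)):               # gradients of h and L_f h (n = 2)
    rows.append(sp.Matrix([L]).jacobian(state))
    L = lie_derivative(L, f, state)

O = sp.Matrix.vstack(*rows)
print(O, "rank =", O.rank())   # rank 2 = dim(state): locally observable
```

For the paper's system, the analogous computation is what yields the stated conditions on excited rotation and observed horizontal/vertical lines.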
Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv
2008-01-01
This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback through a haptic controller (Omni Phantom) to the user. A sensor-based assistive function and velocity-scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation, equipped with a camera and a laser range finder controlled in real time by a PC, which help the user define the intended path to the selected target. Real-time force feedback from the remote robot to the haptic controller is made possible by effective multithreading programming strategies in the control system design and by novel sensor integration. The sensor-based assistant function concept, applied to teleoperation as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-defined path. The system is modularly designed to allow integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, the haptic device can be used for rehabilitation purposes in cases where assistance is needed to perform tasks (e.g., stroke rehab) and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.
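A sensor-based assistive function for trajectory following can be as simple as a restoring force toward the nearest point of the desired path. A sketch with invented gains (the paper's tuned assistance and velocity-scaling laws are not reproduced here):

```python
import numpy as np

K_ASSIST = 40.0   # N/m, stiffness pulling the hand back toward the path

def assistive_force(p, a, b):
    """Restoring force at hand position p for a straight path segment a->b."""
    ab = b - a
    s = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + s * ab                  # projection of p onto the segment
    return K_ASSIST * (closest - p)

p = np.array([0.10, 0.05, 0.0])           # hand position (m)
a, b = np.zeros(3), np.array([0.3, 0.0, 0.0])
print(assistive_force(p, a, b))           # pulls the hand back onto the line
```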
76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...
Napolitano, Rebecca; Blyth, Anna; Glisic, Branko
2018-01-16
Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to those of other potential methods for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations are also included on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences.
Napolitano, Rebecca; Blyth, Anna; Glisic, Branko
2018-01-01
Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to those of other potential methods for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations are also included on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences. PMID:29337877
Novel Visual Sensor Coverage and Deployment in Time Aware PTZ Wireless Visual Sensor Networks.
Yap, Florence G H; Yen, Hong-Hsu
2016-12-30
In this paper, we consider the visual sensor deployment algorithm in Pan-Tilt-Zoom (PTZ) Wireless Visual Sensor Networks (WVSNs). With PTZ capability, a sensor's visual coverage can be extended to reduce the number of visual sensors that need to be deployed. The coverage zone of a visual sensor in a PTZ WVSN is composed of two regions, a Direct Coverage Region (DCR) and a PTZ Coverage Region (PTZCR). In the PTZCR, a visual sensor needs a mechanical pan-tilt-zoom operation to cover an object. This mechanical operation can take seconds, so the sensor might not be able to adjust the camera in time to capture the visual data. In this paper, for the first time, we study this PTZ time-aware PTZ WVSN deployment problem. We formulate it as an optimization problem where the objective is to minimize the total visual sensor deployment cost so that each area is covered either in the DCR or in the PTZCR while considering the PTZ time constraint. The proposed Time Aware Coverage Zone (TACZ) model successfully captures PTZ visual sensor coverage in terms of camera focal range, angle span zone coverage and camera PTZ time. A novel heuristic, called the Time Aware Deployment with PTZ camera (TADPTZ) algorithm, is then proposed to solve the problem. From our computational experiments, we found that the TACZ model outperforms the existing M coverage model under all network scenarios. In addition, compared to the optimal solutions, the TACZ model is scalable and adaptable to different PTZ time requirements when deploying large PTZ WVSNs.
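The PTZ time constraint at the heart of the TACZ model can be phrased as a simple feasibility test: a point in the PTZCR is only usefully covered if the mechanical maneuver finishes before the application deadline. Rates and times below are illustrative, not the paper's parameters:

```python
PAN_RATE, TILT_RATE = 60.0, 30.0   # deg/s (illustrative)
ZOOM_TIME = 0.8                    # s, worst-case zoom adjustment (illustrative)

def ptz_covered(d_pan_deg, d_tilt_deg, needs_zoom, deadline_s):
    """True if the pan-tilt-zoom maneuver completes within the deadline."""
    t = max(abs(d_pan_deg) / PAN_RATE, abs(d_tilt_deg) / TILT_RATE)
    if needs_zoom:
        t += ZOOM_TIME
    return t <= deadline_s

print(ptz_covered(90, 10, True, 2.0))   # 1.5 s pan + 0.8 s zoom = 2.3 s: False
```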
Image-Aided Navigation Using Cooperative Binocular Stereopsis
2014-03-27
GPS Global Positioning System; IMU Inertial Measurement Unit. ...an inertial measurement unit (IMU). This technique capitalizes on an IMU's ability to capture quick motion and the ability of GPS to constrain long...the sensor-aided IMU framework. Visual sensors provide a number of benefits, such as low cost and weight. These sensors are also able to measure
Availability Issues in Wireless Visual Sensor Networks
Costa, Daniel G.; Silva, Ivanovitch; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo
2014-01-01
Wireless visual sensor networks have been considered for a large set of monitoring applications related to surveillance, tracking and multipurpose visual monitoring. When sensors are deployed over a monitored field, permanent faults may happen during the network lifetime, reducing the monitoring quality or rendering parts of, or the entire, network unavailable. In a different way from scalar sensor networks, camera-enabled sensors collect information following a directional sensing model, which changes the notions of vicinity and redundancy. Moreover, visual source nodes may have different relevancies for the applications, according to the monitoring requirements and cameras' poses. In this paper we discuss the most relevant availability issues related to wireless visual sensor networks, addressing availability evaluation and enhancement. Such discussions are valuable when designing, deploying and managing wireless visual sensor networks, bringing significant contributions to these networks. PMID:24526301
Studies to design and develop improved remote manipulator systems
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
The remote manipulator control considered is based on several levels of automatic supervision which derive manipulator commands from an analysis of sensor states and task requirements. The principal sensors are manipulator joint positions, tactile sensors, and motor currents. The tactile sensor states can be displayed visually in perspective, replicated in the operator's control handle, or perceived by the automatic supervisor. Studies are reported on control organization, operator performance and system performance measures. Unusual hardware and software details are described.
Advanced Lighting Controls for Reducing Energy use and Cost in DoD Installations
2013-05-01
OccuSwitch Wireless is a room-based lighting control system employing dimmable light sources, occupancy and daylight sensors, wireless interconnection...combination of wireless and wired control solutions for a building-wide networked system that maximizes the use of daylight while improving visual...architecture of Hybrid ILDC. Architecture: The system features wireless connectivity among sensors and actuators within a zone and exploits wired
Novel Visual Sensor Coverage and Deployment in Time Aware PTZ Wireless Visual Sensor Networks
Yap, Florence G. H.; Yen, Hong-Hsu
2016-01-01
In this paper, we consider the visual sensor deployment algorithm in Pan-Tilt-Zoom (PTZ) Wireless Visual Sensor Networks (WVSNs). With PTZ capability, a sensor’s visual coverage can be extended to reduce the number of visual sensors that need to be deployed. The coverage zone of a visual sensor in a PTZ WVSN is composed of two regions, a Direct Coverage Region (DCR) and a PTZ Coverage Region (PTZCR). In the PTZCR, a visual sensor needs a mechanical pan-tilt-zoom operation to cover an object. This mechanical operation can take seconds, so the sensor might not be able to adjust the camera in time to capture the visual data. In this paper, for the first time, we study this PTZ time-aware PTZ WVSN deployment problem. We formulate it as an optimization problem where the objective is to minimize the total visual sensor deployment cost so that each area is covered either in the DCR or in the PTZCR while considering the PTZ time constraint. The proposed Time Aware Coverage Zone (TACZ) model successfully captures PTZ visual sensor coverage in terms of camera focal range, angle span zone coverage and camera PTZ time. A novel heuristic, called the Time Aware Deployment with PTZ camera (TADPTZ) algorithm, is then proposed to solve the problem. From our computational experiments, we found that the TACZ model outperforms the existing M coverage model under all network scenarios. In addition, compared to the optimal solutions, the TACZ model is scalable and adaptable to different PTZ time requirements when deploying large PTZ WVSNs. PMID:28042829
Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision
NASA Astrophysics Data System (ADS)
Rojer, Alan S.; Schwartz, Eric L.
1991-02-01
Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we provide a review of current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our lab which demonstrates that a numerical conformal mapping, a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. We review recent work from our laboratory which has characterized some of the spatial architectures of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; and (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the ''proto-column'' algorithm. This work provides a reference-point for current engineering approaches to novel architectures for space-variant sensors.
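The complex-logarithmic mapping is easy to state concretely: cortical position w is approximately log(z + a) for retinal position z expressed as a complex number. A small sketch (the offset a below is an illustrative value, not a fitted parameter):

```python
import numpy as np

a = 0.5   # deg, foveal offset parameter (illustrative)

def retina_to_cortex(ecc_deg, angle_rad):
    """Map a retinal point (eccentricity, angle) to cortical coordinates."""
    z = ecc_deg * np.exp(1j * angle_rad)
    w = np.log(z + a)
    return w.real, w.imag

# Multiplicative steps in eccentricity map to roughly equal cortical steps:
for ecc in (0.5, 2.0, 8.0, 32.0):
    x, _ = retina_to_cortex(ecc, 0.0)
    print(f"eccentricity {ecc:5.1f} deg -> cortical x = {x:.2f}")
```

This logarithmic compression is exactly the space-variant property that motivates the sensor designs discussed in the paper.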
Study on Impact Acoustic—Visual Sensor-Based Sorting of ELV Plastic Materials
Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu
2017-01-01
This paper concentrates on a study of a novel multi-sensor aided method that uses acoustic and visual sensors for the detection, recognition and separation of End-of-Life vehicles’ (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the remaining problems results from black and dark-dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting using impact resonant acoustic emissions (AEs) and laser triangulation scanning is introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of the visual data as well as the acoustic signals were realized by virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to their respective modified materials. The scrap material recognition rate, i.e., the theoretical sorting efficiency, could reach about 50% between PP and PP-EPDM and about 75% between ABS and ABS-PC for diameters ranging from 14 mm to 23 mm; with the exclusion of abnormal impacts, the actual separation rates were 39.2% for PP and 41.4% for PP/EPDM scraps, as well as 62.4% for ABS and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS and 47% of ABS/PC scraps, were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling. PMID:28594341
Study on Impact Acoustic-Visual Sensor-Based Sorting of ELV Plastic Materials.
Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu
2017-06-08
This paper concentrates on a study of a novel multi-sensor aided method that uses acoustic and visual sensors for the detection, recognition and separation of End-of-Life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the remaining problems results from black and dark-dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting using impact resonant acoustic emissions (AEs) and laser triangulation scanning is introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of the tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of the visual data as well as the acoustic signals were realized by virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to their respective modified materials. The scrap material recognition rate, i.e., the theoretical sorting efficiency, could reach about 50% between PP and PP-EPDM and about 75% between ABS and ABS-PC for diameters ranging from 14 mm to 23 mm; with the exclusion of abnormal impacts, the actual separation rates were 39.2% for PP and 41.4% for PP/EPDM scraps, as well as 62.4% for ABS and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS and 47% of ABS/PC scraps, were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling.
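The FFT-based power spectral density feature can be illustrated with synthetic impact signals: different plastics ring at different resonant frequencies when struck, and the dominant PSD peak separates them. The frequencies and decay constants below are invented for the example, not measured material values:

```python
import numpy as np
from scipy.signal import welch

fs = 50_000
t = np.arange(0, 0.02, 1/fs)

def impact(f_res):
    """Synthetic impact acoustic emission: decaying resonance plus noise."""
    return np.exp(-t/0.004) * np.sin(2*np.pi*f_res*t) + 0.02*np.random.randn(t.size)

def dominant_freq(signal):
    f, pxx = welch(signal, fs=fs, nperseg=256)
    return f[np.argmax(pxx)]

for name, f_res in (("plastic A", 6_000), ("plastic B", 11_000)):
    print(name, f"dominant PSD peak = {dominant_freq(impact(f_res)):.0f} Hz")
```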
Liu, Bailing; Zhang, Fumin; Qu, Xinghua
2015-01-01
An improvement method for the pose accuracy of a robot manipulator using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. To exploit the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematics parameter equations, additional motion constraints or the complicated procedures of traditional vision-based methods. It makes robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability was studied experimentally. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
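The covariance-weighted fusion idea behind MOIFA-style estimators can be sketched in its simplest information-filter form, fusing two noisy estimates of the same quantity (covariances below are illustrative; the paper's algorithm handles the full pose and cross-correlations):

```python
import numpy as np

def fuse(z1, P1, z2, P2):
    """x_hat = (P1^-1 + P2^-1)^-1 (P1^-1 z1 + P2^-1 z2)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)                 # fused covariance
    return P @ (I1 @ z1 + I2 @ z2), P

z1 = np.array([0.512, 0.303])                  # estimate from sensor 1 (m)
z2 = np.array([0.520, 0.298])                  # estimate from sensor 2 (m)
P1 = np.diag([1e-4, 1e-4])                     # more accurate sensor
P2 = np.diag([4e-4, 4e-4])                     # noisier sensor
x, P = fuse(z1, P1, z2, P2)
print(x, np.diag(P))   # fused estimate leans toward the more accurate sensor
```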
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus they may encounter difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack of evaluation of the benefits of audio-visual attention mechanisms, compared to audio-only or visual-only approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared with that of unimodal systems, considering their technical limitations. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
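The Bayes inference step can be made concrete on a one-dimensional azimuth grid: independent audio and visual likelihoods are multiplied with the prior, and the posterior peak gives the fused speaker direction. All distribution parameters below are illustrative:

```python
import numpy as np

az = np.linspace(-90, 90, 181)                 # azimuth grid (deg)

def gaussian(x, mu, sigma):
    return np.exp(-0.5*((x - mu)/sigma)**2)

prior = np.ones_like(az) / az.size             # no prior preference
lik_audio = gaussian(az, mu=22.0, sigma=15.0)  # broad audio localization
lik_video = gaussian(az, mu=17.0, sigma=4.0)   # sharper visual detection

posterior = prior * lik_audio * lik_video
posterior /= posterior.sum()
print(f"fused speaker azimuth = {az[np.argmax(posterior)]:.0f} deg")
```

The fused estimate sits close to the sharper (visual) cue, while the audio cue keeps the system robust when the visual detector fails, which is the behavior the experiments evaluate.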
Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device
He, Xiang; Aloi, Daniel N.; Li, Jia
2015-01-01
Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design. PMID:26694387
Probabilistic Multi-Sensor Fusion Based Indoor Positioning System on a Mobile Device.
He, Xiang; Aloi, Daniel N; Li, Jia
2015-12-14
Nowadays, smart mobile devices include more and more sensors on board, such as motion sensors (accelerometer, gyroscope, magnetometer), wireless signal strength indicators (WiFi, Bluetooth, Zigbee), and visual sensors (LiDAR, camera). People have developed various indoor positioning techniques based on these sensors. In this paper, the probabilistic fusion of multiple sensors is investigated in a hidden Markov model (HMM) framework for mobile-device user-positioning. We propose a graph structure to store the model constructed by multiple sensors during the offline training phase, and a multimodal particle filter to seamlessly fuse the information during the online tracking phase. Based on our algorithm, we develop an indoor positioning system on the iOS platform. The experiments carried out in a typical indoor environment have shown promising results for our proposed algorithm and system design.
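The multimodal particle filter in the online phase can be sketched compactly: particles carry candidate positions, propagate with a motion model, are weighted by a measurement likelihood, and are resampled. The two-access-point RSSI model below is a stand-in for the paper's learned multi-sensor graph (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0, 10, size=(N, 2))    # 10 m x 10 m floor
APS = np.array([[2.0, 8.0], [9.0, 1.0]])       # two WiFi access points

def rssi(pos):                                 # log-distance path-loss model
    d = np.linalg.norm(pos[..., None, :] - APS, axis=-1) + 0.1
    return -40 - 20*np.log10(d)                # one RSSI per access point

true_pos = np.array([6.0, 4.0])
for _ in range(20):                            # tracking iterations
    particles += rng.normal(0, 0.2, particles.shape)   # random-walk motion
    z = rssi(true_pos) + rng.normal(0, 1.0)            # noisy measurement
    w = np.exp(-0.5*(((rssi(particles) - z)/2.0)**2).sum(axis=1))
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]       # multinomial resampling
print("estimate:", particles.mean(axis=0).round(2), "truth:", true_pos)
```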
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, Traci L.; Larche, Michael R.; Denslow, Kayte M.
The Pacific Northwest National Laboratory (PNNL) located in Richland, Washington, hosted and administered Sensor Effectiveness Testing that allowed four different participants to demonstrate the NDE volumetric inspection technologies that were previously demonstrated during the Technology Screening session. This document provides a Sensor Effectiveness Testing report for the final part of Phase I of a three-phase NDE Technology Development Program designed to identify and mature a system or set of non-visual volumetric NDE technologies for Hanford DST primary liner bottom inspection. Phase I of the program will baseline the performance of current or emerging non-visual volumetric NDE technologies for their ability to detect and characterize primary liner bottom flaws, and identify candidate technologies for adaptation and maturation for Phase II of the program.
Research Trends in Wireless Visual Sensor Networks When Exploiting Prioritization
Costa, Daniel G.; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo
2015-01-01
The development of wireless sensor networks for control and monitoring functions has created a vibrant investigation scenario, where many critical topics, such as communication efficiency and energy consumption, have been investigated in the past few years. However, when sensors are endowed with low-power cameras for visual monitoring, a new scope of challenges is raised, demanding new research efforts. In this context, the resource-constrained nature of sensor nodes has demanded the use of prioritization approaches as a practical mechanism to lower the transmission burden of visual data over wireless sensor networks. Many works in recent years have considered local-level prioritization parameters to enhance the overall performance of those networks, but global-level policies can potentially achieve better results in terms of visual monitoring efficiency. In this paper, we make a broad review of some recent works on priority-based optimizations in wireless visual sensor networks. Moreover, we envisage some research trends when exploiting prioritization, potentially fostering the development of promising optimizations for wireless sensor networks composed of visual sensors. PMID:25599425
Apparatus and Method for Assessing Vestibulo-Ocular Function
NASA Technical Reports Server (NTRS)
Shelhamer, Mark J. (Inventor)
2015-01-01
A system for assessing vestibulo-ocular function includes a motion sensor system adapted to be coupled to a user's head; a data processing system configured to communicate with the motion sensor system to receive the head-motion signals; a visual display system configured to communicate with the data processing system to receive image signals from the data processing system; and a gain control device arranged to be operated by the user and to communicate gain adjustment signals to the data processing system.
Nature as a model for biomimetic sensors
NASA Astrophysics Data System (ADS)
Bleckmann, H.
2012-04-01
Mammals, like humans, rely mainly on acoustic, visual and olfactory information. In addition, most also use tactile and thermal cues for object identification and spatial orientation. Most non-mammalian animals also possess a visual, acoustic and olfactory system. However, besides these systems they have developed a large variety of highly specialized sensors. For instance, pyrophilous insects use infrared organs for the detection of forest fires while boas, pythons and pit vipers sense the infrared radiation emitted by prey animals. All cartilaginous and bony fishes as well as some amphibians have a mechanosensory lateral line. It is used for the detection of weak water motions and pressure gradients. For object detection and spatial orientation many species of nocturnal fish employ active electrolocation. This review describes certain aspects of the detection and processing of infrared, mechano- and electrosensory information. It will be shown that the study of these seemingly exotic sensory systems can lead to discoveries that are useful for the construction of technical sensors and artificial control systems.
Visual tracking strategies for intelligent vehicle highway systems
NASA Astrophysics Data System (ADS)
Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles
1995-01-01
The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.
Development of voice navigation system for the visually impaired by using IC tags.
Takatori, Norihiko; Nojima, Kengo; Matsumoto, Masashi; Yanashima, Kenji; Magatani, Kazushige
2006-01-01
There are about 300,000 visually impaired persons in Japan. Most of them are elderly and cannot become skillful in using a white cane, even if they make an effort to learn how to use one. Therefore, guiding systems that support the independent activities of the visually impaired are required. In this paper, we describe a developed white cane system that supports the independent walking of the visually impaired in indoor spaces. This system is composed of colored navigation lines that include IC tags and an intelligent white cane that has a navigation computer. In our system, colored navigation lines put on the floor of the target space from the start point to the destination, and IC tags set at landmark points, are used to indicate the route to the destination. The white cane has a color sensor, an IC tag transceiver and a computer system that includes a voice processor. The white cane senses the navigation line that has the target color with its color sensor. When the color sensor finds the target color, the white cane informs the user by vibration that he/she is on the navigation line. So, only by following this vibration, the user can reach the destination. However, at some landmark points, guidance is necessary. At these points, an IC tag is set under the navigation line. The cane communicates with the tag and informs the user about the landmark point by pre-recorded voice. Ten normal subjects who were blindfolded were tested with our developed system. All of them could walk along the navigation line, and the IC tag information system worked well. Therefore, we have concluded that our system will be very valuable in supporting the activities of the visually impaired.
Ellingson, Roger M; Oken, Barry
2010-01-01
This report contains the design overview and key performance measurements demonstrating the feasibility of generating and recording ambulatory visual stimulus evoked potentials using the previously reported custom Complementary and Alternative Medicine physiologic data collection and monitoring system, CAMAS. The methods used to generate visual stimuli on a PDA device and the design of an optical coupling device to convert the display to an electrical waveform recorded by the CAMAS base unit are presented. The optical sensor signal, synchronized to the visual stimulus, emulates the brain's synchronized EEG signal input to CAMAS, which is normally reviewed for the evoked potential response. Most importantly, the PDA also sends a marker message over the wireless Bluetooth connection to the CAMAS base unit, synchronized to the visual stimulus, which is the critical averaging reference component for obtaining VEP results. Results show that the variance in the latency of the wireless marker messaging link is consistent enough to support the generation and recording of visual evoked potentials. The averaged sensor waveforms at multiple CPU speeds are presented and demonstrate the suitability of the Bluetooth interface for a portable ambulatory visual evoked potential implementation on our CAMAS platform.
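The marker messages are what make the averaging work: epochs of the recorded channel are extracted around each stimulus marker and averaged, so the stimulus-locked response survives while unsynchronized activity cancels. A synthetic sketch (all amplitudes and rates invented):

```python
import numpy as np

fs = 500                                   # sample rate (Hz)
n = fs * 60                                # one minute of one channel
rng = np.random.default_rng(1)
eeg = rng.normal(0, 5.0, n)                # background activity (uV)

markers = np.arange(fs, n - fs, fs)        # one stimulus marker per second
vep = 2.0 * np.exp(-((np.arange(fs) - 50)/20.0)**2)   # small evoked bump
for m in markers:
    eeg[m:m+fs] += vep                     # response locked to each marker

epochs = np.stack([eeg[m:m+fs] for m in markers])
average = epochs.mean(axis=0)              # evoked response emerges from noise
print(f"peak = {average.max():.2f} uV at {average.argmax()/fs*1000:.0f} ms")
```

Jitter in the Bluetooth marker latency smears this average, which is why the measured latency variance is the feasibility-critical number in the report.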
Pastorello, Gilberto Z.; Sanchez-Azofeifa, G. Arturo; Nascimento, Mario A.
2011-01-01
Ecosystems monitoring is essential to properly understand their development and the effects of events, both climatological and anthropological in nature. The amount of data used in these assessments is increasing at very high rates, due to the increasing availability of sensing systems and the development of new techniques to analyze sensor data. The Enviro-Net Project encompasses several such sensor system deployments across five countries in the Americas. These deployments use a few different ground-based sensor systems, installed at different heights, monitoring the conditions in tropical dry forests over long periods of time. This paper presents our experience in deploying and maintaining these systems, retrieving and pre-processing the data, and describes the Web portal developed to help with data management, visualization and analysis. PMID:22163965
Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.
Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue
2018-05-25
A novel multi-sensor fusion indoor localization algorithm based on the ArUco marker is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids the map distortion caused by a lack of correction. Based on the concept of multi-sensor information fusion, a federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, ultrasonic ranging and the inertial sensor, which obtains a continuous localization result and effectively reduces the position drift caused by long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 microcontrollers produced by STMicroelectronics (Geneva, Switzerland); thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation of the proposed system is better than that of PX4Flow, and it achieves centimeter accuracy in mapping and positioning. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra-wideband (UWB) beacons and lidar) to further improve localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
2005 Science and Technology for Chem-Bio Information Systems (S and T CBIS). Volume 2 - Wednesday
2005-10-28
historical example of using both an audible and visual alerting method. In April 1775, Revere hung two lanterns in the bell-tower of Christ Church in...individual building systems, outdoor systems, telephone notification systems and a network of alert sensors. Fire protection systems are often... sensor, be it a pushbutton at a gate, a wireless "panic" button or a CBRNE detector, may be programmed to trigger notifications without further
Central Asia Water (CAWa) - A visualization platform for hydro-meteorological sensor data
NASA Astrophysics Data System (ADS)
Stender, Vivien; Schroeder, Matthias; Wächter, Joachim
2014-05-01
Water is an indispensable necessity of life for people all over the world. In Central Asia, water is the key factor for economic development, but it is already a scarce resource in this region. In the face of climate change, handling the water problem will be a major challenge for the future. The regional research network "Central Asia Water" (CAWa) aims at providing a scientific basis for transnational water resources management for the five Central Asian states Kyrgyzstan, Uzbekistan, Tajikistan, Turkmenistan and Kazakhstan. CAWa is part of the Central Asia Water Initiative (also known as the Berlin Process), which was launched by the Federal Foreign Office on 1 April 2008 at the "Water Unites" conference in Berlin. To produce future scenarios and strategies for sustainable water management, data on water reserves and water use in Central Asia must be collected consistently across the region. Hydro-meteorological stations equipped with sophisticated sensors are installed in Central Asia and send their data via real-time satellite communication to the operation centre of the monitoring network and to the participating National Hydro-meteorological Services.[1] The challenge for CAWa is to integrate all aspects of data management, data workflows, data modeling and visualization into a properly designed monitoring infrastructure. The use of standardized interfaces to support data transfer and interoperability is essential in CAWa. A uniform treatment of sensor data can be realized with the OGC Sensor Web Enablement (SWE), which makes a number of standards and interface definitions available: the Observations & Measurements (O&M) model for the description of observations and measurements, the Sensor Model Language (SensorML) for the description of sensor systems, the Sensor Observation Service (SOS) for obtaining sensor observations, the Sensor Planning Service (SPS) for tasking sensors, the Web Notification Service (WNS) for asynchronous dialogues and the Sensor Alert Service (SAS) for sending alerts. An open-source web platform bundles the data provided by the SWE web services of the hydro-meteorological stations and provides tools for data visualization and data access. The visualization tool was implemented using open-source tools such as GeoExt/ExtJS and OpenLayers. Using the application, the user can query the relevant sensor data, select parameters and time periods, and visualize and finally download the data. [1] http://www.cawa-project.net
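Of the SWE interfaces listed above, the Sensor Observation Service is the one a portal like this queries for measurements. A minimal sketch of an SOS 2.0 GetObservation call over its standard key-value binding (Python with the requests library); the endpoint, offering and observed property below are hypothetical placeholders, not the actual CAWa identifiers:

import requests

SOS_URL = "https://example.org/sos"  # hypothetical endpoint
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "hydromet_station_1",        # hypothetical offering
    "observedProperty": "air_temperature",   # hypothetical property
    "temporalFilter": "om:phenomenonTime,2014-01-01T00:00:00Z/2014-01-31T23:59:59Z",
}
response = requests.get(SOS_URL, params=params, timeout=30)
print(response.status_code)
print(response.text[:500])  # O&M-encoded observations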
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks, and the possibility of such actions in the future, have driven the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on a ring of two-zone fencing and visual cameras with illumination, is being displaced by multisensor systems that consist of: visible-band technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - microwave and millimetre wavelengths that record and detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the technical conditions of installation and the parameters of the sensors. This procedure enables us to construct a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of cameras, automatic guiding of cameras to an object detected by the radar, tracking of the object and localization of the object on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, this system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
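The sensor-to-sensor cueing described above reduces, in its simplest form, to converting a radar detection into pointing angles for a pan/tilt camera. A geometric sketch assuming co-located sensors sharing a north reference (function and parameter names are illustrative; a real installation would apply a calibrated extrinsic offset):

import math

def cue_camera(radar_range_m, radar_azimuth_deg,
               target_height_m=0.0, camera_height_m=4.0):
    # Pan follows the radar azimuth directly under the shared-origin assumption.
    pan = radar_azimuth_deg
    # Tilt points the optical axis down (or up) toward the detected range.
    dz = target_height_m - camera_height_m
    tilt = math.degrees(math.atan2(dz, radar_range_m))
    return pan, tilt

print(cue_camera(250.0, 37.5))  # -> (37.5, about -0.9 degrees)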
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking, moving cast shadows can be misclassified as parts of objects or as moving objects themselves. Shadow removal is therefore an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectories, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling sideways, from normal activities. PMID:22368486
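The shadow removal step is not detailed in the abstract, but a common baseline it could be compared against is a chromaticity test against a background model: a cast-shadow pixel keeps the background's hue and saturation while its value dims. A hedged OpenCV sketch of that baseline (thresholds are illustrative, not the authors' values; hue wraparound is ignored for brevity):

import cv2
import numpy as np

def shadow_mask(frame_bgr, background_bgr,
                v_lo=0.4, v_hi=0.95, s_tol=40, h_tol=20):
    f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    ratio = f[..., 2] / (b[..., 2] + 1e-6)             # value dimming
    mask = ((ratio > v_lo) & (ratio < v_hi)
            & (np.abs(f[..., 1] - b[..., 1]) < s_tol)  # similar saturation
            & (np.abs(f[..., 0] - b[..., 0]) < h_tol)) # similar hue
    return mask.astype(np.uint8) * 255                 # 255 where shadow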
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has received little attention, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
Multispectral photoacoustic tomography for detection of small tumors inside biological tissues
NASA Astrophysics Data System (ADS)
Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya
2018-02-01
Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables therapeutic effect monitoring. For sensitive detection of tumors, we have been developing a photoacoustic (PA) imaging technique to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor of a mouse using the contrast agents. To image tumors inside biological tissues, an extension of imaging depth and an improvement in sensitivity were required. In this study, to extend imaging depth, we developed a PA tomography (PAT) system that can image an entire cross section of a mouse. To improve sensitivity, we examined the use of a P(VDF-TrFE) linear-array acoustic sensor that can detect PA signals over a wide range of frequencies. Because PA signals produced by low-absorbance optical absorbers shift to lower frequencies, we hypothesized that the detection of low-frequency PA signals improves sensitivity to low-absorbance optical absorbers. We developed a PAT system with both a PZT linear-array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low-frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.
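The frequency-selective reconstruction described above can be approximated by band-limiting each recorded A-line before image formation. A minimal SciPy sketch of keeping the low-frequency band that favours weakly absorbing targets (the sampling rate and cutoff are illustrative placeholders, not the paper's values):

import numpy as np
from scipy.signal import butter, filtfilt

def low_band(pa_signal, fs_hz, cutoff_hz=2e6):
    # 4th-order zero-phase low-pass; keeps the band where small,
    # low-absorbance absorbers concentrate their PA energy.
    b, a = butter(4, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, pa_signal)

a_line = np.random.randn(4096)           # stand-in for a recorded A-line
filtered = low_band(a_line, fs_hz=50e6)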
Visual Image Sensor Organ Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.
2014-01-01
This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensor data (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
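As a rough illustration of the image-to-sound idea (not the actual VISOR mapping), the sketch below sweeps a grayscale image left to right in time while mapping rows to tone frequencies and pixel brightness to amplitude:

import numpy as np

def image_to_audio(img, fs=44100, duration=1.0, f_lo=200.0, f_hi=5000.0):
    # img: 2-D uint8 grayscale array; the top row maps to the highest pitch.
    rows, cols = img.shape
    freqs = np.linspace(f_hi, f_lo, rows)
    n = int(fs * duration / cols)          # samples per image column
    t = np.arange(n) / fs
    audio = np.concatenate([
        (img[:, c, None] / 255.0
         * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(cols)])
    return audio / (np.max(np.abs(audio)) + 1e-12)  # normalize to [-1, 1]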
Multi-Source Sensor Fusion for Small Unmanned Aircraft Systems Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Cook, Brandon; Cohen, Kelly
2017-01-01
As applications for using small Unmanned Aircraft Systems (sUAS) beyond visual line of sight (BVLOS) continue to grow in the coming years, it is imperative that intelligent sensor fusion techniques be explored. In BVLOS scenarios the vehicle position must be tracked accurately over time to ensure that no two vehicles collide with one another, that no vehicle crashes into surrounding structures, and to identify off-nominal scenarios. Therefore, in this study an intelligent systems approach is used to estimate the position of sUAS given a variety of sensor platforms, including GPS, radar, and on-board detection hardware. Common research challenges include asynchronous sensor rates and sensor reliability. In an effort to address these challenges, techniques such as Maximum a Posteriori estimation and a Fuzzy Logic-based sensor confidence determination are used.
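The fuzzy-confidence idea can be illustrated with a single membership function over data age: fresher reports earn higher weight before a weighted fusion. The breakpoints and one-variable rule base below are illustrative assumptions, not the study's actual fuzzy system:

def triangular(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fused_position(readings):
    # readings: list of (position, age_s) pairs from asynchronous sensors.
    weights = [triangular(age, -1.0, 0.0, 2.0) for _, age in readings]
    total = sum(weights) or 1.0
    return sum(w * p for (p, _), w in zip(readings, weights)) / total

# GPS fix 0.1 s old versus a radar track 1.5 s old
print(fused_position([(105.2, 0.1), (104.7, 1.5)]))  # closer to the GPS fix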
Real-Time Workload Monitoring: Improving Cognitive Process Models
2010-10-01
Research or comparable systems with similar technical properties having been made available on the market by now. Remote sensors lack the required visual...questionnaire. This includes age, gender, alcohol and nicotine consumption, visual status, sleep during the last three days and last night, sportive
NASA Astrophysics Data System (ADS)
York, Andrew M.
2000-11-01
The ever increasing sophistication of reconnaissance sensors reinforces the importance of timely, accurate, and equally sophisticated mission planning capabilities. Precision targeting and zero tolerance for collateral damage and civilian casualties stress the need for accuracy and timeliness. Recent events have highlighted the need for improvement in current planning procedures and systems. Annotating printed maps takes time and does not allow flexibility for the rapid changes required in today's conflicts. We must give aircrew the ability to accurately navigate their aircraft to an area of interest, correctly position the sensor to obtain the required sensor coverage, adapt missions as required, and ensure mission success. The growth in automated mission planning system capability, and the expansion of those systems to include dedicated and integrated reconnaissance modules, helps to overcome current limitations. Mission planning systems, coupled with extensive integrated visualization capabilities, allow aircrew not only to plan accurately and quickly, but also to know precisely when they will locate the target and to visualize what the sensor will see during its operation. This paper provides a broad overview of current capabilities and describes how automated mission planning and visualization systems can improve and enhance the reconnaissance planning process and contribute to mission success. Think about the ultimate objective of the reconnaissance mission as we consider areas where technology can offer improvement. As we briefly review the fundamentals, remember where and how TAC RECCE systems will be used. Try to put yourself in the mindset of those who are on the front lines, working long hours at increasingly demanding tasks, trying to become familiar with new operating areas and equipment, while striving to minimize risk and optimize mission success. Technical advancements that can reduce the TAC RECCE timeline, simplify operations and instill Warfighter confidence ultimately improve the desired outcome.
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which is highly independent and precise and not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAVs and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. This paper surveys the development of computer vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three tasks. (1) Acquisition of UAV navigation parameters: parameters including UAV attitude, position and velocity can be obtained from the relationship between sensor images and the carrier's attitude, between instantaneously matched images and reference images, and between the carrier's velocity and features of sequential images. (2) Autonomous obstacle avoidance: there are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning: using the acquired images, UAV position is calculated with the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also describes three kinds of mainstream visual systems. (1) High-speed visual systems use a parallel structure in which image detection and processing are carried out at high speed; they are applied in rapid-response systems. (2) Distributed-network visual systems place several discrete image acquisition sensors at different locations, which transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers pair image sensors with external observers to compensate for the limitations of the visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frame rates, low processing efficiency and strong noise. Finally, the difficulties of computer vision-based navigation in practical applications are briefly discussed: (1) the huge workload of image operations makes the real-time performance of such systems poor; (2) strong environmental effects make their anti-interference ability poor; and (3) being limited to particular environments makes their adaptability poor.
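Of the velocity-from-sequential-images methods surveyed, dense optical flow is the most common. A hedged sketch of estimating carrier velocity from two consecutive downward-looking frames with OpenCV's Farneback method (the ground-sampling scale, which in practice comes from altitude and camera intrinsics, is an assumed input):

import cv2
import numpy as np

def flow_velocity(prev_gray, curr_gray, dt, metres_per_pixel):
    # prev_gray, curr_gray: consecutive 8-bit grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mean_px = flow.reshape(-1, 2).mean(axis=0)  # average pixel displacement
    return mean_px * metres_per_pixel / dt       # (vx, vy) in metres/second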
Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.
Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos
2017-08-12
In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces to allow the real-time implementation, as well as the design of different processing stages with their respective communication architectures. All of these issues and others suggest that real-time implementations can be considered a difficult task. To show the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689
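The classical core that the paper's neural network adapts online is a discrete PID law. A fixed-gain sketch is shown for reference (gains, time step and the feature-error example are illustrative, not the paper's tuned controller):

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error):
        # error: e.g., desired minus measured image-feature velocity.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)   # 50 Hz control loop
command = pid.update(error=0.35)               # one velocity command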
Driver Distraction Using Visual-Based Sensors and Algorithms.
Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén
2016-10-28
Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems use only a single visual cue and may therefore be easily disturbed when occlusions or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with further visual cues (e.g., hand or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be embedded in a device or system inside the car. This is not a trivial task, and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction are also reviewed. This paper reviews the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved are also addressed. PMID:27801822
Distributed Stress Sensing and Non-Destructive Tests Using Mechanoluminescence Materials
NASA Astrophysics Data System (ADS)
Rahimi, Mohammad Reza
Rapid aging of infrastructure systems is currently pervasive in the US, and the anticipated cost of rehabilitating aging lifelines through 2020 will reach 3.6 trillion US dollars (ASCE 2013). Reliable condition or serviceability assessment is critically important in decision-making for economic and timely maintenance of infrastructure systems. Advanced sensors and nondestructive test (NDT) methods are the key technologies for structural health monitoring (SHM) applications that can provide information on the current state of structures. There are many traditional sensors and NDT methods, for example strain gauges, ultrasound, radiography and other X-ray methods, for detecting defects in infrastructure. Considering that civil infrastructure is typically large-scale and exhibits complex behavior, estimating structural conditions with local sensing and NDT methods is a challenging task. Non-contact and distributed (or full-field) sensing and NDT methods are desirable because they can provide rich information on the state of civil infrastructure. Materials with the ability to emit light, especially in the visible range, are known as luminescent materials. The mechanoluminescence (ML) phenomenon is the light emission from luminescent materials in response to an induced mechanical stress. ML materials offer new opportunities for SHM because they can directly visualize the stress and crack distributions on the surface of structures through ML light emission. Although substantial materials research on ML phenomena has been conducted, applications of ML sensors to full-field stress and crack visualization are still at an infant stage and have yet to become full-fledged. Moreover, practical applications of ML sensors for SHM of civil infrastructure face difficulties, since numerous challenging problems (e.g., environmental effects) arise in actual applications. In order to realize a practical SHM system employing ML sensors, more research needs to be conducted on, for example, a fundamental understanding of the physics of the ML phenomenon, methods for quantitative stress measurement, calibration methods for ML sensors, improvement of sensitivity, optimal manufacturing and design of ML sensors, environmental effects on the ML phenomenon (e.g., temperature), and image processing and analysis. In this research, the fundamental ML phenomena of the two most promising ML sensing materials were experimentally studied, and a methodology for full-field quantitative strain measurement was proposed for the first time, along with a standardized calibration method. The characteristics and behavior of ML composites and thin films coated on structures were studied under various material tests including compression, tension, pure shear and bending. In addition, the sensitivity of ML emission to manufacturing parameters and experimental conditions was addressed in order to find the optimal design of the ML sensor. A phenomenological stress-optics transduction model for predicting the ML light intensity from a thin-film ML coating sensor subjected to in-plane stresses was proposed. A new full-field quantitative strain measuring methodology using the ML thin-film sensor was developed, for the first time, in order to visualize and measure the strain field. The results from the ML sensor were compared with and verified against finite element simulation results. For NDT applications of ML sensors, experimental tests were conducted to visualize cracks on structural surfaces and detect damage in structural components.
In summary, this research proposes and realizes a new distributed stress sensor and NDT method using ML sensing materials. The proposed method is experimentally validated to be effective for stress measurement and crack visualization. Successful completion of this research provides a leap toward a commercial light-intensity-based optical sensor to be used as a new full-field stress measurement technology and NDT method.
Unmanned Ground Vehicles for Integrated Force Protection
2004-04-01
employed. ...visual systems. Attaching sensors and response devices on a monorail proved to be much more technically challenging than expected. Film producers and... facilitate experimentation with weapon aiming and firing techniques from the MRHA. ...grated Marsupial Delivery System was developed to transport smaller
Code of Federal Regulations, 2014 CFR
2014-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
Code of Federal Regulations, 2013 CFR
2013-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., or to temperature simulation devices. (vi) Conduct a visual inspection of each sensor every quarter... sensor values with electronic signal simulations or via relative accuracy testing. (v) Perform accuracy... values with electronic signal simulations or with values obtained via relative accuracy testing. (vi...
The rapid terrain visualization interferometric synthetic aperture radar sensor
NASA Astrophysics Data System (ADS)
Graham, Robert H.; Bickel, Douglas L.; Hensley, William H.
2003-11-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real-time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
Reconfigurable Auditory-Visual Display
NASA Technical Reports Server (NTRS)
Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)
2008-01-01
System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.
Seamless Tracing of Human Behavior Using Complementary Wearable and House-Embedded Sensors
Augustyniak, Piotr; Smoleń, Magdalena; Mikrut, Zbigniew; Kańtoch, Eliasz
2014-01-01
This paper presents a multimodal system for seamless surveillance of elderly people in their living environment. The system simultaneously uses a wearable sensor network for each individual and premise-embedded sensors specific to each environment. The paper demonstrates the benefits of using complementary information from two types of mobility sensors: visual flow-based image analysis and an accelerometer-based wearable network. The paper provides results for indoor recognition of several elementary poses and outdoor recognition of complex movements. Rather than a complete system description, particular attention is given to a polar histogram-based method of visual pose recognition, the complementary use and synchronization of data from the wearable and premise-embedded networks, and an automatic danger detection algorithm driven by two premise- and subject-related databases. The novelty of our approach also consists in feeding the databases with real-life recordings from the subject, and in using the dynamic time warping algorithm to measure the distance between actions represented as elementary poses in behavioral records. The main results of testing our method include: 95.5% accuracy of elementary pose recognition by the video system, 96.7% accuracy of elementary pose recognition by the accelerometer-based system, 98.9% accuracy of elementary pose recognition by the combined accelerometer- and video-based system, and 80% accuracy of complex outdoor activity recognition by the accelerometer-based wearable system. PMID:24787640
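The distance between behavioural records mentioned above is computed with dynamic time warping. A compact sketch of DTW over sequences of elementary-pose codes (the numeric pose encoding is an illustrative assumption):

import numpy as np

def dtw_distance(a, b):
    # a, b: sequences of pose codes or feature vectors.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(
                np.atleast_1d(np.asarray(a[i - 1], float)
                              - np.asarray(b[j - 1], float)))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# observed pose sequence versus a reference behavioural record
print(dtw_distance([0, 0, 1, 2, 2], [0, 1, 1, 2]))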
Microfabricated Hydrogen Sensor Technology for Aerospace and Commercial Applications
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Bickford, R. L.; Jansa, E. D.; Makel, D. B.; Liu, C. C.; Wu, Q. H.; Powers, W. T.
1994-01-01
Leaks on the Space Shuttle while on the Launch Pad have generated interest in hydrogen leak monitoring technology. An effective leak monitoring system requires reliable hydrogen sensors, hardware, and software to monitor the sensors. The system should process the sensor outputs and provide real-time leak monitoring information to the operator. This paper discusses the progress in developing such a complete leak monitoring system. Advanced microfabricated hydrogen sensors are being fabricated at Case Western Reserve University (CWRU) and tested at NASA Lewis Research Center (LeRC) and Gencorp Aerojet (Aerojet). Changes in the hydrogen concentrations are detected using a PdAg on silicon Schottky diode structure. Sensor temperature control is achieved with a temperature sensor and heater fabricated onto the sensor chip. Results of the characterization of these sensors are presented. These sensors can detect low concentrations of hydrogen in inert environments with high sensitivity and quick response time. Aerojet is developing the hardware and software for a multipoint leak monitoring system designed to provide leak source and magnitude information in real time. The monitoring system processes data from the hydrogen sensors and presents the operator with a visual indication of the leak location and magnitude. Work has commenced on integrating the NASA LeRC-CWRU hydrogen sensors with the Aerojet designed monitoring system. Although the leak monitoring system was designed for hydrogen propulsion systems, the possible applications of this monitoring system are wide ranged. Possible commercialization of the system will also be discussed.
A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles
NASA Technical Reports Server (NTRS)
Delgado, Frank; Abernathy, Mike
2004-01-01
A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of remotely piloted vehicles. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the two information sources, operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information-rich visuals that function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations of each approach. Video sensor systems are not very useful when visibility is hampered by rain, snow, sand, fog, or smoke, while an SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information in an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation provides an overview of the system, its evolution, the results of flight tests, and future plans. Furthermore, the safety benefits of SC3D over traditional and purely synthetic vision systems are discussed.
Martinaitis, Arnas; Daunoraviciene, Kristina
2018-05-18
Prolonged sitting causes many health problems. Healthy-sitting monitoring systems, such as real-time pressure distribution measurement, are in high demand, and many methods of posture recognition have been developed. Such systems are usually expensive and rarely available to the regular user. The aim of this study is to develop low-cost but sufficiently sensitive pressure sensors and a posture monitoring system. New self-made pressure sensors were developed and tested, and a prototype pressure distribution measuring system was designed. The sensors measured an average noise amplitude of a = 56 mV (1.12%) and an average variation between sequential measurements of the same sensor of s = 17 mV (0.34%). Signal variability between sensors averaged 100 mV (2.0%). A weight-to-signal dependency graph was measured and the hysteresis calculated. The results suggested using a total of sixteen sensors for a posture monitoring system, with an accuracy of < 1.5% after relaxation and a repeatability of around 2%. The results demonstrate that the hand-made sensors' sensitivity and repeatability are acceptable for posture monitoring, and that it is possible to build a low-cost pressure distribution measurement system with graphical visualization without expensive equipment or complicated software.
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and after every deviation. Accuracy audit methods include comparisons of sensor values with electronic signal simulations or...
A new SMART sensing system for aerospace structures
NASA Astrophysics Data System (ADS)
Zhang, David C.; Yu, Pin; Beard, Shawn; Qing, Peter; Kumar, Amrita; Chang, Fu-Kuo
2007-04-01
It is essential to ensure the safety and reliability of in-service structures such as unmanned vehicles by detecting structural cracking, corrosion, delamination, material degradation and other types of damage in time. Utilization of an integrated sensor network system can ultimately enable automatic inspection for such damage. Using a built-in network of actuators and sensors, Acellent is providing tools for advanced structural diagnostics. Acellent's integrated structural health monitoring system consists of an actuator/sensor network, supporting signal generation and data acquisition hardware, and data processing, visualization and analysis software. This paper describes the various features of Acellent's latest SMART sensing system. The new system is USB-based and ultra-portable, using state-of-the-art technology while delivering many functions such as system self-diagnosis, sensor diagnosis, through-transmission and pulse-echo modes of operation, and temperature measurement. The performance of the new system was evaluated for the assessment of damage in composite structures.
Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.
2016-01-01
Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks
ERIC Educational Resources Information Center
Yu, Chao
2013-01-01
In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multi-video data collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor node inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.
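At the heart of the LS-SVM formulation referenced above is a single linear system in place of a quadratic program. A hedged sketch of batch LS-SVM training in its function-estimation form (RBF kernel; hyperparameters illustrative; the paper's incremental and multiclass extensions are not shown):

import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                  # bias b, support values alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
print(np.sign(lssvm_predict(X, b, alpha, np.array([[2.5]]))))  # [1.]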
An Inflatable and Wearable Wireless System for Making 32-Channel Electroencephalogram Measurements.
Yu, Yi-Hsin; Lu, Shao-Wei; Chuang, Chun-Hsiang; King, Jung-Tai; Chang, Che-Lun; Chen, Shi-An; Chen, Sheng-Fu; Lin, Chin-Teng
2016-07-01
Portable electroencephalography (EEG) devices have become critical for important research. They have various applications, such as brain-computer interfaces (BCI). Numerous recent investigations have focused on the development of dry sensors, but few concern the simultaneous attachment of high-density dry sensors to different regions of the scalp to receive high-quality EEG signals from hairy sites. An inflatable and wearable wireless 32-channel EEG device was designed, prototyped, and experimentally validated for making EEG signal measurements; it incorporates spring-loaded dry sensors and a novel gasbag design to solve the problem of interference by hair. The cap is ventilated and incorporates a circuit board and battery with a high-tolerance wireless (Bluetooth) protocol and low power consumption. The proposed system provides a 500/250 Hz sampling rate and 24-bit EEG data to meet BCI system data requirements. Experimental results prove that the proposed EEG system is effective in measuring auditory event-related potentials, measuring visual event-related potentials, and supporting rapid serial visual presentation. The results of this work demonstrate that the proposed EEG cap system performs well in making EEG measurements and is feasible for practical applications.
Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan
2015-10-21
The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in two-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347
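For reference, the final embedding step can be reproduced with an off-the-shelf t-SNE implementation once the feature subset has been selected. A minimal scikit-learn sketch on stand-in data (random features here, not the diesel-engine dataset):

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))   # rows: samples; cols: selected features

embedding = TSNE(n_components=2, perplexity=30.0,
                 init="pca", random_state=0).fit_transform(features)
print(embedding.shape)                  # (200, 2) points to scatter-plot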
Therapeutic hypertension system based on a microbreathing pressure sensor system.
Diao, Ziji; Liu, Hongying; Zhu, Lan; Gao, Xiaoqiang; Zhao, Suwen; Pi, Xitian; Zheng, Xiaolin
2011-01-01
A novel therapeutic system for the treatment of hypertension was developed on the basis of a slow-breathing training mechanism, using a microbreathing pressure sensor device attached to the abdomen for the detection of human respiratory signals. The system utilizes a single-chip AT89C51 microcomputer as its core processor and communicates with a PC via a full-speed PDIUSBD12 interface chip, with the software programmed in Microsoft Visual C++ 6.0. The programming is based on a slow-breathing guidance algorithm in which the respiratory signal serves as a physiological feedback parameter. Inhalation and exhalation by the subject are guided by music signals. Our study indicates that this microbreathing sensor system may assist in slow-breathing training and may help to decrease blood pressure.
Plantar pressure cartography reconstruction from 3 sensors.
Abou Ghaida, Hussein; Mottet, Serge; Goujon, Jean-Marc
2014-01-01
Foot problems are often diagnosed using pressure mapping systems, which are unfortunately confined to laboratories. In the context of e-health and telemedicine for home monitoring of patients with foot problems, our focus is to present a system acceptable for daily use. We developed an ambulatory instrumented insole using three pressure sensors to visualize plantar pressure cartographies. We show that a standard insole with fixed sensor positions can be used for different foot sizes. The results show an average error, measured at each pixel, of 0.01 daN, with a standard deviation of 0.005 daN.
Systems and methods for analyzing building operations sensor data
Mezic, Igor; Eisenhower, Bryan A.
2015-05-26
Systems and methods are disclosed for analyzing building sensor information and decomposing the information therein to a more manageable and more useful form. Certain embodiments integrate energy-based and spectral-based analysis methods with parameter sampling and uncertainty/sensitivity analysis to achieve a more comprehensive perspective of building behavior. The results of this analysis may be presented to a user via a plurality of visualizations and/or used to automatically adjust certain building operations. In certain embodiments, advanced spectral techniques, including Koopman-based operations, are employed to discern features from the collected building sensor data.
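One widely used Koopman-based spectral technique of the kind referenced above is dynamic mode decomposition (DMD), which fits a linear operator advancing sensor snapshots one time step and inspects its spectrum. A hedged NumPy sketch offered as an illustration of this family of methods, not the patented analysis:

import numpy as np

def dmd(X, r=10):
    # X: rows = building sensor channels, columns = time snapshots;
    # r: truncation rank (must not exceed min(X.shape) - 1).
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)           # spectrum of the dynamics
    modes = X2 @ Vh.conj().T / s @ W              # spatial DMD modes
    return eigvals, modes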
A Hydrogen Leak Detection System for Aerospace and Commercial Applications
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Makel, D. B.; Jansa, E. D.; Patterson, G.; Cova, P. J.; Liu, C. C.; Wu, Q. H.; Powers, W. T.
1995-01-01
Leaks on the space shuttle while on the launch pad have generated interest in hydrogen leak monitoring technology. Microfabricated hydrogen sensors are being fabricated at Case Western Reserve University (CWRU) and tested at NASA Lewis Research Center (LeRC). These sensors have been integrated into hardware and software designed by Aerojet. This complete system allows for multipoint leak monitoring designed to provide leak source and magnitude information in real time. The monitoring system processes data from the hydrogen sensors and presents the operator with a visual indication of the leak location and magnitude. Although the leak monitoring system was designed for hydrogen propulsion systems, the possible applications of this monitoring system are wide ranged. This system is in operation in an automotive application which requires high sensitivity to hydrogen.
40 CFR 60.482-2 - Standards: Pumps in light liquid service.
Code of Federal Regulations, 2011 CFR
2011-07-01
...; or (ii) Equipped with a barrier fluid degassing reservoir that is routed to a process or fuel gas... in VOC service. (3) Each barrier fluid system is equipped with a sensor that will detect failure of...) Designate the visual indications of liquids dripping as a leak. (5)(i) Each sensor as described in paragraph...
40 CFR 60.482-2 - Standards: Pumps in light liquid service.
Code of Federal Regulations, 2010 CFR
2010-07-01
...; or (ii) Equipped with a barrier fluid degassing reservoir that is routed to a process or fuel gas... in VOC service. (3) Each barrier fluid system is equipped with a sensor that will detect failure of...) Designate the visual indications of liquids dripping as a leak. (5)(i) Each sensor as described in paragraph...
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
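The water-level determination can be illustrated with a much-simplified sketch: scan a region of interest over a reference gauge for the strongest vertical brightness change, then convert that row to a physical level via calibration. Function names and the linear calibration are illustrative assumptions, not the authors' algorithm:

import numpy as np

def water_line_row(gray_roi):
    # gray_roi: 2-D grayscale crop over the gauge; returns the row index
    # with the strongest vertical brightness transition (the waterline).
    profile = gray_roi.mean(axis=1)
    return int(np.argmax(np.abs(np.diff(profile)))) + 1

def row_to_level(row, row_zero, metres_per_row):
    # Linear calibration from known gauge marks (values hypothetical).
    return (row_zero - row) * metres_per_row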
Audio-Visual Situational Awareness for General Aviation Pilots
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)
2001-01-01
Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.
Development of Cloud-Based UAV Monitoring and Management System
Itkin, Mason; Kim, Mihui; Park, Younghee
2016-01-01
Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (e.g., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation. PMID:27854267
Development of Cloud-Based UAV Monitoring and Management System.
Itkin, Mason; Kim, Mihui; Park, Younghee
2016-11-15
Unmanned aerial vehicles (UAVs) are an emerging technology with the potential to revolutionize commercial industries and the public domain outside of the military. UAVs would be able to speed up rescue and recovery operations from natural disasters and can be used for autonomous delivery systems (e.g., Amazon Prime Air). An increase in the number of active UAV systems in dense urban areas is attributed to an influx of UAV hobbyists and commercial multi-UAV systems. As airspace for UAV flight becomes more limited, it is important to monitor and manage many UAV systems using modern collision avoidance techniques. In this paper, we propose a cloud-based web application that provides real-time flight monitoring and management for UAVs. For each connected UAV, detailed UAV sensor readings from the accelerometer, GPS sensor, ultrasonic sensor and visual position cameras are provided along with status reports from the smaller internal components of UAVs (e.g., motor and battery). The dynamic map overlay visualizes active flight paths and current UAV locations, allowing the user to monitor all aircraft easily. Our system detects and prevents potential collisions by automatically adjusting UAV flight paths and then alerting users to the change. We develop our proposed system and demonstrate its feasibility and performance through simulation.
Localization and Tracking of Implantable Biomedical Sensors
Umay, Ilknur; Fidan, Barış; Barshan, Billur
2017-01-01
Implantable sensor systems are effective tools for biomedical diagnosis, visualization and treatment of various health conditions, attracting the interest of researchers, as well as healthcare practitioners. These systems efficiently and conveniently provide essential data of the body part being diagnosed, such as gastrointestinal (temperature, pH, pressure) parameter values, blood glucose and pressure levels and electrocardiogram data. Such data are first transmitted from the implantable sensor units to an external receiver node or network and then to a central monitoring and control (computer) unit for analysis, diagnosis and/or treatment. Implantable sensor units are typically in the form of mobile microrobotic capsules or implanted stationary (body-fixed) units. In particular, capsule-based systems have attracted significant research interest recently, with a variety of applications, including endoscopy, microsurgery, drug delivery and biopsy. In such implantable sensor systems, one of the most challenging problems is the accurate localization and tracking of the microrobotic sensor unit (e.g., robotic capsule) inside the human body. This article presents a literature review of the existing localization and tracking techniques for robotic implantable sensor systems, together with their merits and limitations and possible solutions for the proposed localization methods. The article also provides a brief discussion on the connection and cooperation of such techniques with wearable biomedical sensor systems. PMID:28335384
Augmented reality visualization of deformable tubular structures for surgical simulation.
Ferrari, Vincenzo; Viglialoro, Rosanna Maria; Nicoli, Paola; Cutolo, Fabrizio; Condino, Sara; Carbone, Marina; Siesto, Mentore; Ferrari, Mauro
2016-06-01
Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations impressed on the actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors. This system was tested in vitro on a setup composed of sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gaviña, J. R.; Uy, F. A.; Carreon, J. D.
2017-06-01
There are over 8000 bridges in the Philippines today according to the Department of Public Works and Highways (DPWH). Currently, visual inspection is the most common practice in monitoring the structural integrity of bridges. However, visual inspections have proven to be insufficient in determining the actual health or condition of a bridge. Structural Health Monitoring (SHM) aims to give, in real time, a diagnosis of the actual condition of the bridge. In this study, SmartBridge Sensor Nodes were installed on an existing concrete bridge with American Association of State Highway and Transportation Officials (AASHTO) Type IV girders to gather vibration data from the elements of the bridge. Standards for the effective installation of SmartBridge Sensor Nodes, such as location and orientation, were also determined. Acceleration readings from the sensors were then uploaded to a server, where they are monitored against certain thresholds from which the health of the bridge is derived. The final output is a web portal where the information, health status, and acceleration readings of the bridge are available for viewing. With levels of access set for different types of users, the main users have access to download data and reports. Data transmission and webpage access are available online, making the SHM system wireless.
Design and Development of a Mobile Sensor Based the Blind Assistance Wayfinding System
NASA Astrophysics Data System (ADS)
Barati, F.; Delavar, M. R.
2015-12-01
The blind and visually impaired face a number of challenges in their daily lives. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most of the blind undertake route finding and navigation with the help of a guide. In addition, other tools such as a cane, a guide dog, or electronic aids are used. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, developing effective non-visual decision-support methods is key to improving the blind's quality of life through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are to guide the blind in obstacle recognition and to design and implement a wayfinding and navigation mobile sensor system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation in the wayfinding. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals and, with the speed of sound in the environment, estimates the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area are already stored in a GIS database, and all of these obstacles were labeled on the map. The ultrasonic sensor designed and constructed in this study has the ability to detect obstacles at a distance of 2 cm to 400 cm. The implementation, and the results obtained from interviews with a number of blind persons who employed the sensor, verified that the designed mobile sensor system for wayfinding was very satisfactory.
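The echo time-of-flight relation described above is simple to state in code. A minimal sketch, assuming the sensor reports the round-trip echo interval and an ambient temperature estimate is available (the 2-400 cm limits are the ones quoted in the abstract; function names are illustrative):

```python
def sound_speed_ms(temp_c: float) -> float:
    # Speed of sound in air rises with temperature (~331.3 m/s at 0 degC).
    return 331.3 + 0.606 * temp_c

def echo_distance_cm(echo_time_s: float, temp_c: float = 20.0) -> float:
    """Distance to an obstacle from an ultrasonic echo interval.

    The pulse travels to the obstacle and back, so the one-way distance
    is half the round-trip time multiplied by the speed of sound.
    """
    d_cm = sound_speed_ms(temp_c) * echo_time_s / 2.0 * 100.0
    if not 2.0 <= d_cm <= 400.0:   # the sensor's stated working range
        raise ValueError("obstacle outside the 2-400 cm detection range")
    return d_cm
```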
A new terminal guidance sensor system for asteroid intercept or rendezvous missions
NASA Astrophysics Data System (ADS)
Lyzhoft, Joshua; Basart, John; Wie, Bong
2016-02-01
This paper presents the initial conceptual study results of a new terminal guidance sensor system for asteroid intercept or rendezvous missions, which explores the use of visual, infrared, and radar devices. As was demonstrated by NASA's Deep Impact mission, visual cameras can be effectively utilized for hypervelocity intercept terminal guidance for a 5 kilometer target. Other systems such as Raytheon's EKV (Exoatmospheric Kill Vehicle) employ a different scheme that utilizes infrared target information to intercept ballistic missiles. Another example that uses infrared information is the NEOWISE telescope, which is used for asteroid detection and tracking. This paper describes the signal-to-noise ratio estimation problem for infrared sensors, minimum and maximum range of detection, and computational validation using GPU-accelerated simulations. Small targets (50-100 m in diameter) are considered, and scaled polyhedron models of known objects, such as the Rosetta mission's Comet 67P/Churyumov-Gerasimenko, 101955 Bennu, target of the OSIRIS-REx mission, and asteroid 433 Eros, are utilized. A parallelized ray tracing algorithm to simulate realistic surface-to-surface shadowing of a given celestial body is developed. By using the simulated models and parameters given from the formulation of the different sensors, impact mission scenarios are used to verify the feasibility of intercepting a small target.
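The signal-to-noise estimation problem mentioned above is commonly treated with an idealized point-source range equation. A hedged sketch of that textbook form, not the paper's actual model (all parameter names and values are placeholders):

```python
import math

def ir_snr(radiant_intensity_w_sr: float, range_m: float,
           aperture_diam_m: float, optics_transmission: float,
           nep_w: float) -> float:
    """Point-source SNR for an idealized IR sensor.

    Signal power collected at the detector falls off with the inverse
    square of range; SNR is that power divided by the detector's
    noise-equivalent power (NEP).
    """
    aperture_area = math.pi * (aperture_diam_m / 2.0) ** 2
    signal_w = (radiant_intensity_w_sr * aperture_area *
                optics_transmission / range_m ** 2)
    return signal_w / nep_w

def max_detection_range_m(radiant_intensity_w_sr, aperture_diam_m,
                          optics_transmission, nep_w, snr_required=6.0):
    # Invert the SNR relation for the range at the detection threshold.
    aperture_area = math.pi * (aperture_diam_m / 2.0) ** 2
    return math.sqrt(radiant_intensity_w_sr * aperture_area *
                     optics_transmission / (snr_required * nep_w))
```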
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad
2015-03-01
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to generating incorrect artificial and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in output that is optimally sparse in space and time, pixel-individual and precisely timed only if new (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
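A minimal sketch of the pixel-individual level-crossing sampling these sensors implement, here simulated in software on a one-dimensional intensity trace (the threshold value and names are illustrative):

```python
def level_crossing_events(signal, times, delta=0.1):
    """Convert one pixel's (log-)intensity trace into ON/OFF events.

    An event is emitted whenever the signal moves `delta` away from the
    level at which the previous event fired -- data are produced only
    when something changes, never at fixed frame times.
    """
    events = []            # list of (time, +1 for ON / -1 for OFF)
    ref = signal[0]        # level at the last emitted event
    for t, s in zip(times, signal):
        while s - ref >= delta:    # large jumps emit several events
            ref += delta
            events.append((t, +1))
        while ref - s >= delta:
            ref -= delta
            events.append((t, -1))
    return events
```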
Visual Image Sensor Organ Replacement: Implementation
NASA Technical Reports Server (NTRS)
Maluf, A. David (Inventor)
2011-01-01
Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
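As a rough illustration of the kind of visual-to-audio parameter mapping the patent describes, a sketch under assumed mappings (the frequency ranges and scalings below are invented for illustration, not taken from the patent):

```python
def region_to_audio(x_norm, y_norm, brightness, wavelength_nm):
    """Map visual-region parameters to audio-signal parameters.

    x_norm, y_norm: region centre in [0, 1] (horizontal, vertical)
    brightness:     region brightness in [0, 1]
    wavelength_nm:  dominant wavelength of the region (400-700 nm)
    Returns (frequency_hz, amplitude, stereo_pan).
    """
    # Vertical position maps to pitch on a log scale (200 Hz - 3.2 kHz),
    # shifted by dominant colour so red and blue regions sound distinct.
    base = 200.0 * 2.0 ** (4.0 * y_norm)
    colour_shift = 1.0 + 0.5 * (700.0 - wavelength_nm) / 300.0
    frequency_hz = base * colour_shift
    amplitude = brightness             # louder means brighter region
    stereo_pan = 2.0 * x_norm - 1.0    # -1 left ... +1 right
    return frequency_hz, amplitude, stereo_pan
```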
Degraded visual environment image/video quality metrics
NASA Astrophysics Data System (ADS)
Baumgartner, Dustin D.; Brown, Jeremy B.; Jacobs, Eddie L.; Schachter, Bruce J.
2014-06-01
A number of image quality metrics (IQMs) and video quality metrics (VQMs) have been proposed in the literature for evaluating techniques and systems for mitigating degraded visual environments. Some require both pristine and corrupted imagery. Others require patterned target boards in the scene. None of these metrics relates well to the task of landing a helicopter in conditions such as a brownout dust cloud. We have developed and used a variety of IQMs and VQMs related to the pilot's ability to detect hazards in the scene and to maintain situational awareness. Some of these metrics can be made agnostic to sensor type. Not only are the metrics suitable for evaluating algorithm and sensor variation, they are also suitable for choosing the most cost effective solution to improve operating conditions in degraded visual environments.
Improving Aviation Safety with information Visualization: A Flight Simulation Study
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.; Hearst, Marti
2005-01-01
Many aircraft accidents each year are caused by encounters with invisible airflow hazards. Recent advances in aviation sensor technology offer the potential for aircraft-based sensors that can gather large amounts of airflow velocity data in real-time. With this influx of data comes the need to study how best to present it to the pilot - a cognitively overloaded user focused on a primary task other than that of information visualization. In this paper, we present the results of a usability study of an airflow hazard visualization system that significantly reduced the crash rate among experienced helicopter pilots flying a high fidelity, aerodynamically realistic fixed-base rotorcraft flight simulator into hazardous conditions. We focus on one particular aviation application, but the results may be relevant to user interfaces in other operationally stressful environments.
Health Monitoring System for Car Seat
NASA Technical Reports Server (NTRS)
Elrod, Susan Vinz (Inventor); Dabney, Richard W. (Inventor)
2004-01-01
A health monitoring system for use with a child car seat has sensors mounted in the seat to monitor one or more health conditions of the seat's occupant. A processor monitors the sensors' signals and generates status signals related to the monitored conditions. A transmitter wirelessly transmits the status signals to a remotely located receiver. A signaling device coupled to the receiver produces at least one sensory (e.g., visual, audible, tactile) output based on the status signals.
Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.
2016-01-01
Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need for or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible, as all EFVS concepts showed departure and landing rollout performance equivalent to, or better than, that of operations flown with a conventional HUD to runways having centerline lighting, without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa
1995-09-01
Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attribute measurement oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture would lead to a higher-performance fusion system. However, there are a number of system and fusion architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g. variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft in the context of the mission requirements and environmental conditions.
An Operational Wake Vortex Sensor Using Pulsed Coherent Lidar
NASA Technical Reports Server (NTRS)
Barker, Ben C., Jr.; Koch, Grady J.; Nguyen, D. Chi
1998-01-01
NASA and FAA initiated a program in 1994 to develop methods of setting spacings for landing aircraft by incorporating information on the real-time behavior of aircraft wake vortices. The current wake separation standards were developed in the 1970's when there was relatively light airport traffic and a logical break point by which to categorize aircraft. Today's continuum of aircraft sizes and increased airport packing densities have created a need for re-evaluation of wake separation standards. The goals of this effort are to ensure that separation standards are adequate for safety and to reduce aircraft spacing for higher airport capacity. Of particular interest are the different requirements for landing under visual flight conditions and instrument flight conditions. Over the years, greater spacings have been established for instrument flight than are allowed for visual flight conditions. Preliminary studies indicate that the airline industry would save considerable money and incur fewer passenger delays if a dynamic spacing system could reduce separations at major hubs during inclement weather to the levels routinely achieved under visual flight conditions. The sensor described herein may become part of this dynamic spacing system known as the "Aircraft VOrtex Spacing System" (AVOSS) that will interface with a future air traffic control system. AVOSS will use vortex behavioral models and short-term weather prediction models in order to predict vortex behavior sufficiently far into the future to allow dynamic separation standards to be generated. The wake vortex sensor will periodically provide data to validate AVOSS predictions. Feasibility of measuring wake vortices using a lidar was first demonstrated with a continuous wave (CW) system from NASA Marshall Space Flight Center, tested at the Volpe National Transportation Systems Center's wake vortex test site at JFK International Airport. Other applications of CW lidar for wake vortex measurement have been made more recently, including a system developed by the MIT Lincoln Laboratory. This lidar has been used for detailed measurements of wake vortex velocities in support of wake vortex model validation. The first measurements of wake vortices using a pulsed lidar were made by Coherent Technologies, Inc. (CTI) using a 2 micron solid-state, flashlamp-pumped system operating at 5 Hz. This system was first deployed at Denver's Stapleton Airport. Pulsed lidar has been selected as the baseline technology for an operational sensor due to its longer range capability.
NASA Technical Reports Server (NTRS)
Doggett, William; Vazquez, Sixto
2000-01-01
A visualization system is being developed out of the need to monitor, interpret, and make decisions based on the information from several thousand sensors during experimental testing to facilitate development and validation of structural health monitoring algorithms. As an added benefit, the system will enable complete real-time sensor assessment of complex test specimens. Complex structural specimens are routinely tested that have hundreds or thousands of sensors. During a test, it is impossible for a single researcher to effectively monitor all the sensors, and consequently interesting phenomena occur that are not recognized until post-test analysis. The ability to detect and alert the researcher to these unexpected phenomena as the test progresses will significantly enhance the understanding and utilization of complex test articles. Utilization is increased by the ability to halt a test when the health monitoring algorithm response is not satisfactory or when an unexpected phenomenon occurs, enabling focused investigation potentially through the installation of additional sensors. Often, if the test continues, structural changes make it impossible to reproduce the conditions that exhibited the phenomena. The prohibitive time and costs associated with fabrication, instrumentation, and subsequent testing of additional test articles generally make it impossible to further investigate the phenomena. A scalable architecture is described to address the complex computational demands of structural health monitoring algorithm development and laboratory experimental test monitoring. The researcher monitors the test using a photographic-quality 3D graphical model with actual sensor locations identified. In addition, researchers can quickly activate plots displaying time or load versus selected sensor response along with the expected values and predefined limits. The architecture has several key features. First, distributed dissimilar computers may be seamlessly integrated into the information flow. Second, virtual sensors may be defined that are complex functions of existing sensors or other virtual sensors. Virtual sensors represent a calculated value not directly measured by a particular physical instrument. They can be used, for example, to represent the maximum difference in a range of sensors or the calculated buckling load based on the current strains. Third, the architecture enables autonomous response to preconceived events, whereby the system can be configured to suspend or abort a test if a failure is detected in the load introduction system. Fourth, the architecture is designed to allow cooperative monitoring and control of the test progression from multiple stations both remote and local to the test system. To illustrate the architecture, a preliminary implementation is described monitoring the Stitched Composite Wing recently tested at LaRC.
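A minimal sketch of the virtual-sensor idea described above, with a derived channel defined as a function of physical channels and resolved recursively (class and channel names are illustrative, not from the system):

```python
class SensorBus:
    """Physical readings plus virtual sensors derived from them."""
    def __init__(self):
        self.readings = {}      # name -> latest physical value
        self.virtual = {}       # name -> (input names, function)

    def define_virtual(self, name, inputs, fn):
        self.virtual[name] = (inputs, fn)

    def value(self, name):
        # Virtual inputs are resolved recursively, so virtual sensors
        # may themselves be functions of other virtual sensors.
        if name in self.readings:
            return self.readings[name]
        inputs, fn = self.virtual[name]
        return fn(*(self.value(k) for k in inputs))

bus = SensorBus()
bus.readings.update({f"strain_{i}": v for i, v in
                     enumerate([102.0, 98.5, 110.2, 95.1])})
# Maximum spread across a range of strain gauges, as in the example
# given in the abstract.
bus.define_virtual("strain_spread",
                   [f"strain_{i}" for i in range(4)],
                   lambda *s: max(s) - min(s))
print(bus.value("strain_spread"))   # ~15.1
```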
Oversampling in virtual visual sensors as a means to recover higher modes of vibration
NASA Astrophysics Data System (ADS)
Shariati, Ali; Schumacher, Thomas
2015-03-01
Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors, however, accessibility, limited measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras, based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end high-frame-rate video cameras, makes it possible to recover all three natural frequencies of a three-story lab-scale structure.
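A minimal sketch of extracting natural frequencies from one virtual visual sensor, i.e., the intensity time history of a fixed pixel patch in the video (windowing and peak-picking details are illustrative; a higher frame rate both raises the Nyquist limit and helps average out the coarse intensity quantization noted above):

```python
import numpy as np

def natural_frequencies(pixel_trace, frame_rate_hz, n_peaks=3):
    """Estimate structural natural frequencies from a pixel intensity
    time history recorded by a fixed video camera."""
    x = np.asarray(pixel_trace, float)
    x -= x.mean()                               # remove the DC offset
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / frame_rate_hz)
    order = np.argsort(spec)[::-1]              # strongest bins first
    peaks = []
    for i in order:
        # keep a bin only if it is not a neighbour of a found peak
        if freqs[i] > 0 and all(abs(freqs[i] - p) > 0.5 for p in peaks):
            peaks.append(float(freqs[i]))
        if len(peaks) == n_peaks:
            break
    return sorted(peaks)
```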
Overseas testing of a multisensor landmine detection system: results and lessons learned
NASA Astrophysics Data System (ADS)
Keranen, Joe G.; Topolosky, Zeke
2009-05-01
The Nemesis detection system has been developed to provide an efficient and reliable unmanned, multi-sensor, ground-based platform to detect and mark landmines. The detection system consists of two detection sensor arrays: a Ground Penetrating Synthetic Aperture Radar (GPSAR) developed by Planning Systems, Inc. (PSI) and an electromagnetic induction (EMI) sensor array developed by Minelab Electronics Pty Limited. Under direction of the Night Vision and Electronic Sensors Directorate (NVESD), overseas testing was performed at Kampong Chhnang Test Center (KCTC), Cambodia, from May 12-30, 2008. Test objectives included: evaluation of detection performance, demonstration of real-time visualization and alarm generation, and evaluation of system operational efficiency. Testing was performed on five sensor test lanes, each consisting of a unique soil mixture, and three off-road lanes which include curves, overgrowth, potholes, and non-uniform lane geometry. In this paper, we outline the test objectives, procedures, results, and lessons learned from overseas testing. We also describe the current state of the system and plans for future enhancements and modifications, including clutter rejection and feature-level fusion.
NASA Astrophysics Data System (ADS)
Arnhardt, C.; Fernandez-Steeger, T. M.; Walter, K.; Kallash, A.; Niemeyer, F.; Azzam, R.; Bill, R.
2007-12-01
The joint project Sensor based Landslide Early Warning System (SLEWS) aims at the systematic development of a prototype alarm and early warning system for the detection of mass movements by application of an ad hoc wireless sensor network (WSN). Next to the development of suitable sensor setups, sensor fusion and network fusion are applied to enhance data quality and reduce false alarm rates. Of special interest are data retrieval, processing and visualization in GI systems. Therefore a suitable service-based Spatial Data Infrastructure (SDI) will be developed with respect to existing and upcoming Open Geospatial Consortium (OGC) standards. The application of a WSN provides a cheap and easy-to-set-up solution for spatial monitoring and data gathering in large areas. Measurement data from different low-cost transducers for deformation observation (acceleration, displacement, tilting) are collected by distributed sensor nodes (motes), which interact separately and connect to each other in a self-organizing manner. Data are collected and aggregated at the beacon (transmission station), where further operations like data pre-processing and compression can be performed. The WSN concept provides, in addition to energy efficiency, miniaturization, real-time monitoring and remote operation, new monitoring strategies such as sensor and network fusion. Since multiple sensors can be integrated at a single mote, either cross-validation or redundant sensor setups are possible to enhance data quality. The planned monitoring and information system will include a mobile infrastructure (information technologies and communication components) as well as methods and models to estimate surface deformation parameters (positioning systems). The measurements result in heterogeneous observation sets that have to be integrated in a common adjustment and filtering approach. Reliable real-time information will be obtained using a range of sensor inputs and algorithms, from which early warnings and prognoses may be derived. Implementation of sensor algorithms is an important task to form the business logic, which will be represented in self-contained web-based processing services (WPS). In the future, different types of sensor networks will be able to communicate via an infrastructure of OGC services in an interoperable way, using standardized protocols such as the Sensor Model Language (SensorML) and the Observations & Measurements schema (O&M). Synchronous and asynchronous information services such as the Sensor Alert Service (SAS) and the Web Notification Service (WNS) will provide defined users and user groups with time-critical readings from the observation site. Techniques using services for visualizing mapping data (WMS), metadata (CSW), vector data (WFS) and raster data (WCS) will range from highly detailed expert-based output to fuzzy graphical warning elements. The expected results will be an advancement over classical alarm and early warning systems, as WSNs are freely scalable, extensible and easy to install.
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
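A hedged sketch of what an adaptive execution module of this kind might look like; the thresholds and criteria below are illustrative guesses, not the policy used in the paper:

```python
def select_odometry(num_tracked_features, gyro_rate_rad_s,
                    budget_ms, last_vio_ms):
    """Pick the pose-estimation path for the next frame.

    Full visual-inertial odometry (VIO) is accurate but slow; the
    optical-flow-based fast visual odometry is cheap but drifts.
    A plausible selection policy (all thresholds are assumptions):
    """
    fast_motion = gyro_rate_rad_s > 1.0       # flow breaks down when fast
    weak_texture = num_tracked_features < 50  # flow needs features too
    over_budget = last_vio_ms > budget_ms     # VIO too slow on this device
    if (fast_motion or weak_texture) and not over_budget:
        return "visual_inertial"              # robust, heavier
    return "optical_flow_fast"                # cheap, keeps frame rate
```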
Service Oriented Architecture for Wireless Sensor Networks in Agriculture
NASA Astrophysics Data System (ADS)
Sawant, S. A.; Adinarayana, J.; Durbha, S. S.; Tripathy, A. K.; Sudharsan, D.
2012-08-01
Rapid advances in Wireless Sensor Networks (WSNs) for agricultural applications have provided a platform for better decision making in crop planning and management, particularly in precision agriculture. Due to the ever-increasing spread of WSNs, there is a need for standards, i.e. a set of specifications and encodings to bring multiple sensor networks onto a common platform. Distributed sensor systems, when brought together, can facilitate better decision making in the agricultural domain. The Open Geospatial Consortium (OGC), through Sensor Web Enablement (SWE), provides guidelines for semantic and syntactic standardization of sensor networks. In this work, two distributed sensing systems (Agrisens and FieldServer) were selected to implement OGC SWE standards through a Service Oriented Architecture (SOA) approach. Online interoperable data processing was developed through SWE components such as the Sensor Model Language (SensorML) and the Sensor Observation Service (SOS). An integrated web client was developed to visualize the sensor observations and measurements, enabling retrieval of crop water availability and requirements in a systematic manner for both sensing devices. Further, the client can also operate in an interoperable manner with any other OGC-standardized WSN system. The study of WSN systems has shown that there is a need to augment the operations/processing capabilities of SOS in order to reason about collected sensor data and implement modelling services. Also, given the expected very low-cost availability of WSN systems in the future, it will be possible to implement the OGC-standardized SWE framework for agricultural applications with open source software tools.
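For context, a sketch of the kind of key-value-pair GetObservation request an OGC SOS 2.0 client issues; the endpoint URL and the offering/property identifiers are placeholders, not the actual Agrisens or FieldServer services:

```python
from urllib.parse import urlencode

# Illustrative KVP GetObservation request against an SOS 2.0 endpoint.
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "AgrisensSoilMoisture",       # hypothetical offering id
    "observedProperty": "soil_moisture",      # hypothetical property id
    "temporalFilter": ("om:phenomenonTime,"
                       "2012-06-01T00:00:00Z/2012-06-02T00:00:00Z"),
}
url = "http://example.org/sos?" + urlencode(params)
print(url)   # the client would GET this and parse the O&M response
```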
Integrated Collision Avoidance System for Air Vehicle
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2013-01-01
Collision with ground/water/terrain and midair obstacles is one of the common causes of severe aircraft accidents. The various data from the coremicro AHRS/INS/GPS Integration Unit, terrain database, and object detection sensors are processed to produce audio/visual collision warning messages and to achieve detection and avoidance of terrain and obstacles through generation of guidance commands in a closed-loop system. The vision sensors provide additional information for the Integrated System, such as terrain recognition and ranging of terrain and obstacles, which plays an important role in the improvement of the Integrated Collision Avoidance System.
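A minimal sketch of a closed-loop-style collision check of the general kind described, reduced to time-to-impact tests against a warning horizon (thresholds and names are illustrative, not from the patent):

```python
def collision_warnings(altitude_agl_m, descent_rate_ms,
                       ground_speed_ms, dist_to_obstacle_m,
                       warn_horizon_s=30.0):
    """Compare time-to-impact against a warning horizon for both the
    terrain below and a detected obstacle ahead."""
    alerts = []
    if descent_rate_ms > 0:
        t_terrain = altitude_agl_m / descent_rate_ms
        if t_terrain < warn_horizon_s:
            alerts.append(("PULL UP", t_terrain))
    if ground_speed_ms > 0:
        t_obstacle = dist_to_obstacle_m / ground_speed_ms
        if t_obstacle < warn_horizon_s:
            alerts.append(("OBSTACLE AHEAD", t_obstacle))
    return alerts   # empty list: no audio/visual message this cycle
```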
UAV-borne X-band radar for MAV collision avoidance
NASA Astrophysics Data System (ADS)
Moses, Allistair A.; Rutherford, Matthew J.; Kontitsis, Michail; Valavanis, Kimon P.
2011-05-01
Increased use of Miniature (Unmanned) Aerial Vehicles (MAVs) is coincidentally accompanied by a notable lack of sensors suitable for enabling further increases in levels of autonomy and, consequently, integration into the National Airspace System (NAS). The majority of available sensors suitable for MAV integration are based on infrared detectors, focal plane arrays, optical and ultrasonic rangefinders, etc. These sensors are generally not able to detect or identify other MAV-sized targets and, when detection is possible, considerable computational power is typically required for successful identification. Furthermore, performance of visual-range optical sensor systems can suffer greatly when operating in the conditions typically encountered during search and rescue, surveillance, combat, and most common MAV applications. However, the addition of a miniature radar system can, in concert with other sensors, provide comprehensive target detection and identification capabilities for MAVs. This trend is observed in manned aviation, where radar systems are the primary detection and identification sensor system. Within this document a miniature, lightweight X-band radar system for use on a miniature (710 mm rotor diameter) rotorcraft is described. We present analyses of the performance of the system in a realistic scenario with two MAVs. Additionally, an analysis of MAV navigation and collision avoidance behaviors is performed to determine the effect of integrating radar systems into MAV-class vehicles.
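A sketch of the classic single-pulse radar range equation that governs whether a small-RCS, MAV-sized target is detectable; the example parameter values are placeholders, not the described system's specification:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def radar_snr(pt_w, gain, wavelength_m, rcs_m2, range_m,
              bandwidth_hz, noise_factor=2.0, losses=1.0,
              system_temp_k=290.0):
    """Single-pulse SNR from the radar range equation:
    SNR = Pt G^2 lambda^2 sigma / ((4 pi)^3 R^4 k T B F L).
    noise_factor and losses are linear ratios, not dB.
    """
    signal = pt_w * gain**2 * wavelength_m**2 * rcs_m2
    noise = ((4.0 * math.pi) ** 3 * range_m**4 * K_BOLTZMANN *
             system_temp_k * bandwidth_hz * noise_factor * losses)
    return signal / noise

# e.g. a 1 W X-band (3 cm) radar with 20 dB antenna gain against a
# 0.01 m^2 target at 100 m -- illustrative numbers only.
snr_100m = radar_snr(1.0, 100.0, 0.03, 0.01, 100.0, 1e6)
```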
Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base
NASA Astrophysics Data System (ADS)
Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu
2018-01-01
To determine, with a MEMS (Micro-Electro-Mechanical Systems) gyroscope, the relative attitude between an object on a moving base and the base reference system, the motion of the base is superfluous information that must be removed from the gyroscope output. Our strategy is to add an auxiliary gyroscope attached to the reference system. The master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. By a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the cumulative drift of the MEMS gyroscope, the vision and dual MEMS gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are executed in order to fuse inertial and visual data from different coordinate systems. A nonlinear filter algorithm, the cubature Kalman filter, is used to fuse slow visual data and fast inertial data. A practical experimental setup was built and used to validate the feasibility and effectiveness of our proposed attitude determination system in the non-inertial frame on a moving base.
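A minimal sketch of the generalized-difference step under simplifying assumptions (frames as noted in the comments; the full system propagates a quaternion and fuses the vision fix in a cubature Kalman filter, which is omitted here):

```python
import numpy as np

def relative_rate(omega_master, omega_base, r_rel):
    """Generalized-difference step for dual gyroscopes.

    omega_master: master gyro rate in the object frame, rad/s, shape (3,)
    omega_base:   auxiliary gyro rate in the moving-base frame, rad/s
    r_rel:        3x3 rotation taking the base frame to the object frame
    The base's motion, re-expressed in the object frame, is subtracted
    so only the object-relative-to-base rotation rate remains.
    """
    return omega_master - r_rel @ omega_base

def integrate_angles(theta, omega_rel, dt):
    # Small-angle Euler integration of the relative attitude; purely
    # illustrative -- drift grows until a visual fix corrects it.
    return theta + omega_rel * dt
```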
Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile
NASA Astrophysics Data System (ADS)
Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.
2000-06-01
The purpose of this experiment was to quantitatively measure driver performance in detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for an enhanced vision system inside the driver compartment. The visible and IR road imagery obtained was displayed on a large screen and on a PC monitor, and subject response times were recorded. From the response times, detection probabilities were computed relative to the known time of occurrence of each driving hazard. The goal was to see which combinations of sensor, contrast and noise enable subjects to achieve a higher detection probability of potential driving hazards.
Visualization of stress wave propagation via air-coupled acoustic emission sensors
NASA Astrophysics Data System (ADS)
Rivey, Joshua C.; Lee, Gil-Yong; Yang, Jinkyu; Kim, Youngkey; Kim, Sungchan
2017-02-01
We experimentally demonstrate the feasibility of visualizing stress waves propagating in plates using air-coupled acoustic emission sensors. Specifically, we employ a device that embeds arrays of microphones around an optical lens in a helical pattern. By implementing a beamforming technique, this remote sensing system allows us to record wave propagation events in situ via a single-shot, full-field measurement. This is a significant improvement over conventional wave propagation tracking approaches based on laser Doppler vibrometry or digital image correlation techniques. In this paper, we focus on demonstrating the feasibility and efficacy of this air-coupled acoustic emission technique using large metallic plates exposed to external impacts. Visualization results of stress wave propagation are shown under various impact scenarios. The proposed technique can be used to characterize and localize damage by detecting the attenuation, reflection, and scattering of stress waves that occur at damage locations. This can ultimately lead to the development of new structural health monitoring and nondestructive evaluation methods for identifying hidden cracks or delaminations in metallic or composite plate structures, while negating the need for mounted contact sensors.
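A minimal delay-and-sum beamforming sketch of the kind used by such microphone-array devices, scanning candidate source points on the plate (array geometry, sample rate, and names are illustrative):

```python
import numpy as np

def delay_and_sum(mic_signals, mic_xyz, grid_xyz, fs, c=343.0):
    """Delay-and-sum beamforming over candidate source points.

    mic_signals: (n_mics, n_samples) time records
    mic_xyz:     (n_mics, 3) microphone positions in metres
    grid_xyz:    (n_points, 3) scan grid on the plate surface
    Returns the beamformer power at each grid point; the maximum
    marks the apparent acoustic source.
    """
    n_mics, n_samp = mic_signals.shape
    power = np.zeros(len(grid_xyz))
    for gi, p in enumerate(grid_xyz):
        delays = np.linalg.norm(mic_xyz - p, axis=1) / c   # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        total = np.zeros(n_samp)
        for m in range(n_mics):    # advance each record by its delay
            s = shifts[m]
            total[:n_samp - s] += mic_signals[m, s:]
        power[gi] = np.mean((total / n_mics) ** 2)
    return power
```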
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay
2014-05-01
In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address the problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines form the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. Each sensor sends its data to a central server via either the GSM network or satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data are harvested in batches over a one-hour time frame. The program uses linear interpolation to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems, and as more sensors are installed, precision improves. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013, when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the Marikina River, local officials used this information and determined that the river would overflow in a few hours, giving them critical lead time to evacuate residents along the floodplain; no casualties were reported after the event.
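A minimal sketch of interpolating scattered gauge readings onto a map grid; here SciPy's triangulation-based linear interpolant stands in for the portal's implementation (grid and variable names are illustrative):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def rainfall_surface(station_lonlat, rain_mm, grid_lon, grid_lat):
    """Interpolate point rain-gauge readings onto a map grid.

    station_lonlat: (n_stations, 2) longitude/latitude pairs
    rain_mm:        (n_stations,) accumulated rainfall per station
    Linear interpolation over a triangulation of the stations keeps
    the processing fast enough for a near real-time web map; cells
    outside the station hull are returned as NaN (no data).
    """
    interp = LinearNDInterpolator(station_lonlat, rain_mm)
    lon2d, lat2d = np.meshgrid(grid_lon, grid_lat)
    return interp(lon2d, lat2d)
```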
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At a 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
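A simplified, first-order sketch of a global-shift estimate in the spirit of I2A, solving a 2x2 least-squares system from image gradients (this is an illustrative variant, not the exact algorithm run on the DSP):

```python
import numpy as np

def global_shift(prev, curr):
    """Estimate the global sub-pixel translation between two frames.

    Models the new frame as the old frame displaced along its spatial
    gradients and solves the resulting 2x2 least-squares system -- a
    handful of multiply-accumulates per pixel, which is what makes
    kHz-rate operation on a small DSP plausible.
    """
    prev = np.asarray(prev, float)
    curr = np.asarray(curr, float)
    # Central-difference gradients (np.roll wraps edges; acceptable
    # for a sketch, a real implementation would crop the border).
    gx = 0.5 * (np.roll(prev, -1, axis=1) - np.roll(prev, 1, axis=1))
    gy = 0.5 * (np.roll(prev, -1, axis=0) - np.roll(prev, 1, axis=0))
    dt = curr - prev
    a = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * dt), np.sum(gy * dt)])
    sx, sy = np.linalg.solve(a, b)
    return sx, sy    # pixels of global translation between the frames
```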
Noise Source Visualization Using a Digital Voice Recorder and Low-Cost Sensors
Cho, Yong Thung
2018-01-01
Accurate sound visualization of noise sources is required for optimal noise control. Typically, noise measurement systems require microphones, an analog-digital converter, cables, a data acquisition system, etc., which may not be affordable for potential users. Also, many such systems are not highly portable and may not be convenient for travel. Handheld personal electronic devices such as smartphones and digital voice recorders, with relatively low cost and high performance, have become widely available recently. Even though such devices are highly portable, directly using them for noise measurement may lead to erroneous results, since they were originally designed for voice recording. In this study, external microphones were connected to a digital voice recorder to conduct measurements, and the recorded input was processed for noise visualization. In this way, a low-cost, compact sound visualization system was designed and introduced to visualize, for verification, two actual noise sources with different characteristics: an enclosed loudspeaker and a small air compressor. Reasonable accuracy of noise visualization for these two sources was shown over a relatively wide frequency range. This very affordable and compact sound visualization system can be used for many actual noise visualization applications in addition to educational purposes. PMID:29614038
Results and conclusions: perception sensor study for high speed autonomous operations
NASA Astrophysics Data System (ADS)
Schneider, Anne; LaCelle, Zachary; Lacaze, Alberto; Murphy, Karl; Close, Ryan
2016-05-01
Previous research has presented work on sensor requirements, specifications, and testing to evaluate the feasibility of increasing autonomous vehicle system speeds. Discussions included the theoretical background for determining sensor requirements and the basic test setup and evaluation criteria for comparing existing and prototype sensor designs. This paper presents and discusses the continuation of this work. In particular, this paper analyzes the problem via a real-world comparison of test results across sensor technologies, as opposed to previous work that took a more theoretical approach. LADAR/LIDAR, radar, visual, and infrared sensors are considered in this research. Results are evaluated against the theoretical, desired perception specifications. Conclusions on utilizing a suite of perception sensors to achieve the goal of doubling ground vehicle speeds are also discussed.
A Solar Position Sensor Based on Image Vision.
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Acuña, Alexis; Rosales, Pedro; Suastegui, José
2017-07-29
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface, and solar tracking systems are used to achieve this despite the relative movement of the Sun. There are therefore rules and standards that require a minimum accuracy from the tracking systems used in the evaluation of solar collectors. Achieving this accuracy is not an easy job; hence, this document presents the design, construction and characterization of a sensor based on a vision system that finds the relative azimuth and elevation error with respect to the solar surface of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the Sun's rays as well as the tilt and position of the sensor. The sensor's characterization showed that a measurement of focus error or Sun position can be made with an accuracy of 0.0426° and an uncertainty of 0.986%, and the design can be modified to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the vision-based solar tracking sensor meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool in photovoltaic installations and solar collectors.
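A minimal sketch of recovering the angular pointing error from the Sun's image centroid through the pinhole relation theta = atan(offset / focal length); the threshold and calibration parameters are illustrative:

```python
import math
import numpy as np

def sun_angle_error(image, pixel_pitch_mm, focal_len_mm):
    """Angular pointing error of a Sun image on a vision sensor.

    Thresholds the frame, takes the intensity-weighted centroid of the
    bright Sun spot, and converts its offset from the optical centre
    to azimuth and elevation errors in degrees.
    """
    img = np.asarray(image, float)
    mask = img > 0.8 * img.max()            # keep the bright disc only
    ys, xs = np.nonzero(mask)
    w = img[ys, xs]
    cx = np.average(xs, weights=w) - (img.shape[1] - 1) / 2.0
    cy = np.average(ys, weights=w) - (img.shape[0] - 1) / 2.0
    az_err = math.degrees(math.atan2(cx * pixel_pitch_mm, focal_len_mm))
    el_err = math.degrees(math.atan2(cy * pixel_pitch_mm, focal_len_mm))
    return az_err, el_err
```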
An Intelligent Cooperative Visual Sensor Network for Urban Mobility
Leone, Giuseppe Riccardo; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea
2017-01-01
Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to the mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a real-time, neat picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logic for analyzing urban traffic in real time. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to a specially designed Internet of Things (IoT)-compliant middleware which encompasses in-network event composition as well as full support of the Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results for each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests, which proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities. PMID:29125535
An Intelligent Cooperative Visual Sensor Network for Urban Mobility.
Leone, Giuseppe Riccardo; Moroni, Davide; Pieri, Gabriele; Petracca, Matteo; Salvetti, Ovidio; Azzarà, Andrea; Marino, Francesco
2017-11-10
Smart cities are demanding solutions for improved traffic efficiency, in order to guarantee optimal access to the mobility resources available in urban areas. Intelligent video analytics deployed directly on board embedded sensors offers great opportunities to gather highly informative data about traffic and transport, allowing reconstruction of a real-time, neat picture of urban mobility patterns. In this paper, we present a visual sensor network in which each node embeds computer vision logic for analyzing urban traffic in real time. The nodes in the network share their perceptions and build a global and comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion. This is possible thanks to a specially designed Internet of Things (IoT)-compliant middleware which encompasses in-network event composition as well as full support of the Machine-2-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility connected to the estimation of vehicular flows and parking management. Besides providing detailed results for each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests, which proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities.
Measurement of beam profiles by terahertz sensor card with cholesteric liquid crystals.
Tadokoro, Yuzuru; Nishikawa, Tomohiro; Kang, Boyoung; Takano, Keisuke; Hangyo, Masanori; Nakajima, Makoto
2015-10-01
We demonstrate a sensor card with cholesteric liquid crystals (CLCs) for terahertz (THz) waves generated from a nonlinear crystal pumped by a table-top laser. A beam profile of the THz waves is successfully visualized as a color change by the sensor card, without additional electronic devices, power supplies, or connecting cables. Above a power density of 4.3 mW/cm², the approximate beam diameter of the THz waves is measured using a hue image digitized from a photograph of the sensor card. The sensor card is low in cost, portable, and suitable for various situations such as THz imaging and the alignment of THz systems.
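A plausible reconstruction of the hue-based measurement is sketched below: pixels whose hue departs sufficiently from that of the unexposed card are counted as illuminated, and the diameter of a circle of equal area is reported. The hue threshold, reference hue and millimetre-per-pixel scale are assumptions, not values from the paper.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def beam_diameter_mm(rgb_image, hue_ref, hue_shift_min, mm_per_pixel):
    """Estimate the THz beam diameter from a photo of the CLC sensor card."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    hue = hsv[..., 0]
    # circular distance from the unexposed-card reference hue (range 0..0.5)
    d = np.minimum(np.abs(hue - hue_ref), 1.0 - np.abs(hue - hue_ref))
    area_px = np.count_nonzero(d > hue_shift_min)
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_pixel   # equal-area circle
```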
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
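The pixel counts quoted above can be checked with a back-of-envelope calculation. Assuming one display pixel per arcminute (roughly the detail limit of 20/20 acuity), the required pixel count is the covered solid angle in square degrees multiplied by 3600 arcmin² per deg². The sketch below reproduces the ~150 MP full-sphere (CAVE) figure and shows the field size implied by the 43 MP head-mounted figure; the sampling assumption is ours, not the authors'.

```python
import math

ARCMIN2_PER_DEG2 = 60 * 60   # one pixel per arcminute (assumed sampling)

def megapixels(solid_angle_deg2):
    return solid_angle_deg2 * ARCMIN2_PER_DEG2 / 1e6

full_sphere_deg2 = 4 * math.pi * (180 / math.pi) ** 2    # ~41,253 deg^2
print(megapixels(full_sphere_deg2))   # ~148.5 MP, close to the ~150 MP CAVE figure
print(43e6 / ARCMIN2_PER_DEG2)        # ~11,944 deg^2 of visual field implied by 43 MP
```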
NASA Technical Reports Server (NTRS)
Pavel, M.
1993-01-01
The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.
Sensor supported pilot assistance for helicopter flight in DVE
NASA Astrophysics Data System (ADS)
Waanders, Tim; Münsterer, T.; Kress, M.
2013-05-01
Helicopter operations at low altitude are to this day only performed under VFR conditions, in which safe piloting of the aircraft relies on the pilot's visual perception of the outside environment. However, there are situations in which a deterioration of visibility conditions may cause the pilot to lose important visual cues, thereby increasing workload and compromising flight safety and mission effectiveness. This paper reports on a pilot assistance system for all phases of flight which is intended to:
• Provide navigational support and mission management
• Support landings/take-offs in unknown environment and in DVE
• Enhance situational awareness in DVE
• Provide obstacle and terrain surface detection and warning
• Provide upload, sensor based update and download of database information for debriefing and later missions.
The system comprises a digital terrain and obstacle database, tactical information and flight plan management, combined with an active 3D sensor enabling the above-mentioned functionalities. To support pilots during operations in DVE, intuitive 3D/2D cueing through both head-up and head-down means is proposed to retain situational awareness. This paper further describes the system concept and elaborates on the results of simulator trials in which the functionality was evaluated by operational pilots in realistic and demanding scenarios, such as a SAR mission performed in a mountainous area under different visual conditions. The objective of the simulator trials was to evaluate the functional integration and HMI definition for the NH90 Tactical Transport Helicopter.
Measurement and Control System Based on Wireless Sensor Network for Granary
NASA Astrophysics Data System (ADS)
Song, Jian
A wireless measurement and control system for granaries is developed to overcome the shortcomings of wired measurement and control systems, such as complex wiring and poor anti-interference capacity. In this system, Zigbee technology is applied using TI's Zigbee protocol stack development platform, and a wireless sensor network is used to collect and control the temperature and the humidity. It is composed of a host PC, a central control node based on the CC2530, sensor nodes, sensor modules and the executive device. The wireless sensor nodes are programmed in C in the IAR Embedded Workbench for MCS-51 evaluation environment. The host PC control software is developed on the Visual C++ 6.0 platform. Experiments show that data transmission in the system is accurate and reliable, and that the temperature and humidity errors are below 2%, meeting the functional requirements for a granary measurement and control system.
Design, Control and in Situ Visualization of Gas Nitriding Processes
Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy
2010-01-01
The article presents a complex system for the design, in situ visualization and control of a commonly used surface treatment process: the gas nitriding process. The computer-aided design concept uses analytical mathematical models and artificial intelligence methods. As a result, the system enables poly-optimization and poly-parametric simulation of the course of the process, combined with visualization of the changes in the process parameters as a function of time, as well as prediction of the properties of nitrided layers. For in situ visualization of the growth of the nitrided layer, computer procedures were developed that correlate the direct and differential voltage-time curves of the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to link, during the process, the registered voltage-time curves with the models of the process. PMID:22315536
Falcon: A Temporal Visual Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A.
2016-09-05
Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives, providing overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.
Characterize Aerosols from MODIS/MISR/OMI/MERRA-2: Dynamic Image Browse Perspective
NASA Astrophysics Data System (ADS)
Wei, J. C.; Yang, W.; Shen, S.; Zhao, P.; Albayrak, A.; Johnson, J. E.; Kempler, S. J.; Pham, L.
2016-12-01
Among the known atmospheric constituents, aerosols still represent the greatest uncertainty in climate research. Understanding this uncertainty requires bringing together observational (in-situ and remote sensing) and modeling datasets and inter-comparing them synergistically for a wide variety of applications, which can bring far-reaching benefits to the science community and the broader society. These benefits can best be achieved if these earth science data (satellite and modeling) are well utilized and interpreted. Unfortunately, this is not always the case, despite the abundance and relative maturity of the numerous satellite-borne sensors that routinely measure aerosols. There is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has developed multiple MAPSS (Multi-sensor Aerosol Products Sampling System) applications as part of the Giovanni (Geospatial Interactive Online Visualization and Analysis Interface) data visualization and analysis tool since 2007. The MAPSS database provides spatio-temporal statistics for multiple spaceborne Level 2 aerosol products (MODIS Terra, MODIS Aqua, MISR, POLDER, OMI, CALIOP, SeaWiFS Deep Blue, and VIIRS) sampled over AERONET ground stations. In this presentation, I will demonstrate a new visualization service (NASA Level 2 Data Quality Visualization, DQViz) supporting various visualization and data access capabilities for satellite Level 2 products (MODIS/MISR/OMI) and long-term assimilated aerosols from the NASA Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), displayed at their native retrieved spatial resolution. Functionality will include selecting data sources (e.g., multiple parameters under the same measurement), defining area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting and reformatting.
User-interactive electronic skin for instantaneous pressure visualization
NASA Astrophysics Data System (ADS)
Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali
2013-10-01
Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components—thin-film transistor, pressure sensor and OLED arrays—are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.
User-interactive electronic skin for instantaneous pressure visualization.
Wang, Chuan; Hwang, David; Yu, Zhibin; Takei, Kuniharu; Park, Junwoo; Chen, Teresa; Ma, Biwu; Javey, Ali
2013-10-01
Electronic skin (e-skin) presents a network of mechanically flexible sensors that can conformally wrap irregular surfaces and spatially map and quantify various stimuli. Previous works on e-skin have focused on the optimization of pressure sensors interfaced with an electronic readout, whereas user interfaces based on a human-readable output were not explored. Here, we report the first user-interactive e-skin that not only spatially maps the applied pressure but also provides an instantaneous visual response through a built-in active-matrix organic light-emitting diode display with red, green and blue pixels. In this system, organic light-emitting diodes (OLEDs) are turned on locally where the surface is touched, and the intensity of the emitted light quantifies the magnitude of the applied pressure. This work represents a system-on-plastic demonstration where three distinct electronic components--thin-film transistor, pressure sensor and OLED arrays--are monolithically integrated over large areas on a single plastic substrate. The reported e-skin may find a wide range of applications in interactive input/control devices, smart wallpapers, robotics and medical/health monitoring devices.
Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging
NASA Astrophysics Data System (ADS)
Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi
2018-02-01
We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences of the PA waves detected by these four sensors, a set of PA signals originating only from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan-direction-dependent variation of PA signals was about ± 20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 μm formed in an acrylic block co-polymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venules in the skin. Vasculatures in rat burn models and in healthy human skin were also clearly visualized in vivo.
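The on-axis selection step can be pictured with the sketch below, which accepts a photoacoustic signal only when its arrival times at the four quadrant sensors agree within a tolerance; the cross-correlation delay estimate and the tolerance are assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def on_axis_signal(waveforms, fs, tol_s):
    """waveforms: (4, N) array of PA signals; fs: sampling rate in Hz;
    tol_s: allowed arrival-time spread in seconds (assumed)."""
    ref = waveforms[0]
    lags = []
    for w in waveforms[1:]:
        c = np.correlate(w, ref, mode="full")
        lags.append((np.argmax(c) - (len(ref) - 1)) / fs)  # delay vs. sensor 0
    if np.max(np.abs(lags)) <= tol_s:
        return waveforms.mean(axis=0)   # near-identical phases: on-axis source
    return None                         # phase mismatch: off-axis, reject
```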
NASA Astrophysics Data System (ADS)
Feeley, J.; Zajic, J.; Metcalf, A.; Baucom, T.
2009-12-01
The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is planning post-launch activities to calibrate the NPP sensors and validate Sensor Data Records (SDRs). The IPO has developed a web-based data collection and visualization tool in order to effectively collect, coordinate, and manage the calibration and validation tasks for the OMPS, ATMS, CrIS, and VIIRS instruments. This tool is accessible to the multi-institutional Cal/Val teams consisting of the Prime Contractor and Government Cal/Val leads along with the NASA NPP Mission team, and is used for mission planning and identification/resolution of conflicts between sensor activities. Visualization techniques aid in displaying task dependencies, including prerequisites and exit criteria, allowing for the identification of a critical path. This presentation will highlight how the information is collected, displayed, and used to coordinate the diverse instrument calibration/validation teams.
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
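The figure of merit described above can be illustrated with a toy calculation: express each parameter as a human-to-camera ratio in orders of magnitude (flipping the ratio where smaller is better) and report the worst case. The parameter values below are hypothetical placeholders, not measurements from the study.

```python
import math

def gap_orders(human, camera, smaller_is_better=False):
    """log10 of the human/camera performance ratio for one parameter."""
    ratio = camera / human if smaller_is_better else human / camera
    return math.log10(ratio)

gaps = {
    # 120 dB vs 70 dB dynamic range, expressed as linear amplitude ratios
    "dynamic range": gap_orders(10 ** (120 / 20), 10 ** (70 / 20)),
    # dark limit in cd/m^2: lower is better
    "dark limit": gap_orders(1e-6, 1e-2, smaller_is_better=True),
}
worst = max(gaps, key=gaps.get)
print(worst, gaps[worst])   # figure of merit = gap of the weakest parameter
```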
Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng
2018-02-02
In recent years, many studies have focused on the application of advanced technology to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates databases and geometry into a digital model that provides visualization across the whole construction lifecycle. This paper integrates BIM and WSN into a unique system which enables a construction site to visually monitor safety status via a spatial, colored interface and to remove any hazardous gas automatically. Wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental conditions (temperature and humidity); in any region where an abnormal status is detected, the BIM model highlights the region, and an on-site alarm and ventilator start automatically to give warning and remove the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions, and suggestions are summarized for further applications.
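A minimal sketch of the monitoring loop described above follows; the gas and temperature thresholds, the node record fields, and the BIM and actuator interfaces are all assumptions.

```python
# Hypothetical thresholds for a hazardous-gas alarm.
CO_LIMIT_PPM, TEMP_LIMIT_C = 35.0, 45.0

def check_node(reading, bim_model, alarm, ventilator):
    """reading: one WSN node report, e.g.
    {'zone': 'B2-East', 'co_ppm': 40.2, 'temp_c': 29.1}"""
    hazardous = (reading["co_ppm"] > CO_LIMIT_PPM
                 or reading["temp_c"] > TEMP_LIMIT_C)
    if hazardous:
        bim_model.color_zone(reading["zone"], "red")   # spatial, colored alert
        alarm.on(reading["zone"])                      # warn workers on site
        ventilator.on(reading["zone"])                 # remove the hazardous gas
    else:
        bim_model.color_zone(reading["zone"], "green")
    return hazardous
```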
SmartPort: A Platform for Sensor Data Monitoring in a Seaport Based on FIWARE
Fernández, Pablo; Santana, José Miguel; Ortega, Sebastián; Trujillo, Agustín; Suárez, José Pablo; Domínguez, Conrado; Santana, Jaisiel; Sánchez, Alejandro
2016-01-01
Seaport monitoring and management is a significant research area, in which infrastructure automatically collects big data sets that lead the organization in its multiple activities. Thus, this problem is heavily related to the fields of data acquisition, transfer, storage, big data analysis and information visualization. Las Palmas de Gran Canaria port is a good example of how a seaport generates big data volumes through a network of sensors. They are placed on meteorological stations and maritime buoys, registering environmental parameters. Likewise, the Automatic Identification System (AIS) registers several dynamic parameters about the tracked vessels. However, such an amount of data is useless without a system that enables a meaningful visualization and helps make decisions. In this work, we present SmartPort, a platform that offers a distributed architecture for the collection of the port sensors’ data and a rich Internet application that allows the user to explore the geolocated data. The presented SmartPort tool is a representative, promising and inspiring approach to manage and develop a smart system. It covers a demanding need for big data analysis and visualization utilities for managing complex infrastructures, such as a seaport. PMID:27011192
mHealth Visual Discovery Dashboard.
Fang, Dezhi; Hohman, Fred; Polack, Peter; Sarker, Hillol; Kahng, Minsuk; Sharmin, Moushumi; al'Absi, Mustafa; Chau, Duen Horng
2017-09-01
We present Discovery Dashboard, a visual analytics system for exploring large volumes of time series data from mobile medical field studies. Discovery Dashboard offers interactive exploration tools and a data mining motif discovery algorithm to help researchers formulate hypotheses, discover trends and patterns, and ultimately gain a deeper understanding of their data. Discovery Dashboard emphasizes user freedom and flexibility during the data exploration process and enables researchers to do things previously challenging or impossible to do - in the web-browser and in real time. We demonstrate our system visualizing data from a mobile sensor study conducted at the University of Minnesota that included 52 participants who were trying to quit smoking.
mHealth Visual Discovery Dashboard
Fang, Dezhi; Hohman, Fred; Polack, Peter; Sarker, Hillol; Kahng, Minsuk; Sharmin, Moushumi; al'Absi, Mustafa; Chau, Duen Horng
2018-01-01
We present Discovery Dashboard, a visual analytics system for exploring large volumes of time series data from mobile medical field studies. Discovery Dashboard offers interactive exploration tools and a data mining motif discovery algorithm to help researchers formulate hypotheses, discover trends and patterns, and ultimately gain a deeper understanding of their data. Discovery Dashboard emphasizes user freedom and flexibility during the data exploration process and enables researchers to do things previously challenging or impossible to do — in the web-browser and in real time. We demonstrate our system visualizing data from a mobile sensor study conducted at the University of Minnesota that included 52 participants who were trying to quit smoking. PMID:29354812
Visual-perceptual mismatch in robotic surgery.
Abiri, Ahmad; Tao, Anna; LaRocca, Meg; Guan, Xingmin; Askari, Syed J; Bisley, James W; Dutson, Erik P; Grundfest, Warren S
2017-08-01
The principal objective of the experiment was to analyze the effects of the clutch operation of robotic surgical systems on the performance of the operator. The relative coordinate system introduced by the clutch operation can introduce a visual-perceptual mismatch which can potentially have negative impact on a surgeon's performance. We also assess the impact of the introduction of additional tactile sensory information on reducing the impact of visual-perceptual mismatch on the performance of the operator. We asked 45 novice subjects to complete peg transfers using the da Vinci IS 1200 system with grasper-mounted, normal force sensors. The task involves picking up a peg with one of the robotic arms, passing it to the other arm, and then placing it on the opposite side of the view. Subjects were divided into three groups: aligned group (no mismatch), the misaligned group (10 cm z axis mismatch), and the haptics-misaligned group (haptic feedback and z axis mismatch). Each subject performed the task five times, during which the grip force, time of completion, and number of faults were recorded. Compared to the subjects that performed the tasks using a properly aligned controller/arm configuration, subjects with a single-axis misalignment showed significantly more peg drops (p = 0.011) and longer time to completion (p < 0.001). Additionally, it was observed that addition of tactile feedback helps reduce the negative effects of visual-perceptual mismatch in some cases. Grip force data recorded from grasper-mounted sensors showed no difference between the different groups. The visual-perceptual mismatch created by the misalignment of the robotic controls relative to the robotic arms has a negative impact on the operator of a robotic surgical system. Introduction of other sensory information and haptic feedback systems can help in potentially reducing this effect.
A Solar Position Sensor Based on Image Vision
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Rosales, Pedro; Suastegui, José
2017-01-01
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface; to achieve this despite the relative movement of the Sun, solar tracking systems are used, and rules and standards therefore require a minimum accuracy of these tracking systems when they are used in solar collectors' evaluation. Achieving such accuracy is not easy, so this document presents the design, construction and characterization of a sensor, based on a visual system, that measures the relative azimuth and elevation error with respect to the solar surface of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the sunrays' direction as well as the tilt and position of the sensor. The sensor's characterization showed that a focus error or Sun position measurement can be made with an accuracy of 0.0426° and an uncertainty of 0.986%, and that the design can be modified to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. In conclusion, the solar tracking sensor based on a vision system meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool in photovoltaic installations and solar collectors. PMID:28758935
Interpretation of remotely sensed data and its applications in oceanography
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Tanaka, K.; Inostroza, H. M.; Verdesio, J. J.
1982-01-01
The methodology of interpretation of remote sensing data and its oceanographic applications are described. The elements of image interpretation for different types of sensors are discussed. The sensors utilized are the multispectral scanner of LANDSAT, and the thermal infrared of NOAA and geostationary satellites. Visual and automatic data interpretation in studies of pollution, the Brazil current system, and upwelling along the southeastern Brazilian coast are compared.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error, based on the kinematic model of the robot, have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor; the global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor; the three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with the single-camera method and 0.031 mm with the dual-camera method. The conclusion is that the single-camera algorithm needs improvement to reach higher accuracy, whereas the accuracy of the dual-camera method is suitable for application.
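One standard way to realize the described sensor-to-global transformation is a least-squares rigid-body fit (the Kabsch/Procrustes method) to the measured control points; the sketch below is offered under that assumption and is not necessarily the authors' exact algorithm.

```python
import numpy as np

def rigid_transform(P, Q):
    """Rotation R and translation t minimizing ||R @ P[i] + t - Q[i]||.
    P: (N, 3) control points in the sensor frame; Q: the same points in
    the global frame, with N >= 3 non-collinear points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```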
Novel compact panomorph lens based vision system for monitoring around a vehicle
NASA Astrophysics Data System (ADS)
Thibault, Simon
2008-04-01
Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects the safe driving and manoeuvring of that vehicle. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain a complete view around the car, several sensor systems are necessary. To solve this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such vision systems. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors into one. For example, a single panoramic sensor on the front of a vehicle could provide all the necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.
A portfolio of products from the rapid terrain visualization interferometric SAR
NASA Astrophysics Data System (ADS)
Bickel, Douglas L.; Doerry, Armin W.
2007-04-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor was built by Sandia National Laboratories for the Joint Programs Sustainment and Development (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieved better than HRTe Level IV position accuracy in near real-time. The system was flown on a deHavilland DHC-7 Army aircraft. This paper presents a collection of images and data products from the Rapid Terrain Visualization interferometric synthetic aperture radar. The imagery includes orthorectified images and DEMs from the RTV interferometric SAR radar.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, T; Kim, D; Kang, S
Purpose: Abdominal compression is known to be effective but often makes external-marker-based monitoring of breathing motion infeasible. In this study, we developed and evaluated a system that enables both abdominal compression and monitoring of residual abdominal motion simultaneously. The system can also provide visual-biofeedback capability. Methods: The system developed consists of a compression belt, an abdominal motion monitoring sensor (gas pressure sensor) and a visual biofeedback device. The compression belt was designed to compress the frontal side of the abdomen. The pressure level of the belt is controlled by air volume and monitored in real time using the gas pressure sensor. The system displays not only the real-time monitoring curve but also a guiding respiration model (e.g., a breath hold or shallow breathing curve) simultaneously on the head-mounted display, to help patients keep their breathing pattern as consistent as possible. Three healthy volunteers were enrolled in this pilot study, and respiratory signals (pressure variations) were obtained both with and without effective abdominal compression to investigate the feasibility of the developed system. Two guidance patterns, breath hold and shallow breathing, were tested. Results: All volunteers showed smaller abdominal motion with compression (about 40% amplitude reduction compared to without compression). However, the system was able to monitor residual abdominal motion for all volunteers. In addition, even under abdominal compression, it was possible to make the subjects successfully follow the guide patterns using the visual biofeedback system. Conclusion: The developed abdominal compression and respiratory guiding system was feasible for residual abdominal motion management. The system can be used for respiratory-motion-involved radiation therapy while maintaining the merit of abdominal compression. This work was supported by the Radiation Technology R&D program (No. 2013M2A2A7043498) and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT&Future Planning.
Jo, Byung Wan; Jo, Jun Ho; Khan, Rana Muhammad Asad; Kim, Jung Hoon; Lee, Yun Sung
2018-05-23
Structural Health Monitoring is a topic of great interest for port structures due to the ageing of structures and the limitations of existing evaluation methods. This paper presents a cloud computing-based stability evaluation platform for a pier-type port structure using Fiber Bragg Grating (FBG) sensors, in a system consisting of an FBG strain sensor, FBG displacement gauge, FBG angle meter, gateway, and cloud computing-based web server. The sensors were installed on core components of the structure and measurements were taken to evaluate it. The measurement values were transmitted to the web server via the gateway, where all data were analyzed and visualized to evaluate the structure based on the safety evaluation index (SEI). The platform enables efficient monitoring of pier-type port structures, which can be carried out easily anytime and anywhere by converging new technologies such as cloud computing and FBG sensors. In addition, the platform has been successfully implemented at “Maryang Harbor”, situated in Maryang-Myeon, Korea, to test its durability.
Lee, Inbok; Zhang, Aoqi; Lee, Changgil; Park, Seunghee
2016-01-01
This paper proposes a non-contact nondestructive evaluation (NDE) technique that uses laser-induced ultrasonic waves to visualize corrosion damage in aluminum alloy plate structures. The non-contact, pulsed-laser ultrasonic measurement system generates ultrasonic waves using a galvanometer-based Q-switched Nd:YAG laser and measures the ultrasonic waves using a piezoelectric (PZT) sensor. During scanning, a wavefield can be acquired by changing the excitation location of the laser point and measuring waves using the PZT sensor. The corrosion damage can be detected in the wavefield snapshots using the scattering characteristics of the waves that encounter corrosion. The structural damage is visualized by calculating the logarithmic values of the root mean square (RMS), with a weighting parameter to compensate for the attenuation caused by geometrical spreading and dispersion of the waves. An intact specimen is used to conduct a comparison with corrosion at different depths and sizes in other specimens. Both sides of the plate are scanned with the same scanning area to observe the effect of the location where corrosion has formed. The results show that the damage can be successfully visualized for almost all cases using the RMS-based functions, whether it formed on the front or back side. Also, the system is confirmed to have distinguished corroded areas at different depths. PMID:27999252
Lee, Inbok; Zhang, Aoqi; Lee, Changgil; Park, Seunghee
2016-12-16
This paper proposes a non-contact nondestructive evaluation (NDE) technique that uses laser-induced ultrasonic waves to visualize corrosion damage in aluminum alloy plate structures. The non-contact, pulsed-laser ultrasonic measurement system generates ultrasonic waves using a galvanometer-based Q-switched Nd:YAG laser and measures the ultrasonic waves using a piezoelectric (PZT) sensor. During scanning, a wavefield can be acquired by changing the excitation location of the laser point and measuring waves using the PZT sensor. The corrosion damage can be detected in the wavefield snapshots using the scattering characteristics of the waves that encounter corrosion. The structural damage is visualized by calculating the logarithmic values of the root mean square (RMS), with a weighting parameter to compensate for the attenuation caused by geometrical spreading and dispersion of the waves. An intact specimen is used to conduct a comparison with corrosion at different depths and sizes in other specimens. Both sides of the plate are scanned with the same scanning area to observe the effect of the location where corrosion has formed. The results show that the damage can be successfully visualized for almost all cases using the RMS-based functions, whether it formed on the front or back side. Also, the system is confirmed to have distinguished corroded areas at different depths.
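One plausible form of the weighted logarithmic RMS function described above is sketched here; the power-law time weighting and its exponent are assumptions standing in for the paper's compensation of geometric spreading and dispersion.

```python
import numpy as np

def damage_map_db(wavefield, dt, beta=1.0):
    """wavefield: (ny, nx, nt) signals recorded while the laser excitation
    scans an (ny, nx) grid; returns a dB image in which corrosion appears
    as anomalies of the weighted RMS energy."""
    nt = wavefield.shape[-1]
    t = (np.arange(nt) + 1) * dt
    weighted = wavefield * t ** beta               # attenuation compensation
    rms = np.sqrt(np.mean(weighted ** 2, axis=-1))
    return 20.0 * np.log10(rms / rms.max())        # log scale relative to peak
```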
Wearable Smart System for Visually Impaired People
Ramadhan, Ali Jasim
2018-01-01
In this paper, we present a wearable smart system to help visually impaired persons (VIPs) walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations on the wrist, which is helpful when the user has hearing loss or is in a noisy environment. In addition, the system alerts people in the surroundings when the user stumbles over or requires assistance, and the alert, along with the system location, is sent as a phone message to registered mobile phones of family members and caregivers. In addition, the registered phones can be used to retrieve the system location whenever required and activate real-time tracking of the VIP. We tested the system prototype and verified its functionality and effectiveness. The proposed system has more features than other similar systems. We expect it to be a useful tool to improve the quality of life of VIPs. PMID:29533970
Wearable Smart System for Visually Impaired People.
Ramadhan, Ali Jasim
2018-03-13
In this paper, we present a wearable smart system to help visually impaired persons (VIPs) walk by themselves through the streets, navigate in public places, and seek assistance. The main components of the system are a microcontroller board, various sensors, cellular communication and GPS modules, and a solar panel. The system employs a set of sensors to track the path and alert the user of obstacles in front of them. The user is alerted by a sound emitted through a buzzer and by vibrations on the wrist, which is helpful when the user has hearing loss or is in a noisy environment. In addition, the system alerts people in the surroundings when the user stumbles over or requires assistance, and the alert, along with the system location, is sent as a phone message to registered mobile phones of family members and caregivers. In addition, the registered phones can be used to retrieve the system location whenever required and activate real-time tracking of the VIP. We tested the system prototype and verified its functionality and effectiveness. The proposed system has more features than other similar systems. We expect it to be a useful tool to improve the quality of life of VIPs.
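The alert logic described above might look like the sketch below; the distance threshold, the impact-based fall test, and the device interfaces are all assumptions.

```python
# Hypothetical thresholds for obstacle and fall detection.
OBSTACLE_CM, FALL_G = 100.0, 2.5

def step(ultrasonic_cm, accel_g, buzzer, vibrator, gsm, gps):
    """One pass of the main loop: warn about obstacles, report falls."""
    if ultrasonic_cm < OBSTACLE_CM:
        buzzer.beep()        # audible warning
        vibrator.pulse()     # wrist vibration for noisy places or hearing loss
    if accel_g > FALL_G:     # crude impact-based fall detection
        lat, lon = gps.position()
        gsm.send_sms(f"Assistance needed at {lat:.5f},{lon:.5f}")
```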
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., objects moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
NASA Astrophysics Data System (ADS)
Kachejian, Kerry C.; Vujcic, Doug
1999-07-01
The Tactical Visualization Module (TVM) research effort will develop and demonstrate a portable, tactical information system to enhance the situational awareness of individual warfighters and small military units by providing real-time access to manned and unmanned aircraft, tactically mobile robots, and unattended sensors. TVM consists of a family of portable and hand-held devices being advanced into a next-generation, embedded capability. It enables warfighters to visualize the tactical situation by providing real-time video, imagery, maps, floor plans, and 'fly-through' video on demand. When combined with unattended ground sensors, such as Combat-Q, TVM permits warfighters to validate and verify tactical targets. The use of TVM results in faster target engagement times, increased survivability, and reduction of the potential for fratricide. TVM technology can support both mounted and dismounted tactical forces involved in land, sea, and air warfighting operations. As a PCMCIA card, TVM can be embedded in portable, hand-held, and wearable PCs. Thus, it leverages emerging tactical displays including flat-panel, head-mounted displays. The end result of the program will be the demonstration of the system with U.S. Army and USMC personnel in an operational environment. Raytheon Systems Company, the U.S. Army Soldier Systems Command -- Natick RDE Center (SSCOM-NRDEC) and the Defense Advanced Research Projects Agency (DARPA) are partners in developing and demonstrating the TVM technology.
Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.
2007-01-01
The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities are incorporated into these systems since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system comprised of miniature mobile sensor platforms that will cooperatively perform high confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including: the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may realistically be expected from the next generation of intelligent machines.
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary-wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
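For the fusion core, a minimal stand-in for approximately optimal Bayesian multi-sensor fusion is inverse-variance weighting of independent Gaussian estimates of the same quantity, sketched below with hypothetical radar and FLIR values.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance (maximum-likelihood) fusion of independent
    Gaussian sensor estimates; returns the fused value and its variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Example: millimeter-wave radar and FLIR estimates of the same range (m);
# the more certain FLIR estimate dominates, giving ~9.86 m.
print(fuse([10.2, 9.8], [0.5 ** 2, 0.2 ** 2]))
```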
Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets
ERIC Educational Resources Information Center
Wang, Huadong
2013-01-01
In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems. To meet future…
Modeling for Visual Feature Extraction Using Spiking Neural Networks
NASA Astrophysics Data System (ADS)
Kimura, Ichiro; Kuroe, Yasuaki; Kotera, Hiromichi; Murata, Tomoya
This paper develops models for “visual feature extraction” in biological systems by using “spiking neural network (SNN)”. The SNN is promising for developing the models because the information is encoded and processed by spike trains similar to biological neural networks. Two architectures of SNN are proposed for modeling the directionally selective and the motion parallax cell in neuro-sensory systems and they are trained so as to possess actual biological responses of each cell. To validate the developed models, their representation ability is investigated and their visual feature extraction mechanisms are discussed from the neurophysiological viewpoint. It is expected that this study can be the first step to developing a sensor system similar to the biological systems and also a complementary approach to investigating the function of the brain.
International Space Station Future Correlation Analysis Improvements
NASA Technical Reports Server (NTRS)
Laible, Michael R.; Pinnamaneni, Murthy; Sugavanam, Sujatha; Grygier, Michael
2018-01-01
Ongoing modal analyses and model correlation are performed on different configurations of the International Space Station (ISS). These analyses utilize on-orbit dynamic measurements collected using four main ISS instrumentation systems: External Wireless Instrumentation System (EWIS), Internal Wireless Instrumentation System (IWIS), Space Acceleration Measurement System (SAMS), and Structural Dynamic Measurement System (SDMS). Remote Sensor Units (RSUs) are network relay stations that acquire flight data from sensors. Measured data is stored in the Remote Sensor Unit (RSU) until it receives a command to download data via RF to the Network Control Unit (NCU). Since each RSU has its own clock, it is necessary to synchronize measurements before analysis. Imprecise synchronization impacts analysis results. A study was performed to evaluate three different synchronization techniques: (i) measurements visually aligned to analytical time-response data using model comparison, (ii) Frequency Domain Decomposition (FDD), and (iii) lag from cross-correlation to align measurements. This paper presents the results of this study.
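Synchronization technique (iii) can be sketched as follows: estimate a channel's clock offset from the lag that maximizes its cross-correlation with a reference channel, then shift the measurements accordingly; the zero-mean preprocessing and circular shift are simplifications.

```python
import numpy as np

def align_by_cross_correlation(ref, sig, fs):
    """Return sig shifted onto ref's clock and the offset in seconds."""
    c = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    lag = int(np.argmax(c)) - (len(ref) - 1)   # samples sig lags behind ref
    return np.roll(sig, -lag), lag / fs        # circular shift: fine for a sketch
```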
A system for respiratory motion detection using optical fibers embedded into textiles.
D'Angelo, L T; Weber, S; Honda, Y; Thiel, T; Narbonneau, F; Luth, T C
2008-01-01
In this contribution, a first prototype for mobile respiratory motion detection using optical fibers embedded into textiles is presented. The developed system consists of a T-shirt with an integrated fiber sensor and a portable monitoring unit with a wireless communication link enabling data analysis and visualization on a PC. A great effort is being made worldwide to develop mobile solutions for monitoring the vital signs of patients needing continuous medical care. Wearable, comfortable, smart textiles incorporating sensors are a good approach to this problem. In most cases, electrical sensors are integrated, which have significant limitations, for example for the monitoring of anaesthetized patients during Magnetic Resonance Imaging (MRI). OFSETH (Optical Fibre Embedded into technical Textile for Healthcare) uses optical sensor technologies to extend the current capabilities of medical technical textiles.
Neural networks for satellite remote sensing and robotic sensor interpretation
NASA Astrophysics Data System (ADS)
Martens, Siegfried
Remote sensing of forests and robotic sensor fusion can be viewed, in part, as supervised learning problems, mapping from sensory input to perceptual output. This dissertation develops ARTMAP neural networks for real-time category learning, pattern recognition, and prediction tailored to remote sensing and robotics applications. Three studies are presented. The first two use ARTMAP to create maps from remotely sensed data, while the third uses an ARTMAP system for sensor fusion on a mobile robot. The first study uses ARTMAP to predict vegetation mixtures in the Plumas National Forest based on spectral data from the Landsat Thematic Mapper satellite. While most previous ARTMAP systems have predicted discrete output classes, this project develops new capabilities for multi-valued prediction. On the mixture prediction task, the new network is shown to perform better than maximum likelihood and linear mixture models. The second remote sensing study uses an ARTMAP classification system to evaluate the relative importance of spectral and terrain data for map-making. This project has produced a large-scale map of remotely sensed vegetation in the Sierra National Forest. Network predictions are validated with ground truth data, and maps produced using the ARTMAP system are compared to a map produced by human experts. The ARTMAP Sierra map was generated in an afternoon, while the labor intensive expert method required nearly a year to perform the same task. The robotics research uses an ARTMAP system to integrate visual information and ultrasonic sensory information on a B14 mobile robot. The goal is to produce a more accurate measure of distance than is provided by the raw sensors. ARTMAP effectively combines sensory sources both within and between modalities. The improved distance percept is used to produce occupancy grid visualizations of the robot's environment. The maps produced point to specific problems of raw sensory information processing and demonstrate the benefits of using a neural network system for sensor fusion.
Real-time tracking of objects for a KC-135 microgravity experiment
NASA Technical Reports Server (NTRS)
Littlefield, Mark L.
1994-01-01
The design of a visual tracking system for use on the Extra-Vehicular Activity Helper/Retriever (EVAHR) is discussed. EVAHR is an autonomous robot designed to perform numerous tasks in an orbital microgravity environment. Since the ability to grasp a freely translating and rotating object is vital to the robot's mission, the EVAHR must analyze range images generated by its primary sensor. This allows EVAHR to locate and focus its sensors so that an accurate set of object poses can be determined and a grasp strategy planned. To test the visual tracking system being developed, a mathematical simulation was used to model the space station environment and maintain dynamics on the EVAHR and any other free-floating objects. A second phase of the investigation consists of a series of experiments carried out aboard a KC-135 aircraft flying a parabolic trajectory to simulate microgravity.
Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics
NASA Astrophysics Data System (ADS)
Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.
2015-12-01
Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded Datanet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near real-time sensor data including seismic sensors, environmental sensors, LIDAR and video streams are available through this interface. A system for archiving sensor data and metadata in NetCDF format has been implemented and will be demonstrated at AGU.
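The packet-conversion helpers mentioned above might resemble this sketch; the packet field names mirror typical Antelope content but, like the JSON layout, are assumptions here.

```python
import json
import time

def packet_to_json(packet):
    """Serialize one sensor packet (a dict) to a JSON record of the kind
    hinted at above; 'net', 'sta', 'chan', 't0', 'samprate' and 'data'
    are assumed field names."""
    return json.dumps({
        "stream": "{net}_{sta}_{chan}".format(**packet),
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(packet["t0"])),
        "samprate": packet["samprate"],
        "data": list(packet["data"]),      # raw counts or calibrated units
    })
```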
Active Sensing System with In Situ Adjustable Sensor Morphology
Nurzaman, Surya G.; Culha, Utku; Brodbeck, Luzius; Wang, Liyu; Iida, Fumiya
2013-01-01
Background: Despite the widespread use of sensors in engineering systems like robots and automation systems, the common paradigm is to have a fixed sensor morphology tailored to fulfill a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. In order to cope with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability in handling different kinds of sensing tasks with particular requirements. Methodology: This paper presents a robotic active sensing system which is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e., Hot Melt Adhesive (HMA). It will be shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures of a variety of shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system utilizes the structure to physically probe an unknown target object with suitable motion and transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. Conclusions/Significance: The efficacy of the proposed system is verified based on two results. Firstly, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e., softness and temperature, with desirable sensing characteristics. Secondly, given tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to autonomously accomplish them. The way the results motivate new research directions focusing on in situ adjustment of sensor morphology will also be discussed. PMID:24416094
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
NASA Astrophysics Data System (ADS)
Shahini Shamsabadi, Salar
A web-based PAVEment MONitoring system, PAVEMON, is a GIS-oriented platform for accommodating, representing, and leveraging data from a multi-modal mobile sensor system. The sensor system consists of acoustic, optical, electromagnetic, and GPS sensors and is capable of producing as much as 1 terabyte of data per day. Multi-channel raw sensor data (microphone, accelerometer, tire pressure sensor, video) and processed results (road profile, crack density, international roughness index, micro texture depth, etc.) are outputs of this sensor system. By correlating the sensor measurements and positioning data collected in tight time synchronization, PAVEMON attaches a spatial component to all the datasets. These spatially indexed outputs are placed into an Oracle database which integrates seamlessly with PAVEMON's web-based system. The web-based system of PAVEMON consists of two major modules: 1) a GIS module for visualizing and spatially analyzing pavement condition information layers, and 2) a decision-support module for managing maintenance and repair (M&R) activities and predicting future budget needs. PAVEMON weaves together sensor data with third-party climate and traffic information from the National Oceanic and Atmospheric Administration (NOAA) and Long Term Pavement Performance (LTPP) databases for an organized, data-driven approach to pavement management activities. PAVEMON deals with heterogeneous and redundant observations by fusing them into jointly-derived, higher-confidence results. A prominent example of the fusion algorithms developed within PAVEMON is a data fusion algorithm used for estimating overall pavement conditions in terms of ASTM's Pavement Condition Index (PCI). PAVEMON predicts PCI by undertaking a statistical fusion approach and selecting a subset of all the sensor measurements. Other fusion algorithms include noise-removal algorithms that remove false negatives in the sensor data, in addition to fusion algorithms developed for identifying features on the road. PAVEMON offers an ideal research and monitoring platform for rapid, intelligent and comprehensive evaluation of tomorrow's transportation infrastructure based on up-to-date data from heterogeneous sensor systems.
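The statistical fusion step described for PCI estimation can be illustrated with a simple inverse-variance weighting rule. This weighting scheme and the example numbers are our assumption for illustration only; the abstract does not disclose PAVEMON's actual model or its subset-selection criterion.

```python
import numpy as np

def fuse_pci(estimates, variances):
    """Combine sensor-specific condition estimates into one value,
    trusting low-variance sensors more (inverse-variance weighting)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))

# e.g. crack-density-, roughness- and texture-based PCI estimates (hypothetical)
print(fuse_pci([72.0, 65.0, 70.0], [25.0, 100.0, 50.0]))  # ~70.4
```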
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
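The structure of a Rao-Blackwellized particle filter of this kind is easy to miss in prose: particles sample robot paths, while landmarks are estimated by small per-particle Kalman filters conditioned on each path. The Python sketch below shows that skeleton in a deliberately simplified form (a single 2D robot with direct landmark-position measurements); the paper's system instead handles 3D landmarks observed as stereo measurements by several robots, with visual descriptors for data association.

```python
import copy
import numpy as np

class Particle:
    def __init__(self, pose):
        self.pose = pose        # (x, y, theta): one hypothesis of the path tip
        self.landmarks = {}     # id -> (mean, cov): a small EKF per landmark
        self.weight = 1.0

def predict(particles, v, w, dt, noise=(0.05, 0.02)):
    """Sample each particle through a noisy unicycle motion model."""
    for p in particles:
        x, y, th = p.pose
        vn = v + np.random.randn() * noise[0]
        wn = w + np.random.randn() * noise[1]
        p.pose = (x + vn * dt * np.cos(th), y + vn * dt * np.sin(th), th + wn * dt)

def update(particles, lm_id, z, R=np.diag([0.1, 0.1])):
    """Per-particle EKF update of one observed landmark (H = I here)."""
    z = np.asarray(z, dtype=float)
    for p in particles:
        if lm_id not in p.landmarks:
            p.landmarks[lm_id] = (z.copy(), R.copy())  # initialize new landmark
            continue
        mu, P = p.landmarks[lm_id]
        S = P + R                                      # innovation covariance
        K = P @ np.linalg.inv(S)                       # Kalman gain
        innov = z - mu
        p.landmarks[lm_id] = (mu + K @ innov, (np.eye(2) - K) @ P)
        # Weight by measurement likelihood for the resampling step
        p.weight *= np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)

def resample(particles):
    """Draw a new particle set proportionally to the weights."""
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    new = [copy.deepcopy(particles[i]) for i in idx]
    for p in new:
        p.weight = 1.0
    return new
```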
Fixation light hue bias revisited: implications for using adaptive optics to study color vision.
Hofer, H J; Blaschke, J; Patolia, J; Koenig, D E
2012-03-01
Current vision science adaptive optics systems use near infrared wavefront sensor 'beacons' that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D., & Hurvich, L. M. (1967). Fixation-light bias: An unwanted by-product of fixation control. Vision Research, 7, 805-809.), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for proper interpretation of adaptive optics experiments on the fine scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when used as a fixation target as well as when displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments and new methods of adaptive correction should be used in future experiments using adaptive optics to study color.
Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.
Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun
2017-01-17
The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
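The "joint kernels of spatial adjacency and thermal range similarity" correspond to what is usually called a joint (or cross) bilateral filter. The sketch below filters a grayscale visible image using weights computed from a co-registered thermal image; the window radius and sigmas are illustrative, and the paper's full pipeline adds region tracking and a CNN classifier on top.

```python
import numpy as np

def joint_bilateral(visible, thermal, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Filter a 2D visible image with weights from the thermal guide:
    a spatial Gaussian (adjacency) times a range Gaussian on thermal values."""
    H, W = visible.shape
    out = np.zeros((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # adjacency kernel
    pad = lambda im: np.pad(im.astype(float), radius, mode="edge")
    V, T = pad(visible), pad(thermal)
    for i in range(H):
        for j in range(W):
            v_win = V[i:i + 2*radius + 1, j:j + 2*radius + 1]
            t_win = T[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Thermal range-similarity kernel around the center pixel
            rng = np.exp(-(t_win - T[i + radius, j + radius])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * v_win).sum() / w.sum()
    return out
```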
Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT
NASA Technical Reports Server (NTRS)
Maxwell, Thomas
2012-01-01
Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
New Research Methods Developed for Studying Diabetic Foot Ulceration
NASA Technical Reports Server (NTRS)
1998-01-01
Dr. Brian Davis, one of the Cleveland Clinic Foundation's researchers, has been investigating the risk factors related to diabetic foot ulceration, a problem that accounts for 20 percent of all hospital admissions for diabetic patients. He had developed a sensor pad to measure the friction and pressure forces under a person's foot when walking. As part of NASA Lewis Research Center's Space Act Agreement with the Cleveland Clinic Foundation, Dr. Davis requested Lewis' assistance in visualizing the data from the sensor pad. As a result, Lewis' Interactive Data Display System (IDDS) was installed at the Cleveland Clinic. This computer graphics program is normally used to visualize the flow of air through aircraft turbine engines, producing color two- and three-dimensional images.
A Visual Analytics Approach for Station-Based Air Quality Data
Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui
2016-01-01
With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117
Innovative Pressure Sensor Platform and Its Integration with an End-User Application
Flores-Caballero, Antonio; Copaci, Dorin; Blanco, María Dolores; Moreno, Luis; Herrán, Jaime; Fernández, Iván; Ochoteco, Estíbaliz; Cabañero, German; Grande, Hans
2014-01-01
This paper describes the full integration of an innovative and low-cost pressure sensor sheet based on bendable and printed electronics technology. All integration stages are covered, from the lowest-level functional systems, such as physical analog sensor data acquisition, through embedded data processing, to the end-user interactive visual application. The data acquisition embedded software and hardware were developed using Rapid Control Prototyping (RCP). Finally, after successful testing of the first electronic prototype, tailor-made electronics were developed, reducing the electronics volume to 3.5 cm × 6 cm × 2 cm with a maximum power consumption of 765 mW for both the electronics and the pressure sensor sheet. PMID:24922455
Shahzad, Aamir; Landry, René; Lee, Malrey; Xiong, Naixue; Lee, Jongho; Lee, Changhoon
2016-01-01
Substantial changes have occurred in the Information Technology (IT) sectors and, with these changes, the demand for remote access to field sensor information has increased. This allows visualization, monitoring, and control through various electronic devices, such as laptops, tablets, iPads, PCs, and cellular phones. The smart phone is considered a more reliable, faster and more efficient device to access and monitor industrial systems and their corresponding information interfaces anywhere and anytime. This study describes the deployment of a protocol whereby industrial system information can be securely accessed by cellular phones via a Supervisory Control And Data Acquisition (SCADA) server. To achieve the study goals, proprietary protocol interconnectivity with non-proprietary protocols and the usage of interconnectivity services are considered in detail. They support the visualization of the SCADA system information, and the related operations, through smart phones. The intelligent sensors are configured and designated to process real information via cellular phones by employing information exchange services between the proprietary protocol and non-proprietary protocols. SCADA cellular access raises the issue of security flaws. For these challenges, a cryptography-based security method is considered and deployed, and it could be considered as part of a proprietary protocol. Subsequently, transmission flows from the smart phones through a cellular network. PMID:27314351
A qualitative review for wireless health monitoring system
NASA Astrophysics Data System (ADS)
Arshad, Atika; Fadzil Ismail, Ahmad; Khan, Sheroz; Zahirul Alam, A. H. M.; Tasnim, Rumana; Samnan Haider, Syed; Shobaki, Mohammed M.; Shahid, Zeeshan
2013-12-01
A proliferating interest has been observed over the past years in the development of accurate wireless systems to continuously monitor human activities in health care centres. Furthermore, because of the swelling elderly population and the inadequate number of competent staff in nursing homes, there is large market demand for health care monitoring systems. To detect humans, researchers have developed different methods, which include field identification techniques, visual sensor networks, radar detection, e-mobile techniques and so on. An all-encompassing overview of the advancement of non-wired human detection applications is presented in this paper. Inductive links are used for human detection applications where wiring an electronic system has become impractical in recent times. Keeping the shortcomings in mind, an Inductive Intelligent Sensor (IIS) has been proposed as a novel human monitoring system for future implementation. The proposed sensor works towards exploring the signature signals of human body movement and size. This proposed sensor is fundamentally based on an inductive loop that senses the presence of a passing human, resulting in an inductive change.
NASA Astrophysics Data System (ADS)
Celicourt, P.; Sam, R.; Piasecki, M.
2016-12-01
Global phenomena such as climate change and large-scale environmental degradation require the collection of accurate environmental data at detailed spatial and temporal scales, from which knowledge and actionable insights can be derived using data science methods. Despite significant advances in sensor network technologies, sensor and sensor network deployment remains a labor-intensive, time-consuming, cumbersome and expensive task. These factors demonstrate why environmental data collection remains a challenge, especially in developing countries where technical infrastructure, expertise and pecuniary resources are scarce. They also demonstrate why dense, long-term environmental data collection has historically been quite difficult. Moreover, hydrometeorological data collection efforts usually overlook the critically important inclusion of a standards-based system for storing, managing, organizing, indexing, documenting and sharing sensor data. We are developing a cross-platform software framework using the Python programming language that will allow us to develop a low-cost, end-to-end (from sensor to publication) system for hydrometeorological condition monitoring. The software framework contains provisions for describing sensors, sensor platforms, calibration and network protocols, as well as for sensor programming, data storage, data publication and visualization and, more importantly, data retrieval in a desired unit system. It is being tested on the Raspberry Pi microcomputer as the end node and a laptop PC as the base station in a wireless setting.
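The end-to-end idea (sensor read, standards-based storage, retrieval in a desired unit system) can be sketched as below; the class and function names are illustrative assumptions, not the framework's actual API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A couple of illustrative unit conversions, keyed by (from_unit, to_unit)
UNIT_CONVERSIONS = {
    ("degC", "degF"): lambda v: v * 9 / 5 + 32,
    ("mm", "in"):     lambda v: v / 25.4,
}

DATABASE = []   # stand-in for the standards-based data store

@dataclass
class Observation:
    variable: str     # e.g. "air_temperature"
    value: float
    unit: str         # unit the sensor reports in
    timestamp: str

def record(variable, raw_value, unit):
    """Store one observation with metadata (here: append to a local list)."""
    obs = Observation(variable, raw_value, unit,
                      datetime.now(timezone.utc).isoformat())
    DATABASE.append(obs)
    return obs

def retrieve(variable, unit):
    """Return stored observations converted to the requested unit."""
    out = []
    for obs in DATABASE:
        if obs.variable != variable:
            continue
        if obs.unit == unit:
            out.append(obs.value)
        else:
            out.append(UNIT_CONVERSIONS[(obs.unit, unit)](obs.value))
    return out

record("air_temperature", 21.5, "degC")
print(retrieve("air_temperature", "degF"))   # [70.7]
```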
Intelligent imaging systems for automotive applications
NASA Astrophysics Data System (ADS)
Thompson, Chris; Huang, Yingping; Fu, Shan
2004-03-01
In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper will describe work in this field performed in C2VIP over the last decade - starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations which are crucial for "intelligent" imaging systems: 1. Careful arrangement of sensor array. 2. Dynamic-Self-Calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, provides much more flexibility and reliability in complex situations. We will discuss how these problems can be addressed and what are the outstanding issues.
Elmannai, Wafa; Elleithy, Khaled
2017-01-01
The World Health Organization (WHO) reported that there are 285 million visually-impaired people worldwide. Among these individuals, there are 39 million who are totally blind. There have been several systems designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of the wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group of people. Thus, the contribution of this literature survey is to discuss in detail the most significant devices that are presented in the literature to assist this population and highlight the improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility to visually-impaired people. PMID:28287451
Nano-based sensor for assessment of weaponry structural degradation
NASA Astrophysics Data System (ADS)
Brantley, Christina L.; Edwards, Eugene; Ruffin, Paul B.; Kranz, Michael
2016-04-01
Missiles and weaponry-based systems are composed of metal structures that can degrade after prolonged exposure to environmental elements. A particular concern is the accumulation of corrosion that generally results from prolonged environmental exposure. Corrosion, defined as the unintended destruction or deterioration of a material due to its interaction with the environment, can negatively affect both equipment and infrastructure. System readiness and safety can be reduced if corrosion is not detected, prevented and managed. The current corrosion recognition methods (visual inspection, radiography, ultrasonics, eddy current, and thermography) are expensive and potentially unreliable. Visual perception is the most commonly used method for detecting corrosion in metal. Utilization of an inductance-based sensor system is being proposed as part of the authors' research. Results from this research will provide a more efficient, economical, and non-destructive sensing approach. Preliminary results demonstrate highly linear degradation within a corrosive environment due to the increased surface area available on the sensor coupon. The inductance of the devices, which represents a volume property of the coupon, demonstrated sensitivity to corrosion levels. The proposed approach allows a direct mass-loss measurement based on the change in the inductance of the coupon when placed in an alternating magnetic field. Prototype devices have demonstrated highly predictable corrosion rates that are easily measured using low-power small electronic circuits and energy-harvesting methods to interrogate the sensor. Preliminary testing demonstrates that the device concept is acceptable and that future opportunities for use in low-power embedded applications are achievable. Key results in this paper include an assessment of typical Army corrosion costs, degradation patterns of varying metal materials, and the application of wireless sensor elements.
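Since the abstract reports a direct mass-loss measurement from the inductance change together with highly linear degradation, the underlying calibration can plausibly be written (our assumption; the paper does not give its formula) as

```latex
% Assumed linear calibration consistent with the reported behavior
\Delta m \;\approx\; k \,\lvert L - L_0 \rvert
```

where L0 is the coupon's baseline inductance, L is the inductance measured in the alternating magnetic field, and k is an empirically fitted constant for the coupon's geometry and material.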
A real-time detector system for precise timing of audiovisual stimuli.
Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna
2012-01-01
The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information about stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real-time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the audio or video stimulus presentation tools or signal acquisition system used. The sensor solution consists of two independent sensors: one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds, such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab.
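The adjustable dead time works as a refractory period: after one marker, further threshold crossings are ignored long enough for a multipart stimulus to register as a single event. A minimal offline Python equivalent of that logic (the actual sensor implements it on a microcontroller in real time):

```python
import numpy as np

def detect_onsets(signal, fs, threshold, dead_time_s):
    """Return sample indices where |signal| first crosses threshold,
    suppressing re-triggers for dead_time_s seconds after each marker."""
    dead = int(dead_time_s * fs)
    onsets, next_allowed = [], 0
    for n, s in enumerate(np.abs(signal)):
        if n >= next_allowed and s >= threshold:
            onsets.append(n)
            next_allowed = n + dead   # ignore the rest of a multipart stimulus
    return onsets
```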
Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment
Pouke, Matti; Häkkilä, Jonna
2013-01-01
Homecare systems for elderly people are becoming increasingly important due to both economic reasons as well as patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential especially due to the privacy preserving and simplified information presentation style, and secondly that simple representations and glancability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747
Situation exploration in a persistent surveillance system with multidimensional data
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.
2013-03-01
There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application is developed for rapid assessment and evaluation of different hypotheses based on context-sensitive ontologies spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and it is demonstrated how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally in this paper, a technique for rapid search of tagged information contingent on ranking and confidence is explained for analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.
Gamma/x-ray linear pushbroom stereo for 3D cargo inspection
NASA Astrophysics Data System (ADS)
Zhu, Zhigang; Hu, Yu-Chi
2006-05-01
For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, 3D measurements and visualization of a 3D cargo container and the objects inside are presented.
Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng
2018-01-01
In recent years, many studies have focused on the application of advanced technology to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model which provides a visualized way to manage the whole construction lifecycle. This paper integrates BIM and WSN into a unique system which enables a construction site to visually monitor its safety status via a spatial, colored interface and remove any hazardous gas automatically. Many wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental condition (temperature and humidity) data; in any region where an abnormal status is detected, the BIM model highlights the region, and an alarm and ventilator on site start automatically to warn workers and remove the hazard (a toy sketch of this rule follows below). The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions, and suggestions are summarized for further applications. PMID:29393887
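The per-region monitoring rule reduces to a threshold check wired to the BIM view and the actuators. A toy Python sketch, with an assumed gas limit and hypothetical device interfaces (highlight, start):

```python
GAS_LIMIT_PPM = 50.0   # illustrative threshold, not from the paper

def check_region(region_id, gas_ppm, bim_model, ventilator, alarm):
    """Flag one monitored region in the BIM view and drive the actuators."""
    if gas_ppm > GAS_LIMIT_PPM:
        bim_model.highlight(region_id, color="red")   # assumed BIM interface
        alarm.start()                                 # assumed device hook
        ventilator.start()                            # assumed device hook
    else:
        bim_model.highlight(region_id, color="green")
```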
Human factor roles in design of teleoperator systems
NASA Technical Reports Server (NTRS)
Janow, C.; Malone, T. B.
1973-01-01
Teleoperator systems are considered, giving attention to types of teleoperators, a manned space vehicle attached manipulator, a free-flying teleoperator, a surface exploration roving vehicle, the human factors role in total system design, the manipulator system, the sensor system, the communication system, the control system, and the mobility system. The role of human factors in the development of teleoperator systems is also discussed, taking into account visual systems, an operator control station, and the manipulators.
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Reeder, B; Chung, J; Le, T; Thompson, H; Demiris, G
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records". Our objectives were to: 1) characterize older adult participants' perceived usefulness of in-home sensor data and 2) develop novel visual displays for sensor data from Ambient Assisted Living environments that can become part of electronic health records. Semi-structured interviews were conducted with community-dwelling older adult participants during three- and six-month visits. We engaged participants in two design iterations by soliciting feedback about display types and visual displays of simulated data related to a fall scenario. Interview transcripts were analyzed to identify themes related to perceived usefulness of sensor data. Thematic analysis identified three themes: perceived usefulness of sensor data for managing health; factors that affect perceived usefulness of sensor data; and perceived usefulness of visual displays. Visual displays were cited as potentially useful for family members and health care providers. Three novel visual displays were created based on interview results, design guidelines derived from prior AAL research, and principles of graphic design theory. Participants identified potential uses of personal activity data for monitoring health status and capturing early signs of illness. One area for future research is to determine how visual displays of AAL data might be utilized to connect family members and health care providers through shared understanding of activity levels versus a more simplified view of self-management. Connecting informal and formal caregiving networks may facilitate better communication between older adults, family members and health care providers for shared decision-making.
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
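The reason cameras need a dedicated initialization stage is that a single bearing fixes only a ray, not a depth; depth becomes observable only once the camera has translated enough for two bearings to disagree. A minimal 2D illustration of that constraint (not the paper's specific two-step scheme):

```python
import numpy as np

def depth_from_two_bearings(baseline, bearing1, bearing2, min_parallax=1e-3):
    """2D illustration: the camera starts at the origin, translates `baseline`
    along +x, and measures bearings (radians from the +x axis) to the same
    feature at both poses. The sine rule gives the range from the second pose;
    with too little parallax the depth is unobservable, which is exactly why
    monocular SLAM needs a feature-initialization stage."""
    parallax = bearing2 - bearing1
    if abs(np.sin(parallax)) < min_parallax:
        return None                     # feature not yet triangulable
    return baseline * np.sin(bearing1) / np.sin(parallax)

# Feature at (1, 1): bearings 45 deg then 90 deg, baseline 1 -> range 1.0
print(depth_from_two_bearings(1.0, np.pi / 4, np.pi / 2))
```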
Remote Power Systems for Sensors on the Northern Border
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, Lin J; Kandt, Alicen J
The National Renewable Energy Laboratory (NREL) is working with the Department of Homeland Security (DHS) [1] to field sensors that accurately track different types of transportation across the northern border of the U.S. To do this, the sensors require remote power so that they can be placed in the most advantageous geographical locations, often where no grid power is available. This enables the sensors to detect and track aircraft/vehicles despite natural features (e.g., mountains, ridges, valleys, trees) that often prevent standard methods (e.g., monostatic radar or visual observers) from detecting them. Without grid power, portable power systems were used to provide between 80 and 300 W continuously, even in bitter cold and when buried under feet of snow/ice. NREL provides details about the design, installation, and lessons learned from long-term deployment of a second generation of novel power systems that used adjustable-angle photovoltaics (PV), lithium-ion batteries, and fuel cells that provide power to achieve 100% up-time.
Distance-Dependent Multimodal Image Registration for Agriculture Tasks
Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad
2015-01-01
Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
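A sketch of the DDTM idea in Python: calibrate a projective transform at several known distances (e.g., from matched ACP centroids), then regress each matrix coefficient as a polynomial in distance so the transform can be interpolated at runtime. The polynomial degree and regression choice are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np

def fit_ddtm(distances, homographies, deg=2):
    """distances: calibration distances (needs at least deg+1 of them).
    homographies: list of 3x3 arrays, one per calibration distance.
    Returns a function H(d) built from per-coefficient polynomial fits."""
    H_stack = np.stack(homographies)                      # (n, 3, 3)
    coeff_polys = [[np.polyfit(distances, H_stack[:, i, j], deg)
                    for j in range(3)] for i in range(3)]

    def H_at(d):
        H = np.array([[np.polyval(coeff_polys[i][j], d) for j in range(3)]
                      for i in range(3)])
        return H / H[2, 2]                                # fix projective scale
    return H_at
```

With `H = fit_ddtm(...)`, a thermal pixel can then be registered to the visible image at range d by applying `H(d)` to its homogeneous coordinates.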
Standardized Photometric Calibrations for Panchromatic SSA Sensors
NASA Astrophysics Data System (ADS)
Castro, P.; Payne, T.; Battle, A.; Cole, Z.; Moody, J.; Gregory, S.; Dao, P.
2016-09-01
Panchromatic sensors used for Space Situational Awareness (SSA) have no standardized method for transforming the net flux detected by a CCD without a spectral filter into an exo-atmospheric magnitude in a standard magnitude system. Each SSA data provider appears to have their own method for computing the visual magnitude based on panchromatic brightness making cross-comparisons impossible. We provide a procedure in order to standardize the calibration of panchromatic sensors for the purposes of SSA. A technique based on theoretical modeling is presented that derives standard panchromatic magnitudes from the Johnson-Cousins photometric system defined by Arlo Landolt. We verify this technique using observations of Landolt standard stars and a Vega-like star to determine empirical panchromatic magnitudes and compare these to synthetically derived panchromatic magnitudes. We also investigate color terms caused by differences in the quantum efficiency (QE) between the Landolt standard system and panchromatic systems. We evaluate calibrated panchromatic satellite photometry by observing several GEO satellites and standard stars using three different sensors. We explore the effect of satellite color terms by comparing the satellite signatures. In order to remove other variables affecting the satellite photometry, two of the sensors are at the same site using different CCDs. The third sensor is geographically separate from the first two allowing for a definitive test of calibrated panchromatic satellite photometry.
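As a hedged sketch of what such a standardization implies, instrumental panchromatic counts can be tied to the Johnson-Cousins system through a zero point, an extinction term, and a color term; the generic form below is standard photometric-calibration practice, not the paper's specific solution:

```latex
m_{\mathrm{pan}} \;=\; -2.5\,\log_{10}\!\left(F_{\mathrm{net}}\right)
\;+\; \mathrm{ZP} \;-\; kX \;+\; c\,(B-V)
```

Here F_net is the net detected flux in counts per second, ZP the sensor zero point, k the extinction coefficient at airmass X, and c the color coefficient fitted from Landolt standards; the c(B-V) color term is what absorbs the QE-driven differences the authors investigate.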
A teleoperated system for remote site characterization
NASA Technical Reports Server (NTRS)
Sandness, Gerald A.; Richardson, Bradley S.; Pence, Jon
1994-01-01
The detection and characterization of buried objects and materials is an important step in the restoration of burial sites containing chemical and radioactive waste materials at Department of Energy (DOE) and Department of Defense (DOD) facilities. By performing these tasks with remotely controlled sensors, it is possible to obtain improved data quality and consistency as well as enhanced safety for on-site workers. Therefore, the DOE Office of Technology Development and the US Army Environmental Center have jointly supported the development of the Remote Characterization System (RCS). One of the main components of the RCS is a small remotely driven survey vehicle that can transport various combinations of geophysical and radiological sensors. Currently implemented sensors include ground-penetrating radar, magnetometers, an electromagnetic induction sensor, and a sodium iodide radiation detector. The survey vehicle was constructed predominantly of non-metallic materials to minimize its effect on the operation of its geophysical sensors. The system operator controls the vehicle from a remote, truck-mounted, base station. Video images are transmitted to the base station by a radio link to give the operator necessary visual information. Vehicle control commands, tracking information, and sensor data are transmitted between the survey vehicle and the base station by means of a radio ethernet link. Precise vehicle tracking coordinates are provided by a differential Global Positioning System (GPS).
Park, Heun; Kim, Dong Sik; Hong, Soo Yeong; Kim, Chulmin; Yun, Jun Yeong; Oh, Seung Yun; Jin, Sang Woo; Jeong, Yu Ra; Kim, Gyu Tae; Ha, Jeong Sook
2017-06-08
In this study, we report on the development of a stretchable, transparent, and skin-attachable strain sensor integrated with a flexible electrochromic device as a human skin-inspired interactive color-changing system. The strain sensor consists of a spin-coated conductive nanocomposite film of poly(vinyl alcohol)/multi-walled carbon nanotube/poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) on a polydimethylsiloxane substrate. The sensor exhibits excellent performance of high sensitivity, high durability, fast response, and high transparency. An electrochromic device (ECD) made of electrochemically synthesized polyaniline nanofibers and V2O5 on an indium-tin-oxide-coated polyethylene terephthalate film experiences a change in color from yellow to dark blue on application of voltage. The strain sensor and ECD are integrated on skin via an Arduino circuit for an interactive color change with the variation of the applied strain, which enables a real-time visual display of body motion. This integrated system demonstrates high potential for use in interactive wearable devices, military applications, and smart robots.
Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor
NASA Astrophysics Data System (ADS)
Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso
2018-04-01
Automatic navigation for drones is being developed these days, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object, and an ultrasonic sensor was used to detect obstacle distance. The method used to track an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that obstacles are fully detected. The obstacle avoidance system observes the program's decisions under various obstacle conditions read by the camera and ultrasonic sensors. Visual feedback control based on PID controllers is used to control the drone's movement.
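The visual-feedback control loop described can be sketched with a textbook PID structure; the gains, the 1.5 m safety margin, and the sign conventions below are illustrative assumptions, not the paper's tuning.

```python
class PID:
    """Textbook PID controller; gains are placeholders."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

yaw_pid = PID(0.8, 0.0, 0.2)
range_pid = PID(0.5, 0.1, 0.1)

def control_step(obstacle_px_x, image_width, sonar_range_m, dt=0.02):
    """One avoidance cycle: yaw away from the tracked obstacle's horizontal
    pixel position, and brake as the ultrasonic range closes on an assumed
    1.5 m safety margin."""
    yaw_cmd = yaw_pid.step(obstacle_px_x - image_width / 2.0, dt)
    brake_cmd = range_pid.step(max(0.0, 1.5 - sonar_range_m), dt)
    return yaw_cmd, brake_cmd
```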
The Design of an Autonomous Underwater Vehicle for Water Quality Monitoring
NASA Astrophysics Data System (ADS)
Li, Yulong; Liu, Rong; Liu, Shujin
2018-01-01
This paper describes the development of a civilian-use autonomous underwater vehicle (AUV) for water quality monitoring at reservoirs and watercourses that can obtain real-time visual and locational information. The mechanical design was completed with the CAD software Solidworks. Four thrusters on board, two horizontal and two vertical, enable the vehicle to surge, heave, yaw, and pitch. A specialized water sample collection compartment is designed to perform water collection at target locations. The vehicle has a central controller (STM32) and a sub-coordinate controller (Arduino MEGA 2560) that coordinates multiple sensors, including an inertial sensor, ultrasonic sensors, etc. A Global Navigation Satellite System (GNSS) receiver and the inertial sensor enable the vehicle's localization. Remote operators monitor and control the vehicle via a host computer system. Operators choose either a semi-autonomous mode, in which they set target locations, or manual mode. The experimental results show that the vehicle is able to perform well in either mode.
Visualizing Motion Patterns in Acupuncture Manipulation.
Lee, Ye-Seul; Jung, Won-Mo; Lee, In-Seon; Lee, Hyangsook; Park, Hi-Joon; Chae, Younbyoung
2016-07-16
Acupuncture manipulation varies widely among practitioners in clinical settings, and it is difficult to teach novice students how to perform acupuncture manipulation techniques skillfully. The Acupuncture Manipulation Education System (AMES) is an open source software system designed to enhance acupuncture manipulation skills using visual feedback. Using a phantom acupoint and motion sensor, our method for acupuncture manipulation training provides visual feedback regarding the actual movement of the student's acupuncture manipulation in addition to the optimal or intended movement, regardless of whether the manipulation skill is lifting, thrusting, or rotating. Our results show that students could enhance their manipulation skills by training using this method. This video shows the process of manufacturing phantom acupoints and discusses several issues that may require the attention of individuals interested in creating phantom acupoints or operating this system.
Polar exponential sensor arrays unify iconic and Hough space representation
NASA Technical Reports Server (NTRS)
Weiman, Carl F. R.
1990-01-01
The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
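The unification can be made concrete with one identity. With image points written in log-polar coordinates (u, v) = (ln r, phi), a straight line with Hough normal parameters (rho, theta), i.e. x cos(theta) + y sin(theta) = rho, satisfies r cos(phi - theta) = rho and therefore maps to

```latex
u \;=\; \ln\rho \;-\; \ln\cos\!\left(v - \theta\right)
```

so every straight line is the same template curve merely translated by (ln rho, theta): the translation offsets in the log-polar image are exactly the Hough parameters, which is why line detection reduces to translation-invariant template matching with no slope quantization. (This is a standard derivation consistent with the abstract, not a quotation of the paper's own equations.)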
NASA Technical Reports Server (NTRS)
Diak, George R.; Huang, Hung-Lung; Kim, Dongsoo
1990-01-01
The paper addresses the concept of synthetic satellite imagery as a visualization and diagnostic tool for understanding satellite sensors of the future, and details preliminary results on the quality of soundings from current sensors. Preliminary results are presented on the quality of soundings from the combination of the High-Resolution Infrared Radiometer Sounder and the Advanced Microwave Sounding Unit. Results are also presented on the first Observing System Simulation Experiment using these data in a mesoscale numerical prediction model.
NASA Astrophysics Data System (ADS)
Kavanagh, K.; Davis, A.; Gessler, P.; Hess, H.; Holden, Z.; Link, T. E.; Newingham, B. A.; Smith, A. M.; Robinson, P.
2011-12-01
Developing sensor networks that are robust enough to perform in the world's remote regions is critical since these regions serve as important benchmarks compared to human-dominated areas. Paradoxically, the factors that make these remote, natural sites challenging for sensor networking are often what make them indispensable for climate change research. We aim to overcome these challenges by developing a three-dimensional sensor network arrayed across a topoclimatic gradient (1100-1800 meters) in a wilderness area in central Idaho. Development of this sensor array builds upon advances in sensing, networking, and power supply technologies coupled with experiences of the multidisciplinary investigators in conducting research in remote mountainous locations. The proposed gradient monitoring network will provide near real-time data from a three-dimensional (3-D) array of sensors measuring biophysical parameters used in ecosystem process models. The network will monitor atmospheric carbon dioxide concentration, humidity, air and soil temperature, soil water content, precipitation, incoming and outgoing shortwave and longwave radiation, snow depth, wind speed and direction, tree stem growth and leaf wetness at time intervals ranging from seconds to days. The long-term goal of this project is to realize a transformative integration of smart sensor networks adaptively communicating data in real-time to ultimately achieve a 3-D visualization of ecosystem processes within remote mountainous regions. Process models will be the interface between the visualization platforms and the sensor network. This will allow us to better predict how non-human dominated terrestrial and aquatic ecosystems function and respond to climate dynamics. Access to the data will be ensured as part of the Northwest Knowledge Network being developed at the University of Idaho, through ongoing Idaho NSF-funded cyber infrastructure initiatives, and existing data management systems funded by NSF, such as the CUAHSI Hydrologic Information System (HIS). These efforts will enhance cross-disciplinary understanding of natural and anthropogenic influences on ecosystem function and ultimately inform decision-making.
NASA Astrophysics Data System (ADS)
Ciurapiński, Wieslaw; Dulski, Rafal; Kastek, Mariusz; Szustakowski, Mieczyslaw; Bieszczad, Grzegorz; Życzkowski, Marek; Trzaskawka, Piotr; Piszczek, Marek
2009-09-01
The paper presents the concept of a multispectral protection system for perimeter protection of stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar allows the system to locate potential intruders and to control an observation area for the system cameras. The multisensor construction of the system ensures a significant improvement in intruder detection probability and a reduction in false alarms. A final decision from the system is worked out using image data. The method of data fusion used in the system is presented. The system works under the control of the FLIR Nexus system. The Nexus offers complete technology and components to create network-based, high-end integrated systems for security and surveillance applications. Based on a unique "plug and play" architecture, the system provides unmatched flexibility and straightforward integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders over a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering.
Nykiel, Grzegorz; Zanimonskiy, Yevgen M.; Yampolski, Yuri M.; Figurski, Mariusz
2017-01-01
The technique of the orthogonal projection of ionosphere electronic content variations for mapping total electron content (TEC) allows us to visualize ionospheric irregularities. For the reconstruction of global ionospheric characteristics, numerous global navigation satellite system (GNSS) receivers located in different regions of the Earth are used as sensors. We used dense GNSS networks in central Europe to detect and investigate a special type of plasma inhomogeneities, called travelling ionospheric disturbances (TID). Such use of GNSS sensors allows us to reconstruct the main TID parameters, such as spatial dimensions, velocities, and directions of their movement. The paper gives examples of the restoration of dynamic characteristics of ionospheric irregularities for quiet and disturbed geophysical conditions. Special attention is paid to the dynamics of ionospheric disturbances stimulated by the magnetic storms of two St. Patrick’s Days (17 March 2013 and 2015). Additional opportunities for the remote sensing of the ionosphere with the use of dense regional networks of GNSS receiving sensors have been noted too. PMID:28994718
Nykiel, Grzegorz; Zanimonskiy, Yevgen M; Yampolski, Yuri M; Figurski, Mariusz
2017-10-10
The technique of the orthogonal projection of ionosphere electronic content variations for mapping total electron content (TEC) allows us to visualize ionospheric irregularities. For the reconstruction of global ionospheric characteristics, numerous global navigation satellite system (GNSS) receivers located in different regions of the Earth are used as sensors. We used dense GNSS networks in central Europe to detect and investigate a special type of plasma inhomogeneities, called travelling ionospheric disturbances (TID). Such use of GNSS sensors allows us to reconstruct the main TID parameters, such as spatial dimensions, velocities, and directions of their movement. The paper gives examples of the restoration of dynamic characteristics of ionospheric irregularities for quiet and disturbed geophysical conditions. Special attention is paid to the dynamics of ionospheric disturbances stimulated by the magnetic storms of two St. Patrick's Days (17 March 2013 and 2015). Additional opportunities for the remote sensing of the ionosphere with the use of dense regional networks of GNSS receiving sensors have been noted too.
Local Positioning System Using Flickering Infrared LEDs
Raharijaona, Thibaut; Mawonou, Rodolphe; Nguyen, Thanh Vu; Colonnier, Fabien; Boyron, Marc; Diperi, Julien; Viollet, Stéphane
2017-01-01
A minimalistic optical sensing device for indoor localization is proposed to estimate the relative position between the sensor and active markers using amplitude-modulated infrared light. The innovative insect-based sensor can measure azimuth and elevation angles with respect to two small and cheap active infrared light emitting diodes (LEDs) flickering at two different frequencies. In comparison to a previous lensless visual sensor that we proposed for proximal localization (less than 30 cm), we implemented: (i) a minimalistic sensor in terms of small size (10 cm3), light weight (6 g) and low power consumption (0.4 W); (ii) an Arduino-compatible demodulator for fast analog signal processing requiring low computational resources; and (iii) an indoor positioning system for a mobile robotic application. Our results confirmed that the proposed sensor was able to estimate the position at a distance of 2 m with an accuracy as small as 2 cm at a sampling frequency of 100 Hz. Our sensor is also suitable for implementation in a position feedback loop for indoor robotic applications in GPS-denied environments. PMID:29099743
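To make the demodulation step concrete, here is a minimal Python sketch of lock-in (synchronous) demodulation, the standard way to separate two amplitude-modulated LEDs flickering at distinct carrier frequencies; the carrier frequencies, sampling rate, amplitudes, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lockin_demodulate(signal, fs, f_carrier):
    """Recover the amplitude of one flickering LED by synchronous
    (lock-in) demodulation at its carrier frequency."""
    t = np.arange(len(signal)) / fs
    # Mix with quadrature references, then low-pass by averaging.
    i = np.mean(signal * np.cos(2 * np.pi * f_carrier * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_carrier * t))
    return 2 * np.hypot(i, q)

# Illustrative: two LEDs flickering at 1.0 kHz and 1.5 kHz, sampled at 10 kHz.
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
photodiode = (0.8 * np.sin(2 * np.pi * 1000 * t)
              + 0.3 * np.sin(2 * np.pi * 1500 * t)
              + 0.05 * np.random.randn(t.size))  # sensor noise

print(lockin_demodulate(photodiode, fs, 1000))  # ~0.8 (LED 1)
print(lockin_demodulate(photodiode, fs, 1500))  # ~0.3 (LED 2)
```

Comparing the recovered amplitudes across photodetector facets is what would then yield azimuth and elevation estimates.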
Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force
NASA Astrophysics Data System (ADS)
Muis, Abdul; Ohnishi, Kouhei
Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application of this is the pushing operation, in which the robot uses multiple sensors to move an object by pushing it. Generally, a pushing operation consists of "approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the impact when the robot body or tool-tip hits the object is neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation, so that an external force can be followed. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer(2), which provides contact sensation. The other compensates for non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized here with a visual sensor. Instead of using visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, experimental results show the validity of the proposed method.
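As a rough illustration of the dual-compliance idea, the sketch below compensates a position command with two terms, one driven by the estimated contact force and one by a vision-derived virtual force. The gains, the repulsive virtual-force mapping, and all numbers are assumptions for illustration, not the paper's formulation.

```python
def compliance_update(x_ref, f_contact, f_virtual, k_c=50.0, k_v=100.0):
    """One control step: compensate the commanded position with both a
    contact and a non-contact (visual) compliance term.

    x_ref      : nominal position command [m]
    f_contact  : external force estimated by the reaction torque observer [N]
    f_virtual  : virtual force derived from visual depth to the object [N]
    k_c, k_v   : assumed compliance stiffnesses [N/m]
    """
    return x_ref - f_contact / k_c - f_virtual / k_v

def virtual_force(depth, d_safe=0.30, k_rep=20.0):
    """Map measured depth to a repulsive virtual force that grows as the
    robot approaches the object, so it decelerates before contact
    (an illustrative mapping, not the paper's exact one)."""
    return k_rep * max(0.0, d_safe - depth)

# Example: object seen 0.25 m away while 2 N of contact force is estimated.
x_cmd = compliance_update(x_ref=0.50, f_contact=2.0,
                          f_virtual=virtual_force(0.25))
print(f"compensated command: {x_cmd:.3f} m")
```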
Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter
Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun
2017-01-01
The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys human intention through its motion. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have exploited the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the visual sensor's weakness against illumination. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716
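The "joint kernels of spatial adjacency and thermal range similarity" describe a joint (cross) bilateral filter with the thermal image as the guide. Below is a minimal NumPy sketch of such a filter; the window radius and kernel widths are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def thermal_guided_joint_filter(visible, thermal, radius=3,
                                sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filter: smooth the visible image with weights from
    (i) spatial adjacency and (ii) range similarity in the thermal guide,
    so edges present in the thermal image are preserved."""
    h, w = visible.shape
    pad = radius
    vis = np.pad(visible.astype(float), pad, mode="edge")
    thm = np.pad(thermal.astype(float), pad, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # adjacency kernel
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            v_win = vis[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            t_win = thm[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Thermal range-similarity kernel centred on this pixel.
            rng = np.exp(-(t_win - thm[y + pad, x + pad])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * v_win).sum() / wgt.sum()
    return out
```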
2011-03-09
Nocturnal visual orientation in flying insects: a benchmark for the design of vision-based sensors in Micro-Aerial Vehicles ... Technical horizon sensors ... Over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred ... possible elevations, it may severely degrade the performance of sensors by local saturation. Therefore it is necessary to find a method whereby the effect ...
Pereira, G M; Heins, B J; Endres, M I
2018-03-01
The objective of this study was to validate an ear-tag accelerometer sensor (CowManager SensOor, Agis Automatisering BV, Harmelen, the Netherlands) against direct visual observation in a grazing dairy herd. Lactating crossbred cows (n = 24) were used for this experiment at the University of Minnesota West Central Research and Outreach Center grazing dairy (Morris, MN) during the summer of 2016. A single trained observer recorded behavior every minute for 6 h for each cow (24 cows × 6 h = 144 h of observation total). Direct visual observation was compared with sensor data during August and September 2016. The sensor detected and identified ear and head movements and, through algorithms, classified each minute as one of the following behaviors: rumination, eating, not active, active, and high active. A 2-sided t-test was conducted with PROC TTEST of SAS (SAS Institute Inc., Cary, NC) to compare the percentage of time each cow's behavior was recorded by direct visual observation and by sensor data. For total recorded time, the percentages of time from direct visual observation compared with sensor data were 17.9 and 19.1% for rumination, 52.8 and 51.9% for eating, 17.4 and 11.9% for not active, and 7.9 and 21.1% for active. Pearson correlations (PROC CORR of SAS) were used to evaluate associations between direct visual observations and sensor data. Furthermore, the concordance correlation coefficient (CCC), bias correction factors, location shift, and scale shift (epiR package of R version 3.3.1; R Foundation for Statistical Computing, Vienna, Austria) were calculated to provide a measure of accuracy and precision. Correlations between visual observations and sensor data ranged from weak to high across the 4 behaviors (rumination: r = 0.72, CCC = 0.71; eating: r = 0.88, CCC = 0.88; not active: r = 0.65, CCC = 0.52; and active: r = 0.20, CCC = 0.19). The results suggest that the sensor accurately monitors rumination and eating behavior of grazing dairy cattle. However, active behaviors may be more difficult for the sensor to record than others. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
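For readers unfamiliar with the agreement statistics used here, the following sketch computes Pearson's r and Lin's concordance correlation coefficient (CCC) from paired observer/sensor percentages; the data values are made up for illustration.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both poor
    correlation (precision) and bias between the two methods (accuracy)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(), y.std()
    return (2 * r * sx * sy) / (sx**2 + sy**2 + (x.mean() - y.mean())**2)

# Illustrative: per-cow % of time eating, by observer vs. by ear-tag sensor.
observed = [55, 48, 60, 50, 52, 47]
sensor   = [53, 50, 58, 49, 54, 45]
print(f"r = {np.corrcoef(observed, sensor)[0, 1]:.2f}, "
      f"CCC = {lin_ccc(observed, sensor):.2f}")
```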
Pein, Miriam; Eckert, Carolin; Preis, Maren; Breitkreutz, Jörg
2013-09-01
Performance qualification (PQ) of taste sensing systems is mandatory for their use in the pharmaceutical industry. According to ICH Q2 (R1) and a recent adaptation for taste sensing systems, non-specificity, log-linear relationships between the concentration of analytes and the sensor signal, and repeatability with relative standard deviation (RSD) values <4% were defined as basic requirements to pass a PQ. In the present work, the αAstree taste sensing system passed the PQ procedure using recent sensor batches for pharmaceutical applications (sensor set #2) and a modified measurement protocol. Log-linear relationships between concentration and the responses of each sensor were investigated for different bitter-tasting active pharmaceutical ingredients (APIs). Using the new protocol, RSD values <2.1% were obtained in the repeatability study. Applying the visual evaluation approach, the detection and quantitation limits could be determined for caffeine citrate with every sensor (LOD: 0.05-0.5 mM, LOQ: 0.1-0.5 mM). In addition, the sensor set marketed for food applications (sensor set #5) was shown to have beneficial effects on the log-linear relationship between the concentration of quinine hydrochloride and the sensor signal. Using our proposed protocol, it is possible to implement the αAstree taste sensing system as a quality control tool in the pharmaceutical industry. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.
2018-02-01
The article is devoted to the development of Advanced Driver Assistance Systems (ADAS) for the GAZelle NEXT car. The project aims to develop a driver visual-information system integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver of a possible frontal collision; blind-zone monitoring; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure monitoring; monitoring of the driver's condition; a navigation system; and an all-round view. The layout of the sensors of the developed driver visual-information system is provided. The operation of these systems on a vehicle prototype is considered. Possible changes to the interior and dashboard of the car are given. The implementation results are aimed at better informing the driver about the environment and at developing an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle equipped with the driver visual-information system.
Concept of electro-optical sensor module for sniper detection system
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz
2010-10-01
The paper presents an initial concept of an electro-optical sensor unit for sniper detection. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. As part of a larger system it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capability. Acoustic (or radar) systems used for shot detection generally offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspicious objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters such as focal plane array size and type, focal length and aperture were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot-recording sequences are presented.
Visualizing Sound Directivity via Smartphone Sensors
NASA Astrophysics Data System (ADS)
Hawley, Scott H.; McClain, Robert E.
2018-02-01
When Yang-Hann Kim received the Rossing Prize in Acoustics Education at the 2015 meeting of the Acoustical Society of America, he stressed the importance of offering visual depictions of sound fields when teaching acoustics. Visualization methods often require specialized equipment such as microphone arrays or scanning apparatus. We present a simple method for visualizing angular dependence in sound fields, made possible by the confluence of sensors accessible through a new smartphone app that the authors have developed.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor and aligned to a reference point fixed in the robot workspace. A mathematical model is established to relate the misalignment errors to the kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained from the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
Always-on low-power optical system for skin-based touchless machine control.
Lecca, Michela; Gottardi, Massimo; Farella, Elisabetta; Milosevic, Bojan
2016-06-01
Embedded vision systems are smart energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding observed world. Thanks to these capabilities, embedded vision systems attract more and more interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect the human skin under various illuminant conditions. We employ the presented sensor as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows a hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip reducing the power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If it appears to be in the desired proximity range, the system detects the interaction and switches on/off the connected appliances. The experimental validation of the proposed system on a prototype shows that processing both distance and color remarkably improves the performance of the two separated components. This makes the system a promising tool for energy-efficient, touchless control of machines.
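A minimal sketch of the on-chip rg-chromaticity step and a toy skin classifier follows; the rg conversion is the standard normalization described in the abstract, while the box thresholds are rough literature values, not the classifier actually running on the MCU.

```python
def to_rg(r, g, b):
    """Normalize RGB to rg chromaticity, discarding overall intensity;
    this is the kind of conversion the color sensor performs on chip."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0
    return r / s, g / s

def is_skin(r, g, b, r_lo=0.36, r_hi=0.47, g_lo=0.28, g_hi=0.36):
    """Toy skin classifier: a box in rg space. The bounds are rough
    literature values, not the thresholds used by the actual system."""
    rn, gn = to_rg(r, g, b)
    return r_lo <= rn <= r_hi and g_lo <= gn <= g_hi

print(is_skin(200, 140, 110))  # skin-like tone -> True
print(is_skin(40, 90, 200))    # blue -> False
```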
Smart walking stick for blind people: an application of 3D printer
NASA Astrophysics Data System (ADS)
Ikbal, Md. Allama; Rahman, Faidur; Ali, Md. Ripon; Kabir, M. Hasnat; Furukawa, Hidemitsu
2017-04-01
A prototype smart walking stick has been designed and characterized for people who are visually impaired. The proposed system alerts visually impaired people to obstacles in front of them as well as to street hazards such as manholes while they are walking. The system was designed in two stages, hardware and software, which together make a complete prototype. Three ultrasonic sonar sensors were used to detect obstacles ahead and street-surface hazards such as manholes. The sensor transmits an ultrasonic pulse which travels toward the obstacle and back to the sensor's receiver. The distance between the sensor and the obstacle is calculated from the received signal. The calculated distance is compared with a pre-defined value to determine whether an obstacle is present. 3D CAD software was used to design the sensor holders, and an Up-Mini 3D printer was used to print them; mounted on the walking stick, they fix the sensors in the right position. Another sensor was used to detect water on the walking surface. The obstacle- and water-detection performance indicates the merit of the smart walking stick.
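A minimal sketch of the firmware's ranging logic, assuming the standard pulse-echo relation and a hypothetical 1 m alert threshold (the stick's actual pre-defined value is not stated in the abstract):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s at ~20 degrees C

def echo_distance_m(echo_time_s):
    """Pulse-echo ranging: the pulse travels to the obstacle and back,
    so the one-way distance is half the round trip times sound speed."""
    return SPEED_OF_SOUND_AIR * echo_time_s / 2.0

def obstacle_alert(echo_time_s, threshold_m=1.0):
    """Compare the measured distance with a pre-defined value, as the
    stick's firmware does; the 1 m threshold is an assumption."""
    return echo_distance_m(echo_time_s) < threshold_m

# A 4.1 ms round trip corresponds to ~0.70 m -> alert.
print(obstacle_alert(0.0041))
```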
Automated Hydrogen Gas Leak Detection System
NASA Technical Reports Server (NTRS)
1995-01-01
The Gencorp Aerojet Automated Hydrogen Gas Leak Detection System was developed through the cooperation of industry, academia, and the Government. Although the original purpose of the system was to detect leaks in the main engine of the space shuttle while on the launch pad, it also has significant commercial potential in applications for which there are no existing commercial systems. With high sensitivity, the system can detect hydrogen leaks at low concentrations in inert environments. The sensors are integrated with hardware and software to form a complete system. Several of these systems have already been purchased for use on the Ford Motor Company assembly line for natural gas vehicles. This system to detect trace hydrogen gas leaks from pressurized systems consists of a microprocessor-based control unit that operates a network of sensors. The sensors can be deployed around pipes, connectors, flanges, and tanks of pressurized systems where leaks may occur. The control unit monitors the sensors and provides the operator with a visual representation of the magnitude and locations of the leak as a function of time. The system can be customized to fit the user's needs; for example, it can monitor and display the condition of the flanges and fittings associated with the tank of a natural gas vehicle.
Underwater detection by using ultrasonic sensor
NASA Astrophysics Data System (ADS)
Bakar, S. A. A.; Ong, N. R.; Aziz, M. H. A.; Alcain, J. B.; Haimi, W. M. W. N.; Sauli, Z.
2017-09-01
This paper describes a low-cost hardware and software implementation of an ultrasonic system that reports sound feedback as a measured distance on a mobile phone and monitors detections with a real-time graph in a Java application. A single JSN-SR04T waterproof transducer was used to determine the distance to an object underwater based on the classic pulse-echo detection method. In this experiment, the system was tested by placing the housing, which consisted of an Arduino UNO, an HC-06 Bluetooth module, the ultrasonic sensor and LEDs, at the top of the box, with the transducer immersed in the water. The system, tested for detection in the vertical direction, was found capable of indicating the relative proximity of an object underwater via colored LEDs. In conclusion, the system can detect the presence of an object underwater within the range of the ultrasonic sensor, display the measured distance on the mobile phone, and successfully generate the real-time graph.
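The same pulse-echo computation must be adjusted for water, where sound travels roughly 4.3 times faster than in air; the sketch below uses a standard sound-speed figure, and the colored-LED distance bands are illustrative assumptions rather than the paper's values.

```python
SPEED_OF_SOUND_WATER = 1482.0  # m/s in fresh water at ~20 degrees C

def underwater_distance_m(echo_time_s):
    """JSN-SR04T pulse-echo ranging under water: air-calibrated firmware
    must be rescaled, since sound speed is ~4.3x its value in air."""
    return SPEED_OF_SOUND_WATER * echo_time_s / 2.0

def led_band(distance_m):
    """Map distance to a colored-LED proximity indication like the one
    described in the paper; the band edges are assumptions."""
    if distance_m < 0.5:
        return "red"
    if distance_m < 1.5:
        return "yellow"
    return "green"

d = underwater_distance_m(0.0012)   # 1.2 ms round trip -> ~0.89 m
print(f"{d:.2f} m -> {led_band(d)} LED")
```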
NASA Technical Reports Server (NTRS)
Klumpar, D. M.; Lapolla, M. V.; Horblit, B.
1995-01-01
A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct-measurement sensors in orbit. We explored the implementation of a rule-based environment for the semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable the rapid generation of visualizations that enhance the scientist's ability to thoroughly mine the data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations which provided new insight into the data.
Semantic Visualization of Wireless Sensor Networks for Elderly Monitoring
NASA Astrophysics Data System (ADS)
Stocklöw, Carsten; Kamieth, Felix
In the area of Ambient Intelligence, Wireless Sensor Networks are commonly used for user monitoring purposes like health monitoring and user localization. Existing work on the visualization of wireless sensor networks focuses mainly on displaying individual nodes and logical, graph-based topologies. This way, the relation to the real-world deployment is lost. This paper presents a novel approach for the visualization of wireless sensor networks and interaction with complex services on the nodes. The environment is realized as a 3D model, and multiple nodes worn by a single individual are grouped together to provide an intuitive interface for end users. We describe application examples and show that our approach allows easier access to network information and functionality by comparing it with existing solutions.
A novel visual-inertial monocular SLAM
NASA Astrophysics Data System (ADS)
Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo
2018-02-01
With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features derives motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach positioning accuracy at the centimeter level.
Low Frequency Radar Sensor Observations of Tropical Forests in the Panama Canal Area
NASA Technical Reports Server (NTRS)
Imhoff, M. L.; Lawrence, W.; Condit, R.; Wright, J.; Johnson, P.; Hyer, J.; May, L.; Carson, S.; Smith, David E. (Technical Monitor)
2000-01-01
A synthetic aperture radar sensor operating in 5 bands between 80 and 120 MHz was flown over forested areas in the canal zone of the Republic of Panama in an experiment to measure biomass in heavy tropical forests. The sensor is a pulse-coherent SAR flown on a small aircraft and oriented straight down. The Doppler history is processed to collect data on the ground in rectangular cells of varying size over a range of incidence angles fore and aft of nadir (+45 to -45 degrees). Sensor data consist of 5 frequency bands with 20 incidence angles per band. Sensor data were collected for more than 12 sites with forest stands having biomass densities ranging from 50 to 300 tons/ha dry above-ground biomass. Results exploring the biomass saturation thresholds at these frequencies are shown, the system design is explained, and preliminary attempts at data visualization using this unique sensor design are described.
Fluorescent sensors for the detection of chemical warfare agents.
Burnworth, Mark; Rowan, Stuart J; Weder, Christoph
2007-01-01
Along with biological and nuclear threats, chemical warfare agents are some of the most feared weapons of mass destruction. Compared to nuclear weapons they are relatively easy to access and deploy, which makes them in some aspects a greater threat to national and global security. A particularly hazardous class of chemical warfare agents are the nerve agents. Their rapid and severe effects on human health originate in their ability to block the function of acetylcholinesterase, an enzyme that is vital to the central nervous system. This article outlines recent activities regarding the development of molecular sensors that can visualize the presence of nerve agents (and related pesticides) through changes of their fluorescence properties. Three different sensing principles are discussed: enzyme-based sensors, chemically reactive sensors, and supramolecular sensors. Typical examples are presented for each class and different fluorescent sensors for the detection of chemical warfare agents are summarized and compared.
All-optical recording and stimulation of retinal neurons in vivo in retinal degeneration mice
Strazzeri, Jennifer M.; Williams, David R.; Merigan, William H.
2018-01-01
Here we demonstrate the application of a method that could accelerate the development of novel therapies by allowing direct and repeatable visualization of cellular function in the living eye, to study loss of vision in animal models of retinal disease, as well as evaluate the time course of retinal function following therapeutic intervention. We use high-resolution adaptive optics scanning light ophthalmoscopy to image fluorescence from the calcium sensor GCaMP6s. In mice with photoreceptor degeneration (rd10), we measured restored visual responses in ganglion cell layer neurons expressing the red-shifted channelrhodopsin ChrimsonR over a six-week period following significant loss of visual responses. Combining a fluorescent calcium sensor, a channelrhodopsin, and adaptive optics enables all-optical stimulation and recording of retinal neurons in the living eye. Because the retina is an accessible portal to the central nervous system, our method also provides a novel non-invasive method of dissecting neuronal processing in the brain. PMID:29596518
Data Exploration using Unsupervised Feature Extraction for Mixed Micro-Seismic Signals
NASA Astrophysics Data System (ADS)
Meyer, Matthias; Weber, Samuel; Beutel, Jan
2017-04-01
We present a system for the analysis of data originating from a multi-sensor, multi-year experiment focusing on slope stability and its underlying processes in fractured permafrost rock walls, undertaken at 3500 m a.s.l. on the Matterhorn Hörnligrat (Zermatt, Switzerland). This system incorporates facilities for the transmission, management and storage of large volumes of data (7 GB/day), preprocessing and aggregation of multiple sensor types, machine-learning-based automatic feature extraction for micro-seismic and acoustic emission data, and interactive web-based visualization of the data. Specifically, a combination of three types of sensors is used to profile the frequency spectrum from 1 Hz to 80 kHz, with the goal of identifying the relevant destructive processes (e.g. micro-cracking and fracture propagation) leading to the eventual destabilization of large rock masses. The sensors installed for this profiling experiment (2 geophones, 1 accelerometer and 2 piezo-electric sensors for detecting acoustic emission) are further augmented with sensors from a previous activity focusing on long-term monitoring of temperature evolution and rock kinematics with the help of wireless sensor networks (crackmeters, cameras, a weather station, rock temperature profiles, differential GPS) [Hasler2012]. In raw format, the data generated by the different types of sensors, specifically the micro-seismic and acoustic emission sensors, are strongly heterogeneous and in part unsynchronized, and the storage and processing demand is large. Therefore, a purpose-built signal preprocessing and event-detection system is used. While the analysis of data from each individual sensor follows established methods, the application of all these sensor types in combination within a field experiment is unique. Furthermore, experience and methods from using such sensors in laboratory settings cannot be readily transferred to the mountain field site, with its scale and full exposure to the natural environment. Consequently, many state-of-the-art algorithms for big data analysis and event classification requiring a ground-truth dataset cannot be applied. These challenges require a tool for data exploration. In the presented system, data exploration is supported by unsupervised feature learning based on convolutional neural networks, which is used to automatically extract common features for preliminary clustering and outlier detection. With this information, an interactive web tool allows fast identification of interesting time segments, on which segment-selective algorithms for visualization, feature extraction and statistics can be applied. The combination of manual labeling and unsupervised feature extraction provides an event catalog for the classification of different characteristic events related to the internal progression of micro-cracks in steep fractured bedrock permafrost. References Hasler, A., S. Gruber, and J. Beutel (2012), Kinematics of steep bedrock permafrost, J. Geophys. Res., 117, F01016, doi:10.1029/2011JF001981.
Muscle Strength Endurance Testing Development Based Photo Transistor with Motion Sensor Ultrasonic
NASA Astrophysics Data System (ADS)
Rusdiana, A.
2017-03-01
The endurance of upper-body muscles is one of the most important physical fitness components. As technology develops, testing and assessment are becoming digital; examples include sensors attached to the shoe (Foot Pod, Polar, and Suunto), the Global Positioning System (GPS) and Differential Global Positioning System (DGPS), radar, photo finish, kinematic analysis, and photocells. Those devices aim to analyze the performance and fitness of athletes, particularly the endurance of arm, chest, and shoulder muscles. This study therefore attempts to create software and hardware for pull-up testing based on a phototransistor with an ultrasonic motion sensor. The components needed to develop this device are a microcontroller MCS-51, a phototransistor, a light emitting diode, a buzzer, an ultrasonic sensor, and an infrared sensor. The infrared sensor is placed under the buffer while the ultrasonic sensor is attached to the upper pole. The components are integrated with an LED display or a laptop running software made using Visual Basic 12. The results show that the pull-up count from the digital device (mean: 9.4 reps) is lower than from manual counting (mean: 11.3 reps). This is because the digital test requires the test-takers to perform pull-ups perfectly.
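A plausible rep-counting scheme for such a device is a two-threshold (hysteresis) state machine over the ultrasonic distance stream from the top-mounted sensor; the thresholds and the simulated trace below are illustrative assumptions, not the authors' firmware.

```python
def count_pullups(distances_cm, up_cm=20.0, down_cm=40.0):
    """Count repetitions with hysteresis: a rep is scored when the athlete
    rises within `up_cm` of the top-mounted sensor and then drops back
    past `down_cm`. Two thresholds prevent double-counting jitter."""
    reps, is_up = 0, False
    for d in distances_cm:
        if not is_up and d <= up_cm:
            is_up = True            # chin reached the bar
        elif is_up and d >= down_cm:
            is_up = False           # full extension: rep complete
            reps += 1
    return reps

# Simulated distance trace for three repetitions.
trace = [60, 45, 25, 18, 30, 50, 19, 35, 55, 17, 42, 60]
print(count_pullups(trace))  # -> 3
```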
The tsunami service bus, an integration platform for heterogeneous sensor systems
NASA Astrophysics Data System (ADS)
Haener, R.; Waechter, J.; Kriegel, U.; Fleischer, J.; Mueller, S.
2009-04-01
1. INTRODUCTION Early warning systems are long-lived and evolving: new sensor systems and types may be developed and deployed, sensors will be replaced or redeployed at other locations, and the functionality of analysis software will be improved. To ensure the continuous operability of such systems, their architecture must be evolution-enabled. From a computer science point of view, an evolution-enabled architecture must fulfill the following criteria:
• Encapsulation of data, and of functionality on data, in standardized services. Access to proprietary sensor data is only possible via these services.
• Loose coupling of system constituents, which can easily be achieved by implementing standardized interfaces.
• Location transparency of services, meaning that services can be provided anywhere.
• Separation of concerns, i.e. breaking a system into distinct features which overlap in functionality as little as possible.
A Service Oriented Architecture (SOA), as realized e.g. in the German Indonesian Tsunami Early Warning System (GITEWS), and the advantages of functional integration on the basis of services described below, meet these criteria best.
2. SENSOR INTEGRATION Integration of data from (distributed) data sources is a standard task in computer science. Of the few well-known solution patterns, only functional integration should be considered, taking into account the performance and security requirements of early warning systems. A precondition is that systems are realized in compliance with SOA patterns. Functionality is realized in the form of dedicated components communicating via a service infrastructure. These components provide their functionality as services via standardized, published interfaces, which can be used to access the data maintained in, and the functionality provided by, dedicated components. Functional integration replaces tight coupling at the data level with a dependency on loosely coupled services. If the interfaces of the service-providing components remain unchanged, components can be maintained and evolved independently of each other, and service functionality as a whole can be reused. In GITEWS the functional integration pattern was adopted by applying the principles of an Enterprise Service Bus (ESB) as a backbone. Four services provided by the so-called Tsunami Service Bus (TSB), which are essential for early warning systems, are realized compliant to services specified within the Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC).
3. ARCHITECTURE The integration platform was developed to access proprietary, heterogeneous sensor data and to provide them in a uniform manner for further use. Its core, the TSB, provides both a messaging backbone and messaging interfaces on the basis of a Java Messaging Service (JMS). The logical architecture of GITEWS consists of four independent layers:
• A resource layer, where physical or virtual sensors as well as data or model storages provide relevant measurement, event and analysis data. The TSB can utilize any kind of data; in addition to sensors, databases, model data and processing applications are adopted. SWE specifies encodings both to access and to describe these data in a comprehensive way: 1. Sensor Model Language (SensorML): standardized description of sensors and sensor data; 2. Observations and Measurements (O&M): model and encoding of sensor measurements.
• A service layer to collect and conduct data from heterogeneous and proprietary resources and provide them via standardized interfaces. The TSB enables interaction with sensors via the following services: 1. Sensor Observation Service (SOS): standardized access to sensor data; 2. Sensor Planning Service (SPS): controlling of sensors and sensor networks; 3. Sensor Alert Service (SAS): active sending of data if defined events occur; 4. Web Notification Service (WNS): conduction of asynchronous dialogues between services.
• An orchestration layer, where atomic services are composed and arranged into high-level processes such as a decision support process. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones, which can be done programmatically or via declaration (workflow or process design). This allows e.g. the definition of new warning processes which can be adapted easily to new requirements.
• An access layer, which may contain graphical user interfaces for decision support, monitoring or visualization systems. To visualize time series, for example, graphical user interfaces request sensor data simply via the SOS.
4. BENEFIT The integration platform is realized on top of well-known and widely used open source software implementing industrial standards. New sensors can be added easily to the infrastructure. Client components do not need to be adjusted when new sensor types or individuals are added to the system, because they access the sensors via standardized services. With SWE implemented fully compatible with the OGC specification, it is possible to establish the "detection" and integration of sensors via the Web. Thus a system of systems that combines early warning system functionality at different levels of detail (distant early warning systems, monitoring systems and any sensor system) is feasible.
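For a flavor of how a client consumes such a service layer, the sketch below issues a standard SOS 1.0 GetObservation request over HTTP key-value pairs. The endpoint URL, offering name, and observed property are hypothetical placeholders; the actual GITEWS/TSB endpoints are not given in the text.

```python
import requests

# Hypothetical SOS endpoint; the TSB services are not publicly listed here.
SOS_URL = "https://example.org/tsb/sos"

params = {
    "service": "SOS",
    "version": "1.0.0",
    "request": "GetObservation",
    "offering": "SEA_LEVEL",            # assumed offering name
    "observedProperty": "waterLevel",   # assumed observed property
    "eventTime": "2009-01-01T00:00:00Z/2009-01-02T00:00:00Z",
    "responseFormat": 'text/xml;subtype="om/1.0.0"',
}

resp = requests.get(SOS_URL, params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # O&M-encoded observations, per the SWE encodings above
```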
In-situ characterization of wildland fire behavior
Bret Butler; D. Jimenez; J. Forthofer; Paul Sopko; K. Shannon; Jim Reardon
2010-01-01
A system consisting of two enclosures has been developed to characterize wildand fire behavior: The first enclosure is a sensor/data logger combination that measures and records convective/radiant energy released by the fire. The second is a digital video camera housed in a fire proof enclosure that records visual images of fire behavior. Together this system provides...
Vision-based sensing for autonomous in-flight refueling
NASA Astrophysics Data System (ADS)
Scott, D.; Toal, M.; Dale, J.
2007-04-01
A significant capability of unmanned airborne vehicles (UAVs) is that they can operate tirelessly and at maximum efficiency in comparison to their human-piloted counterparts. However, a major factor preventing ultra-long endurance missions is the need to land for refueling. Development effort has been directed at allowing UAVs to refuel automatically in the air using current refueling systems and procedures. The 'hose & drogue' refueling system was targeted as it is considered the more difficult case. Recent flight trials resulted in the first-ever fully autonomous airborne refueling operation. Development has gone into precision GPS-based navigation sensors to maneuver the aircraft into the station-keeping position and onwards to dock with the refueling drogue. However, in the terminal phases of docking, the GPS is operating at its performance limit, and disturbances acting on the flexible hose and basket are not predictable with an open-loop model. Hence there is significant uncertainty in the position of the refueling drogue relative to the aircraft, which in practical operation is insufficient for a successful and safe docking. A solution is to augment the GPS-based system with a vision-based sensor through the terminal phase to visually acquire and track the drogue in 3D space. The higher bandwidth and resolution of camera sensors give significantly better estimates of the drogue position. Disturbances in the actual drogue position caused by subtle aircraft maneuvers and wind gusts can be visually tracked and compensated for, providing an accurate estimate. This paper discusses the issues involved in visually detecting a refueling drogue, selecting an optimum camera viewpoint, and acquiring and tracking the drogue throughout widely varying operating ranges and conditions.
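One plausible terminal-phase pipeline pairs a circular-feature detector with a simple tracker that smooths measurements and coasts through dropouts. The sketch below uses OpenCV's Hough circle transform and a constant-velocity alpha-beta filter; the detector parameters and filter gains are illustrative assumptions, not the system described in the paper.

```python
import cv2
import numpy as np

def detect_drogue(gray):
    """Find a circular drogue-basket candidate with a Hough transform;
    radius bounds would in practice come from camera geometry and range."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=120, param2=40,
                               minRadius=15, maxRadius=120)
    if circles is None:
        return None
    x, y, r = max(circles[0], key=lambda c: c[2])  # take the largest
    return float(x), float(y)

class AlphaBetaTracker:
    """Constant-velocity alpha-beta filter: a simple stand-in estimator
    that smooths drogue measurements and predicts through dropouts."""
    def __init__(self, alpha=0.5, beta=0.1):
        self.alpha, self.beta = alpha, beta
        self.pos = self.vel = None

    def update(self, meas, dt=1 / 30):
        if self.pos is None:
            self.pos = np.array(meas, float)
            self.vel = np.zeros(2)
            return self.pos
        pred = self.pos + self.vel * dt
        if meas is None:                      # detection dropout: coast
            self.pos = pred
        else:
            resid = np.array(meas, float) - pred
            self.pos = pred + self.alpha * resid
            self.vel = self.vel + (self.beta / dt) * resid
        return self.pos
```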
1992-01-01
results in stimulation of spatial-motion-location visual processes, which are known to take precedence over any other sensory or cognitive stimuli. In ... or version he is flying. This was initially an observation that stimulated the birth of the human-factors engineering discipline during World War II ... collisions with the surface, the pilot needs inputs to sensory channels other than the focal visual system. Properly designed auditory and ...
Multisensory architectures for action-oriented perception
NASA Astrophysics Data System (ADS)
Alba, L.; Arena, P.; De Fiore, S.; Listán, J.; Patané, L.; Salem, A.; Scordino, G.; Webb, B.
2007-05-01
In order to solve the navigation problem of a mobile robot in an unstructured environment a versatile sensory system and efficient locomotion control algorithms are necessary. In this paper an innovative sensory system for action-oriented perception applied to a legged robot is presented. An important problem we address is how to utilize a large variety and number of sensors, while having systems that can operate in real time. Our solution is to use sensory systems that incorporate analog and parallel processing, inspired by biological systems, to reduce the required data exchange with the motor control layer. In particular, as concerns the visual system, we use the Eye-RIS v1.1 board made by Anafocus, which is based on a fully parallel mixed-signal array sensor-processor chip. The hearing sensor is inspired by the cricket hearing system and allows efficient localization of a specific sound source with a very simple analog circuit. Our robot utilizes additional sensors for touch, posture, load, distance, and heading, and thus requires customized and parallel processing for concurrent acquisition. Therefore a Field Programmable Gate Array (FPGA) based hardware was used to manage the multi-sensory acquisition and processing. This choice was made because FPGAs permit the implementation of customized digital logic blocks that can operate in parallel allowing the sensors to be driven simultaneously. With this approach the multi-sensory architecture proposed can achieve real time capabilities.
Remote surface inspection system
NASA Astrophysics Data System (ADS)
Hayati, S.; Balaram, J.; Seraji, H.; Kim, W. S.; Tso, K.; Prasad, V.
1993-02-01
This paper reports on an on-going research and development effort in remote surface inspection of space platforms such as the Space Station Freedom (SSF). It describes the space environment and identifies the types of damage for which to search. This paper provides an overview of the Remote Surface Inspection System that was developed to conduct proof-of-concept demonstrations and to perform experiments in a laboratory environment. Specifically, the paper describes three technology areas: (1) manipulator control for sensor placement; (2) automated non-contact inspection to detect and classify flaws; and (3) an operator interface to command the system interactively and receive raw or processed sensor data. Initial findings for the automated and human visual inspection tests are reported.
Remote surface inspection system
NASA Technical Reports Server (NTRS)
Hayati, S.; Balaram, J.; Seraji, H.; Kim, W. S.; Tso, K.; Prasad, V.
1993-01-01
This paper reports on an on-going research and development effort in remote surface inspection of space platforms such as the Space Station Freedom (SSF). It describes the space environment and identifies the types of damage for which to search. This paper provides an overview of the Remote Surface Inspection System that was developed to conduct proof-of-concept demonstrations and to perform experiments in a laboratory environment. Specifically, the paper describes three technology areas: (1) manipulator control for sensor placement; (2) automated non-contact inspection to detect and classify flaws; and (3) an operator interface to command the system interactively and receive raw or processed sensor data. Initial findings for the automated and human visual inspection tests are reported.
Civil infrastructure monitoring for IVHS using optical fiber sensors
NASA Astrophysics Data System (ADS)
de Vries, Marten J.; Arya, Vivek; Grinder, C. R.; Murphy, Kent A.; Claus, Richard O.
1995-01-01
Early deployment of Intelligent Vehicle Highway Systems would necessitate the internal instrumentation of infrastructure for emergency preparedness. Existing quantitative and visual analysis techniques are time-consuming, cost-prohibitive, and often unreliable. Fiber optic sensors are rapidly replacing conventional instrumentation because of their small size, light weight, immunity to electromagnetic interference, and extremely high information-carrying capability. In this paper, research on novel optical fiber sensing techniques for health monitoring of civil infrastructure such as highways and bridges is reported. The design, fabrication, and implementation of fiber optic sensor configurations used for strain measurement are discussed. Results from field tests conducted to demonstrate the effectiveness of fiber sensors at determining quantitative strain vector components near crack locations in bridges are presented. Emerging applications of fiber sensors for vehicle flow, vehicle speed, and weigh-in-motion measurements are also discussed.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, the choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
Yang, Kailun; Wang, Kaiwei; Hu, Weijian; Bai, Jian
2016-01-01
The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a sparse depth map at distance, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. Traversable area is obtained preliminarily with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded growing region algorithm, combining the depth image and RGB image, greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach has been proven useful and reliable in a field test with eight visually impaired volunteers. PMID:27879634
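The preliminary ground-segmentation step (a RANSAC plane fit followed by a surface-normal check) can be sketched with Open3D as follows; the distance threshold, tilt limit, and the assumption that y is the up axis are illustrative, not the paper's settings.

```python
import numpy as np
import open3d as o3d

def traversable_inliers(points, dist_thresh=0.03, max_tilt_deg=15.0):
    """RANSAC plane segmentation on an (N, 3) point cloud, keeping the
    plane only if its normal is near-vertical (i.e., plausibly ground)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    (a, b, c, d), inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                              ransac_n=3,
                                              num_iterations=1000)
    normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
    tilt = np.degrees(np.arccos(abs(normal[1])))  # assumes y is "up"
    return inliers if tilt <= max_tilt_deg else []
```

The inlier set found this way would then seed the region-growing stage that expands the traversable area using the registered RGB image.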
NASA Astrophysics Data System (ADS)
Han, Mengdi; Zhang, Xiao-Sheng; Sun, Xuming; Meng, Bo; Liu, Wen; Zhang, Haixia
2014-04-01
The triboelectric nanogenerator (TENG) is a promising device in energy harvesting and self-powered sensing. In this work, we demonstrate a magnetic-assisted TENG, utilizing the magnetic force for electric generation. Maximum power density of 541.1 mW/m2 is obtained at 16.67 MΩ for the triboelectric part, while the electromagnetic part can provide power density of 649.4 mW/m2 at 16 Ω. Through theoretical calculation and experimental measurement, linear relationship between the tilt angle and output voltage at large angles is observed. On this basis, a self-powered omnidirectional tilt sensor is realized by two magnetic-assisted TENGs, which can measure the magnitude and direction of the tilt angle at the same time. For visualized sensing of the tilt angle, a sensing system is established, which is portable, intuitive, and self-powered. This visualized system greatly simplifies the measure process, and promotes the development of self-powered systems.
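A sketch of how two orthogonal TENG outputs could be combined into tilt magnitude and direction, using the reported linear voltage-angle relationship at large angles; the calibration slope is an assumed placeholder, not a value from the paper.

```python
import math

def tilt_from_voltages(v_x, v_y, k_volts_per_deg=0.05):
    """Recover tilt magnitude and direction from two orthogonal
    magnetic-assisted TENGs. The direction is the azimuth of the tilt;
    the magnitude uses an assumed linear calibration slope k."""
    direction = math.degrees(math.atan2(v_y, v_x))      # tilt azimuth
    magnitude = math.hypot(v_x, v_y) / k_volts_per_deg  # linear calibration
    return magnitude, direction

mag, ang = tilt_from_voltages(0.6, 0.35)
print(f"tilt ~ {mag:.1f} deg toward azimuth {ang:.1f} deg")
```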
NASA Astrophysics Data System (ADS)
Dalphond, James M.
In modern classrooms, scientific probes are often used in science labs to engage students in inquiry-based learning. Many of these probes will never leave the classroom, closing the door on real world experimentation that may engage students. Also, these tools do not encourage students to share data across classrooms or schools. To address these limitations, we have developed a web-based system for collecting, storing, and visualizing sensor data, as well as a hardware package to interface existing classroom probes. This system, The Internet System for Networked Sensor Experimentation (iSENSE), was created to address these limitations. Development of the system began in 2007 and has proceeded through four phases: proof-of-concept prototype, technology demonstration, initial classroom deployment, and classroom testing. User testing and feedback during these phases guided development of the system. This thesis includes lessons learned during development and evaluation of the system in the hands of teachers and students. We developed three evaluations of this practical use. The first evaluation involved working closely with teachers to encourage them to integrate activities using the iSENSE system into their existing curriculum. We were looking for strengths of the approach and ease of integration. Second, we developed three "Activity Labs," which teachers used as embedded assessments. In these activities, students were asked to answer questions based on experiments or visualizations already entered into the iSENSE website. Lastly, teachers were interviewed after using the system to determine what they found valuable. This thesis makes contributions in two areas. It shows how an iterative design process was used to develop a system used in a science classroom, and it presents an analysis of the educational impact of the system on teachers and students.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based in a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
Computer retina that models the primate retina
NASA Astrophysics Data System (ADS)
Shah, Samir; Levine, Martin D.
1994-06-01
At the retinal level, the strategies utilized by biological visual systems allow them to outperform machine vision systems, serving to motivate the design of electronic or 'smart' sensors based on similar principles. The design of such sensors in silicon first requires a model of retinal information processing which captures the essential features exhibited by biological retinas. In this paper, a simple retinal model is presented which qualitatively accounts for the achromatic information processing in the primate cone system. The model exhibits many of the properties found in biological retinas, such as data reduction through nonuniform sampling, adaptation to a large dynamic range of illumination levels, variation of visual acuity with illumination level, and enhancement of spatio-temporal contrast information. The model is validated by replicating experiments commonly performed by electrophysiologists on biological retinas and comparing the response of the computer retina to data from experiments in monkeys. In addition, the response of the model to synthetic images is shown. The experiments demonstrate that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an 'artificial retina.'
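The contrast-enhancement stage of such retina models is commonly a center-surround (difference-of-Gaussians) operator with local luminance normalization standing in for adaptation; a minimal sketch under that assumption follows, with all parameters illustrative rather than the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_response(image, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround stage of a simple retina model: a difference of
    Gaussians enhances spatial contrast the way ganglion-cell receptive
    fields do, and division by a local mean mimics adaptation to the
    ambient illumination level."""
    img = image.astype(float)
    center = gaussian_filter(img, sigma_center)
    surround = gaussian_filter(img, sigma_surround)
    local_mean = gaussian_filter(img, 4 * sigma_surround) + 1e-6
    return (center - surround) / local_mean  # adapted contrast signal
```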
A user-friendly SSVEP-based brain-computer interface using a time-domain classifier.
Luo, An; Sullivan, Thomas J
2010-04-01
We introduce a user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system. Single-channel EEG is recorded using a low-noise dry electrode. Compared to traditional gel-based multi-sensor EEG systems, a dry sensor proves to be more convenient, comfortable and cost-effective. A hardware system was built that displays four LED light panels flashing at different frequencies and synchronizes with EEG acquisition. The visual stimuli have been carefully designed so that potential risk to photosensitive people is minimized. We describe a novel stimulus-locked inter-trace correlation (SLIC) method for SSVEP classification using EEG time-locked to stimulus onsets. We studied how the performance of the algorithm is affected by the selection of its parameters. Using the SLIC method, the average light detection rate is 75.8% with very low error rates (an 8.4% false positive rate and a 1.3% misclassification rate). Compared to a traditional frequency-domain-based method, the SLIC method is more robust (resulting in less annoyance to the users) and is also suitable for irregular stimulus patterns.
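The idea of a stimulus-locked inter-trace correlation can be sketched as follows (a minimal sketch, not the published SLIC algorithm; the epoch length `win` is an assumption): EEG epochs time-locked to the onsets of one light panel should correlate strongly with each other only if the user is attending that panel.

```python
import numpy as np

def inter_trace_correlation(eeg, onsets, fs, win=0.5):
    """Mean pairwise correlation of EEG epochs locked to stimulus onsets.

    eeg: 1D single-channel signal; onsets: sample indices of stimulus
    onsets; fs: sampling rate (Hz); win: epoch length in seconds."""
    n = int(win * fs)
    traces = np.array([eeg[o:o + n] for o in onsets if o + n <= len(eeg)])
    if len(traces) < 2:
        return 0.0
    traces = traces - traces.mean(axis=1, keepdims=True)
    traces = traces / (np.linalg.norm(traces, axis=1, keepdims=True) + 1e-12)
    corr = traces @ traces.T                  # all pairwise correlations
    iu = np.triu_indices(len(traces), k=1)    # upper triangle, no diagonal
    return float(corr[iu].mean())
```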
DOT National Transportation Integrated Search
2016-12-01
DRIVE Net is a region-wide, Web-based transportation decision support system that adopts digital roadway maps as the base, and provides data layers for integrating and analyzing a variety of data sources (e.g., traffic sensors, incident records)....
A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor
Tayara, Hilal; Ham, Woonchul; Chong, Kil To
2016-01-01
This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for the post-video-processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions were approximated using Taylor series and cubic approximation using Lagrange polynomials, and an inverse square root method was implemented for approximating square root computations. Real-time results were achieved, with pixel streams processed on the fly without any need to buffer the input frame. PMID:27983714
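The polynomial approximations mentioned are standard; as a plain floating-point reference (the paper's versions run on the Nios II soft core with hardware acceleration, so this only illustrates the math), a truncated Taylor series and a cubic Lagrange interpolant of sin(x) look like:

```python
import numpy as np

def sin_taylor(x):
    """Degree-7 truncated Taylor series of sin(x) about 0."""
    return x - x**3 / 6.0 + x**5 / 120.0 - x**7 / 5040.0

def sin_lagrange_cubic(x, lo=0.0, hi=np.pi / 2):
    """Cubic Lagrange interpolation of sin through 4 nodes on [lo, hi]."""
    nodes = np.linspace(lo, hi, 4)
    values = np.sin(nodes)            # in practice precomputed once
    result = 0.0
    for i in range(4):
        term = values[i]
        for j in range(4):
            if j != i:
                term = term * (x - nodes[j]) / (nodes[i] - nodes[j])
        result = result + term
    return result
```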
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws, and is designed to observe, grasp, walk, roll, turn, rise, and decline. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus was chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networking in the 700 MHz band. An expert system programmed in Visual C++ was developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can execute the navigation and inspection tasks.
Sharmin, Moushumi; Raij, Andrew; Epstien, David; Nahum-Shani, Inbal; Beck, J. Gayle; Vhaduri, Sudip; Preston, Kenzie; Kumar, Santosh
2015-01-01
We investigate needs, challenges, and opportunities in visualizing time-series sensor data on stress to inform the design of just-in-time adaptive interventions (JITAIs). We identify seven key challenges: massive volume and variety of data, complexity in identifying stressors, scalability of space, multifaceted relationship between stress and time, a need for representation at multiple granularities, interperson variability, and limited understanding of JITAI design requirements due to its novelty. We propose four new visualizations based on one million minutes of sensor data (n=70). We evaluate our visualizations with stress researchers (n=6) to gain first insights into its usability and usefulness in JITAI design. Our results indicate that spatio-temporal visualizations help identify and explain between- and within-person variability in stress patterns and contextual visualizations enable decisions regarding the timing, content, and modality of intervention. Interestingly, a granular representation is considered informative but noise-prone; an abstract representation is the preferred starting point for designing JITAIs. PMID:26539566
NASA Astrophysics Data System (ADS)
Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.
2007-12-01
The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.
Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing
Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge
2011-01-01
This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor running a standalone system that manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739
Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects
NASA Astrophysics Data System (ADS)
Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.
2008-02-01
In this paper the development of a solid-state-sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced to cover an area of approximately 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP control units and streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium; it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or current-carrying geometries. Although the system was initially developed for the NDE arena, it also has potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.
Millimeter-wave imaging sensor data evaluation
NASA Technical Reports Server (NTRS)
Wilson, William J.; Ibbott, Anthony C.
1987-01-01
A passive 3-mm radiometer system with a mechanically scanned antenna was built for use on a small aircraft or an Unmanned Aerial Vehicle to produce near-real-time, moderate-resolution (0.5) images of the ground. One of the main advantages of this passive imaging sensor is that it is able to provide surveillance information through dust, smoke, fog and clouds when visual and IR systems are unusable. It can also be used for a variety of remote sensing applications, such as measurements of surface moisture, surface temperature, vegetation extent and snow cover. It is also possible to detect reflective objects under vegetation cover.
Beyond the cockpit: The visual world as a flight instrument
NASA Technical Reports Server (NTRS)
Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.
1992-01-01
The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).
Dynamic reweighting of three modalities for sensor fusion.
Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J
2014-01-01
We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.
Hybrid wireless sensor network for rescue site monitoring after earthquake
NASA Astrophysics Data System (ADS)
Wang, Rui; Wang, Shuo; Tang, Chong; Zhao, Xiaoguang; Hu, Weijian; Tan, Min; Gao, Bowei
2016-07-01
This paper addresses the design of a low-cost, low-complexity, and rapidly deployable wireless sensor network (WSN) for rescue site monitoring after earthquakes. The system structure of the hybrid WSN is described. Specifically, the proposed hybrid WSN consists of two kinds of wireless nodes, i.e., the monitor node and the sensor node, whose mechanisms and system configurations are detailed. A transmission control protocol (TCP)-based request-response scheme is proposed to allow several monitor nodes to communicate with the monitoring center. UDP-based image transmission algorithms with fast recovery have been developed to meet the requirement of in-time delivery of on-site monitor images. In addition, the monitor node contains a ZigBee module that is used to communicate with the sensor nodes, which are designed with small dimensions to monitor the environment by sensing different physical properties in narrow spaces. By building a WSN using these wireless nodes, the monitoring center can display real-time monitor images of the monitored area and visualize all collected sensor data on geographic information systems. Finally, field experiments were performed at the Training Base of Emergency Seismic Rescue Troops of China, and the experimental results demonstrate the feasibility and effectiveness of the monitoring system.
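The UDP image-transfer idea — numbered datagrams so the receiver can detect gaps and re-request lost chunks — can be sketched as follows (the header layout, chunk size, and function names are assumptions, not the authors' protocol):

```python
import socket
import struct

CHUNK = 1024  # payload bytes per datagram (assumed)

def send_image(data: bytes, addr):
    """Send an image as (seq, total)-numbered UDP chunks."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    total = (len(data) + CHUNK - 1) // CHUNK
    for seq in range(total):
        payload = data[seq * CHUNK:(seq + 1) * CHUNK]
        # 8-byte header: sequence number and total count, network order.
        sock.sendto(struct.pack("!II", seq, total) + payload, addr)
    sock.close()

def missing_chunks(received: dict, total: int):
    """Sequence numbers the receiver still needs to re-request."""
    return [seq for seq in range(total) if seq not in received]
```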
Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.; Long, Kurtis R.
2005-01-01
Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.
NASA Technical Reports Server (NTRS)
Poulton, C. E.
1975-01-01
Comparative statistics were presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetation and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, emphasizing the visual interpretation mode in the investigation. A hierarchical legend system was used as the basic classification of all land surface features. Comparative tests were run on image identifiability with the different sensor systems, and mapping and interpretation tests were made both in monocular and stereo interpretation with all systems except the S-192. Significant advantage was found in the use of stereo from space when image analysis is performed by visual or visual-machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.
Assessment of a visually guided autonomous exploration robot
NASA Astrophysics Data System (ADS)
Harris, C.; Evans, R.; Tidey, E.
2008-10-01
A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
50 CFR 218.125 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
... observers maintained visual contact with marine mammal(s); (H) Wave height (ft); (I) Visibility; (J) Sonar..., low, and average during exercise); and (I) Narrative description of sensors and platforms utilized for...) Calves observed (y/n); (E) Initial detection sensor; (F) Length of time observers maintained visual...
Mafrica, Stefano; Servel, Alain; Ruffier, Franck
2016-11-10
Here we present a novel bio-inspired optic flow (OF) sensor and its application to visual guidance and odometry on a low-cost car-like robot called BioCarBot. The minimalistic OF sensor was robust to high-dynamic-range lighting conditions and to the various visual patterns encountered, thanks to its M2APIX auto-adaptive pixels and the new cross-correlation OF algorithm implemented. The low-cost car-like robot estimated its velocity and steering angle, and therefore its position and orientation, via an extended Kalman filter (EKF) using only two downward-facing OF sensors and the Ackerman steering model. Indoor and outdoor experiments were carried out in which the robot was driven in closed-loop mode based on the velocity and steering angle estimates. The experimental results obtained show that our novel OF sensor can deliver high-frequency measurements ([Formula: see text]) in a wide OF range (1.5-[Formula: see text]) and in a 7-decade high-dynamic light level range. The OF resolution was constant and could be adjusted as required (up to [Formula: see text]), and the OF precision obtained was relatively high (standard deviation of [Formula: see text] with an average OF of [Formula: see text], under the most demanding lighting conditions). An EKF-based algorithm gave the robot's position and orientation with relatively high accuracy (maximum errors outdoors at a very low light level: [Formula: see text] and [Formula: see text] over about [Formula: see text] and [Formula: see text]) despite the low-resolution control systems of the steering servo and the DC motor, as well as a simplified model identification and calibration. Finally, the minimalistic OF-based odometry results were compared to those obtained using measurements based on an inertial measurement unit (IMU) and a motor speed sensor.
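A cross-correlation optic-flow measurement from two neighboring pixels can be sketched like this (illustrative only; the sensor's actual algorithm, thresholds, and sampling details differ): the delay at which the two photoreceptor signals correlate best, divided into the inter-pixel viewing angle, gives the angular velocity.

```python
import numpy as np

def optic_flow_xcorr(sig_a, sig_b, fs, dphi_deg):
    """Optic flow (deg/s) from two adjacent photoreceptor signals.

    fs: sampling rate (Hz); dphi_deg: angular separation of the two
    pixels' viewing directions."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)  # samples by which b trails a
    if lag <= 0:
        return None                            # no motion in the a->b direction
    return dphi_deg / (lag / fs)
```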
A navigation system for the visually impaired using colored navigation lines and RFID tags.
Seto, Tatsuya
2009-01-01
In this paper, we describe a navigation system we developed that supports independent walking by the visually impaired in indoor spaces. Our instrument consists of a navigation system and a map information system, both installed on a white cane. The navigation system can follow a colored navigation line set on the floor: a color sensor installed on the tip of the white cane senses the colored navigation line, and the system informs the visually impaired user by vibration that he/she is walking along the navigation line. The color recognition system is controlled by a one-chip microprocessor and can discriminate 6 colored navigation lines. RFID tags and a receiver for these tags are used in the map information system; the tags and the RFID tag receiver are also installed on the white cane. The receiver receives tag information and announces map information to the user by mp3-formatted pre-recorded voice. Three normal subjects, blindfolded with an eye mask, tested the system. All of them were able to walk along the navigation line, and the performance of the map information system was good. Therefore, our system will be extremely valuable in supporting the activities of the visually impaired.
NASA Astrophysics Data System (ADS)
Welch, Sharon S.
Topics discussed in this volume include aircraft guidance and navigation, optics for visual guidance of aircraft, spacecraft and missile guidance and navigation, lidar and ladar systems, microdevices, gyroscopes, cockpit displays, and automotive displays. Papers are presented on optical processing for range and attitude determination, aircraft collision avoidance using a statistical decision theory, a scanning laser aircraft surveillance system for carrier flight operations, star sensor simulation for astroinertial guidance and navigation, autonomous millimeter-wave radar guidance systems, and a 1.32-micron long-range solid state imaging ladar. Attention is also given to a microfabricated magnetometer using Young's modulus changes in magnetoelastic materials, an integrated microgyroscope, a pulsed diode ring laser gyroscope, self-scanned polysilicon active-matrix liquid-crystal displays, the history and development of coated contrast enhancement filters for cockpit displays, and the effect of the display configuration on the attentional sampling performance. (For individual items see A93-28152 to A93-28176, A93-28178 to A93-28180)
Development of a Personal Integrated Environmental Monitoring System
Wong, Man Sing; Yip, Tsan Pong; Mok, Esmond
2014-01-01
Environmental pollution in the urban areas of Hong Kong has become a serious public issue, but most urban inhabitants have no means of judging their own living environment in terms of danger thresholds and overall livability. There now exist many low-cost sensors, such as ultraviolet, temperature and air quality sensors, that provide reasonably accurate data. In this paper, the development and evaluation of the Integrated Environmental Monitoring System (IEMS) are presented. This system consists of three components: (i) position determination and sensor data collection for real-time geospatial-based environmental monitoring; (ii) on-site data communication and visualization with the aid of an Android-based application; and (iii) data analysis on a web server. The system was shown to work well during field tests on a bus journey and at a construction site. It provides an effective service platform for collecting environmental data in near real-time, and raises public awareness of environmental quality in micro-environments. PMID:25420154
Real-time contaminant sensing and control in civil infrastructure systems
NASA Astrophysics Data System (ADS)
Rimer, Sara; Katopodes, Nikolaos
2014-11-01
A laboratory-scale prototype has been designed and implemented to test the feasibility of real-time contaminant sensing and control in civil infrastructure systems. A blower wind tunnel is the basis of the prototype design, with propylene glycol smoke as the "contaminant." A camera sensor and a compressed-air vacuum nozzle system are set up at the test section of the prototype to visually sense and then control the contaminant; a real-time controller is programmed to read data from the camera sensor and administer pressure to the regulators controlling the compressed air operating the vacuum nozzles. A computational fluid dynamics model is being integrated with this prototype to determine the correct pressure to supply to the regulators in order to optimally remove the contaminant from the prototype. The performance of the prototype has been evaluated against the computational fluid dynamics model and is discussed in this presentation, along with the initial performance of the sensor-control system implemented in the test section. NSF-CMMI 0856438.
NASA Astrophysics Data System (ADS)
Na, Jeong K.; Kuhr, Samuel J.; Jata, Kumar V.
2008-03-01
Thermal Protection Systems (TPS) can be subjected to impact damage during flight, ground maintenance, and/or repair. AFRL/RXLP is developing a reliable and robust on-board sensing/monitoring capability for next-generation thermal protection systems to detect and assess impact damage. This study focused on two classes of metallic thermal protection tiles, with the aims of determining the threshold for impact damage and developing a capability for sensing impacts. Sensors made of PVDF piezoelectric film were adhered to the back of the specimens and tested to evaluate the detectability of impact signals and assess the onset, or threshold, of impact damage. Testing was performed over a range of impact energy levels. The PVDF signal levels were analyzed and compared to assess damage, with digital microscopy, visual inspection, and white light interferometry used for damage verification. Based on the impact test results, an assessment of the impact damage threshold for each type of metallic TPS system was made.
Dioptric defocus maps across the visual field for different indoor environments.
García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried
2018-01-01
One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal may arise from defocus not only in the fovea but across the whole visual field. Therefore, myopia might be better predicted by spatio-temporally mapping the 'environmental defocus' over the visual field. At present, no devices are available that could provide this information. A 'Kinect sensor v1' camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying 'indoor defocus error signals' across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field, and 'defocus maps' were generated for various scenes and tasks.
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-12-22
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must handle aggressive 6-DOF (Degree Of Freedom) motion; (2) it must be robust to intermittent GPS (Global Positioning System) situations (even GPS-denied ones); and (3) it must work well in both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, which uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight.
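The loosely coupled fusion of absolute and relative measurements can be illustrated with a deliberately reduced sketch (single axis, no stochastic cloning, invented noise values — far simpler than the estimator described here): IMU acceleration drives the prediction, and GPS or barometric readings arrive as absolute position updates.

```python
import numpy as np

class LooselyCoupledEKF:
    """Toy 1-axis EKF with state x = [position, velocity]."""

    def __init__(self, dt=0.01, q=0.1):
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.B = np.array([0.5 * dt**2, dt])          # acceleration input
        self.Q = q * np.eye(2)                        # process noise (assumed)

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update_position(self, z, r=1.0):
        """Absolute update from GPS position or barometric altitude."""
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + r
        K = self.P @ H.T / S
        self.x = self.x + (K * (z - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```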
Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco
2007-10-15
Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental environment of the arena. In order to provide user-independent, reliable results and versatile control of these devices, it is vital to use an automated control system. Commercial systems for controlling animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: open-source Visual Basic software that permits a Windows-based computer to run fully automated behavioral experiments. OpenControl integrates video-tracking of the animal, definition of zones from the video signal for real-time assignment of the animal's position in the maze, control of the maze actuators from either hardware sensors or the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive Firewire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen to allow experimenters to easily adapt the code and expand it to their own needs.
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low-bit-rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks applied to surveillance, battlefield and habitat monitoring, etc., is presented, where voluminous amounts of image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy-efficient and extremely fast, since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments were performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by a conventional DCT, and the algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, making it suitable for embedded systems requiring low power consumption. The proposed scheme significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required by existing algorithms.
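A zonal DCT that computes only the first few low-frequency coefficients of an 8×8 block, instead of all 64, can be sketched as follows (floating-point for readability; the paper's zonal binary DCT avoids floating-point operations entirely, so this only illustrates the zonal idea):

```python
import numpy as np

def zonal_dct_8x8(block, keep=10):
    """Return only the `keep` lowest-frequency DCT-II coefficients,
    visited in diagonal (zig-zag-like) order."""
    N = 8
    order = sorted(((u, v) for u in range(N) for v in range(N)),
                   key=lambda t: (t[0] + t[1], t))
    x = np.arange(N)
    coeffs = {}
    for u, v in order[:keep]:
        cu = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
        cv = np.sqrt(1.0 / N) if v == 0 else np.sqrt(2.0 / N)
        basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * N)),
                         np.cos((2 * x + 1) * v * np.pi / (2 * N)))
        coeffs[(u, v)] = cu * cv * float(np.sum(block * basis))
    return coeffs
```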
Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data
Li, Xun; Wang, Jinling; Li, Tao
2013-01-01
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input to match against geo-referenced images for an image-based position solution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as urban canyons), and it also provides excellent position accuracy for indoor environments. PMID:23857267
Towards a social and context-aware multi-sensor fall detection and risk assessment platform.
De Backere, F; Ongenae, F; Van den Abeele, F; Nelis, J; Bonte, P; Clement, E; Philpott, M; Hoebeke, J; Verstichel, S; Ackaert, A; De Turck, F
2015-09-01
For elderly people, fall incidents are life-changing events that lead to degradation or even loss of autonomy. Current fall detection systems are not integrated and are often associated with undetected falls and/or false alarms. In this paper, a social- and context-aware multi-sensor platform is presented, which integrates information gathered by a plethora of fall detection systems and sensors at the home of the elderly person, using a cloud-based solution built on an ontology. Within the ontology, both static and dynamic information is captured to model the situation of a specific patient and his/her (in)formal caregivers. This integrated contextual information makes it possible to automatically and continuously assess the fall risk of the elderly, to detect falls more accurately and identify false alarms, and to automatically notify the appropriate caregiver, e.g., based on location or current task. The main advantage of the proposed platform is that multiple fall detection systems and sensors can be easily plugged in, based on the specific needs of the patient. The combination of several systems and sensors leads to a more reliable system with better accuracy. The proof of concept was tested using the visualizer, which enables better analysis of the data flow within the back-end, and the portable testbed, which is equipped with several different sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Yi; Jiang, Huaiguang; Zhang, Yingchen
In this paper, a big data visualization platform is designed to discover hidden useful knowledge for smart grid (SG) operation, control and situation awareness. The spawn of smart sensors at both the grid side and the customer side can provide large volumes of heterogeneous data that collect information across all time spectrums. Extracting useful knowledge from this big-data pool is still challenging. In this paper, Apache Spark, an open-source cluster computing framework, is used to process the big data to effectively discover the hidden knowledge. A high-speed communication architecture utilizing the Open System Interconnection (OSI) model is designed to transmit the data to a visualization platform. This visualization platform uses Google Earth, a global geographic information system (GIS), to link the geographic information with the SG knowledge and visualize the information in a user-defined fashion. The University of Denver's campus grid is used as an SG test bench, and several demonstrations are presented for the proposed platform.
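A minimal job of the kind described — aggregating raw sensor readings into a layer a GIS front end can consume — might look like this in PySpark (file paths, column names, and the output format are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sg-visualization").getOrCreate()

# Hypothetical smart-meter readings: meter_id, timestamp, kwh, lat, lon.
df = spark.read.csv("hdfs:///sg/meter_readings.csv",
                    header=True, inferSchema=True)

# Hourly load per meter, keyed by location for a map-based layer.
hourly = (df.withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
            .groupBy("meter_id", "hour", "lat", "lon")
            .agg(F.sum("kwh").alias("load_kwh")))

hourly.write.mode("overwrite").json("hdfs:///sg/hourly_load")
```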
Direct Visualization of Mechanical Beats by Means of an Oscillating Smartphone
NASA Astrophysics Data System (ADS)
Giménez, Marcos H.; Salinas, Isabel; Monsoriu, Juan A.; Castro-Palacio, Juan C.
2017-10-01
The resonance phenomenon is widely known in physics courses. Qualitatively speaking, resonance takes place in a driven oscillating system whenever the frequency approaches the natural frequency, resulting in maximal oscillatory amplitude. Very closely related to resonance is the phenomenon of mechanical beating, which occurs when the driving and natural frequencies of the system are slightly different. The frequency of the beat is just the difference of the natural and driving frequencies. Beats are very familiar in acoustic systems. There are several works in this journal on visualizing the beats in acoustic systems. For instance, the microphone and the speaker of two mobile devices were used in previous work to analyze the acoustic beats produced by two signals of close frequencies. The formation of beats can also be visualized in mechanical systems, such as a mass-spring system or a double-driven string. Here, the mechanical beats in a smartphone-spring system are directly visualized in a simple way. The frequency of the beats is measured by means of the acceleration sensor of a smartphone, which hangs from a spring attached to a mechanical driver. This laboratory experiment is suitable for both high school and first-year university physics courses.
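The beat relation follows from the sum-to-product identity: superposing equal-amplitude oscillations at the driving frequency f1 and the natural frequency f2 gives

\[
x(t) = A\cos(2\pi f_1 t) + A\cos(2\pi f_2 t)
     = 2A\cos\big(\pi(f_1 - f_2)\,t\big)\cos\big(\pi(f_1 + f_2)\,t\big),
\]

so the fast oscillation at the mean frequency is modulated by a slow envelope that repeats at f_beat = |f1 - f2|, the quantity read off the smartphone's acceleration record.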
Soldier-worn augmented reality system for tactical icon visualization
NASA Astrophysics Data System (ADS)
Roberts, David; Menozzi, Alberico; Clipp, Brian; Russler, Patrick; Cook, James; Karl, Robert; Wenger, Eric; Church, William; Mauger, Jennifer; Volpe, Chris; Argenta, Chris; Wille, Mark; Snarski, Stephen; Sherrill, Todd; Lupo, Jasper; Hobson, Ross; Frahm, Jan-Michael; Heinly, Jared
2012-06-01
This paper describes the development and demonstration of a soldier-worn augmented reality system testbed that provides intuitive 'heads-up' visualization of tactically-relevant geo-registered icons. Our system combines a robust soldier pose estimation capability with a helmet mounted see-through display to accurately overlay geo-registered iconography (i.e., navigation waypoints, blue forces, aircraft) on the soldier's view of reality. Applied Research Associates (ARA), in partnership with BAE Systems and the University of North Carolina - Chapel Hill (UNC-CH), has developed this testbed system in Phase 2 of the DARPA ULTRA-Vis (Urban Leader Tactical, Response, Awareness, and Visualization) program. The ULTRA-Vis testbed system functions in unprepared outdoor environments and is robust to numerous magnetic disturbances. We achieve accurate and robust pose estimation through fusion of inertial, magnetic, GPS, and computer vision data acquired from helmet kit sensors. Icons are rendered on a high-brightness, 40°×30° field of view see-through display. The system incorporates an information management engine to convert CoT (Cursor-on-Target) external data feeds into mil-standard icons for visualization. The user interface provides intuitive information display to support soldier navigation and situational awareness of mission-critical tactical information.
2001-04-30
APPROACH - Reduce cooling system weight and power through miniaturization of its compressor, heat exchangers, and other components; and through highly... research, but a visualized concept provides direction:
- Microelectromechanical Systems
- Nanotech-based materials
- Fused sensor displays
- MCC microtubes
...and spine impact protection
- Anti-fog face shield
- Flame/heat resistance
- Compatible with body cooling system
- Technology transition to public
NASA Technical Reports Server (NTRS)
1979-01-01
A sensor system for the direct detection of extrasolar planets from Earth orbit is evaluated: a spinning infrared interferometer (IRIS). It is shuttle-deployed, free-flying, requires no on-orbit assembly and no reservicing over a design life of five years. The sensor concept and the mission objectives are reviewed, and the performance characteristics of a baseline sensor for standard observation conditions are derived. A baseline sensor design is given and the enabling technology discussed. Cost and weight estimates are performed, and a schedule for an IRIS program, including technology development and an assessment of risk, is given. Finally, the sensor is compared with the apodized visual telescope sensor (APOTS) proposed for the same mission. The major conclusions are that, with moderate to strong technology advances, particularly in the fields of long-life cryogenics, dynamical control, mirror manufacturing, and optical alignment, the detection of a Jupiter-like planet around a Sun-like star at a distance of 30 light years is feasible with a 3-meter aperture and an observation time of 1 hour. By contrast, major and possibly unlikely breakthroughs in mirror technology would be required for APOTS to match this performance.
Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona; Kufel, Jan
The main area of interest of our project is to provide a solution that can be used in different areas of health care and that is available through PDAs (Personal Digital Assistants), web browsers or desktop clients. The realized system deals with an ECG sensor connected to mobile equipment, such as a PDA/embedded device, based on the Microsoft Windows Mobile operating system. The whole system is built on the .NET Compact Framework and Microsoft SQL Server. Visualization possibilities for the web interface and ECG data are also discussed, and Microsoft Silverlight is suggested as the final solution, along with screenshots of the current implementation. The project was successfully tested in a real environment in a cryogenic room (−136 °C).
Star sensor/mapper with a self deployable, high-attenuation light shade for SAS-B
NASA Technical Reports Server (NTRS)
Schenkel, F. W.; Finkel, A.
1972-01-01
A star sensor/mapper to determine positional data for the small astronomy satellites was tested to detect stars of +4 visual magnitude. It utilizes two information channels with memory so that it can be used with a low-data-rate telemetry system. One channel yields star amplitude information; the other yields the time of star occurrence as the star passes across an N-slit reticle/photomultiplier detector system. Among the features of the star sensor/mapper are its low weight of 6.5 pounds, low power consumption of 0.4 watt, bandwidth switching to match the satellite spin rate, optical equalization of sensitivity over the 5-by-10-deg field of view, and a self-deployable sunshade. The attitude determination accuracy is 3 arc minutes, determined by such parameters as the reticle configuration, optical train, and telemetry readout. The optical and electronic design of the star sensor/mapper, its expansion capabilities, and its features are discussed.
Enhanced compressed sensing for visual target tracking in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Qiang, Guo
2017-11-01
Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the sensor's limited resources, such as energy and bandwidth constraints, is a high priority; however, most existing works focus on only a single optimization criterion among these conflicting goals. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination is presented, which strikes a compelling trade-off among the energy dissipated for wireless transmission, bandwidth, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework with regard to energy and speed under the resource limitations of a visual sensor node.
50 CFR 216.175 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
...., FFG, DDG, or CG). (G) Length of time observers maintained visual contact with marine mammal. (H) Wave... height in feet (high, low and average during exercise). (I) Narrative description of sensors and... sensor. (F) Length of time observers maintained visual contact with marine mammal. (G) Wave height. (H...
50 CFR 216.275 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., what type of surface vessel, i.e., FFG, DDG, or CG) (G) Length of time observers maintained visual... exercise) (I) Narrative description of sensors and platforms utilized for marine mammal detection and... calves were observed (E) Initial detection sensor (F) Length of time observers maintained visual contact...
a Web-Based Platform for Visualizing Spatiotemporal Dynamics of Big Taxi Data
NASA Astrophysics Data System (ADS)
Xiong, H.; Chen, L.; Gui, Z.
2017-09-01
With more and more vehicles equipped with the Global Positioning System (GPS), access to large-scale taxi trajectory data has become increasingly easy. Taxis are valuable sensors, and information associated with taxi trajectories can provide unprecedented insight into many aspects of city life. But analyzing these data presents many challenges. Visualization of taxi data is an efficient way to represent its distributions and structures and reveal hidden patterns in the data. However, most existing visualization systems have shortcomings: on the one hand, passenger loading status and speed information cannot be expressed; on the other hand, a single visualization form limits the presentation of information. In view of these problems, this paper designs and implements a visualization system in which colour and shape indicate passenger loading status and speed information, and various forms of taxi visualization are integrated. The main work is as follows: 1. Pre-processing and storing the taxi data in a MongoDB database. 2. Visualization of hotspots for taxi pickup points: using the DBSCAN clustering algorithm, we cluster the extracted passenger pickup locations to produce passenger hotspots. 3. Visualizing the dynamics of taxi trajectories using interactive animation: we use a thinning algorithm to reduce the amount of data and design a preloading strategy to load the data smoothly. Colour and shape are used to visualize the taxi trajectory data.
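Hotspot extraction of the kind described can be sketched with scikit-learn's DBSCAN (the parameter values and the rough meters-to-degrees conversion are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pickup_hotspots(points, eps_m=200.0, min_samples=30):
    """Cluster (lat, lon) pickup points; return labels and hotspot centers.

    eps is converted from meters to degrees very roughly, which is
    acceptable only for small city-scale areas away from the poles."""
    pts = np.asarray(points, dtype=float)
    labels = DBSCAN(eps=eps_m / 111000.0,
                    min_samples=min_samples).fit_predict(pts)
    centers = [pts[labels == k].mean(axis=0)
               for k in set(labels) if k != -1]   # -1 marks noise points
    return labels, centers
```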
2017-09-01
via visual sensors onboard the UAV. Both the hardware and software architecture design are discussed at length. Then, a series of tests that were conducted... and representing the change in time (1). Horn and Schunck (1981) further simplified this equation by taking the Taylor series
Method of interpretation of remotely sensed data and applications to land use
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dossantos, A. P.; Foresti, C.; Demoraesnovo, E. M. L.; Niero, M.; Lombardo, M. A.
1981-01-01
Instructional material describing a methodology for remote sensing data interpretation and examples of applications to land use surveys are presented. The image interpretation elements are discussed for different types of sensor systems: aerial photographs, radar, and MSS/LANDSAT. Visual and automatic LANDSAT image interpretation is emphasized.
Acoustic monitoring system to quantify ingestive behavior of free-grazing cattle
USDA-ARS?s Scientific Manuscript database
Methods to estimate intake in grazing livestock include using markers, visual observation, mechanical sensors that respond to jaw movement and acoustic recording. In most of the acoustic monitoring studies, the microphone is inverted on the forehead of the grazing livestock and the skull is utilize...
Remote surface inspection system. [of large space platforms
NASA Technical Reports Server (NTRS)
Hayati, Samad; Balaram, J.; Seraji, Homayoun; Kim, Won S.; Tso, Kam S.
1993-01-01
This paper reports on an on-going research and development effort in remote surface inspection of space platforms such as the Space Station Freedom (SSF). It describes the space environment and identifies the types of damage for which to search. This paper provides an overview of the Remote Surface Inspection System that was developed to conduct proof-of-concept demonstrations and to perform experiments in a laboratory environment. Specifically, the paper describes three technology areas: (1) manipulator control for sensor placement; (2) automated non-contact inspection to detect and classify flaws; and (3) an operator interface to command the system interactively and receive raw or processed sensor data. Initial findings for the automated and human visual inspection tests are reported.
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest, as service robots for assisting humans and as industrial robots for replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with 3D sensing for the environment recognition of mobile robots. Structured lighting is utilized for the 3D visual sensor system because it is robust to the nature of the navigation environment and allows easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints shared by all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
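To make the triangulation principle concrete, here is a minimal mid-point triangulation sketch, assuming two calibrated cameras with known centers and viewing rays toward a matched feature point; the names are hypothetical and the paper's actual projector-camera geometry is more elaborate:

    import numpy as np

    def triangulate_midpoint(c1, d1, c2, d2):
        """Midpoint of the shortest segment between rays p = c + t*d."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        b = c2 - c1
        a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
        denom = a11 * a22 - a12 * a12          # ~0 if the rays are parallel
        t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / denom
        t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / denom
        return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

    # Example: two cameras 0.2 m apart looking at a point ~1 m away
    p = triangulate_midpoint(np.array([0.0, 0, 0]), np.array([0.1, 0, 1.0]),
                             np.array([0.2, 0, 0]), np.array([-0.1, 0, 1.0]))
    print(p)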
Vibrotactile Feedbacks System for Assisting the Physically Impaired Persons for Easy Navigation
NASA Astrophysics Data System (ADS)
Safa, M.; Geetha, G.; Elakkiya, U.; Saranya, D.
2018-04-01
NAYAN is an architecture that helps visually impaired persons navigate. As is well known, visually impaired people require special support even to access services such as public transportation. This prototype system is a portable device that is easy to carry through both familiar and unfamiliar environments. The system includes a GPS receiver that obtains NMEA data from the satellites and provides it to the user's smartphone through an Arduino board. The application uses two vibrotactile feedback units, placed on the left and right shoulders, whose vibrations convey information about the current location. An ultrasonic sensor detects obstacles in front of the visually impaired person. A Bluetooth module connected to the Arduino board sends the information received from the GPS to the user's mobile phone.
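Since the receiver outputs NMEA sentences, a minimal parsing sketch for the common $GPGGA sentence may be useful; the field layout follows the NMEA 0183 standard, while the Bluetooth transport and checksum handling are omitted:

    def parse_gga(sentence):
        """Extract (lat, lon) in decimal degrees from a $GPGGA sentence."""
        fields = sentence.split(",")
        if not fields[0].endswith("GGA") or fields[6] == "0":
            return None                       # wrong sentence type or no fix
        def to_deg(value, hemi, width):
            deg = float(value[:width])        # leading whole degrees
            minutes = float(value[width:])    # remaining (d)ddmm.mmmm minutes
            d = deg + minutes / 60.0
            return -d if hemi in ("S", "W") else d
        lat = to_deg(fields[2], fields[3], 2)
        lon = to_deg(fields[4], fields[5], 3)
        return lat, lon

    print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,,,,"))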
Kim, Kyukwang; Kim, Hyeong Keun; Lim, Hwijoon; Myung, Hyun
2016-01-01
In this research, an open-source, low-power sensor node was developed to check the growth of mycobacteria in a culture bottle with a nitrate reductase assay (NRA) method for drug susceptibility testing. The sensor system reports the temperature and the color sensor output frequency change of the culture bottle when the device is triggered. After the culture process is finished, a nitrite-ion-detecting solution based on a commercial nitrite ion detection kit is injected into the culture bottle by a syringe pump to check bacterial growth through the formation of a pigment by the reaction between the solution and the color sensor. Sensor status and NRA results are broadcast via a Bluetooth low energy beacon. An Android application was developed to collect the broadcast data, classify the status of cultured samples from multiple devices, and visualize the data for end users, circumventing the need to examine each culture bottle manually during a long culture period. The authors expect that usage of the developed sensor will decrease the cost and labor required for handling large numbers of patient samples in local health centers in developing countries. All 3D-printable hardware parts, a circuit diagram, and software are available online. PMID:27338406
Active laser radar (lidar) for measurement of corresponding height and reflectance images
NASA Astrophysics Data System (ADS)
Froehlich, Christoph; Mettenleiter, M.; Haertl, F.
1997-08-01
For the survey and inspection of environmental objects, non-tactile, robust and precise imaging of height and depth is the basic sensor technology. For visual inspection, surface classification, and documentation purposes, however, additional information concerning the reflectance of the measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar supporting consistent 3D height and 2D reflectance images. The laser radar is an optical-wavelength system, comparable to devices built by ERIM, Odetics, and Perceptron, measuring the range between sensor and target surfaces as well as the reflectance of the target surface, which corresponds to the magnitude of the back-scattered laser energy. In contrast to these range-sensing devices, the laser radar under consideration is designed for high-speed and precise operation in both indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a laser range measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports on design details of the laser radar for surface inspection tasks. It outlines the performance requirements and introduces the measurement principle. The hardware design is discussed, including the main modules such as the laser head, the high-frequency unit, the laser beam deflection system, and the digital signal processing unit. The signal processing unit consists of dedicated signal processors for real-time sensor data pre-processing as well as a sensor computer for high-level image analysis and feature extraction. The paper focuses on performance data of the system, including noise, drift over time, precision, and accuracy measurements. It discusses the influences of ambient light, target surface material, and ambient temperature on range accuracy and range precision. Furthermore, experimental results from the inspection of buildings, monuments and industrial environments are presented. The paper concludes by summarizing results achieved in industrial environments and gives a short outlook on future work.
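As background for the measurement principle, the two standard laser-radar ranging relations are worth stating; the abstract does not specify which variant this sensor uses, so both are given only as generic illustrations:

    R = \frac{c\,\tau}{2} \qquad \text{(pulsed time of flight, round-trip delay } \tau\text{)}

    R = \frac{c}{2}\cdot\frac{\Delta\phi}{2\pi f_m} \qquad \text{(phase shift } \Delta\phi \text{ at modulation frequency } f_m\text{)}

with the phase-shift method having an unambiguous range interval of $c/(2 f_m)$.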
NASA Technical Reports Server (NTRS)
Teng, William; Berrick, Steve; Leptuokh, Gregory; Liu, Zhong; Rui, Hualan; Pham, Long; Shen, Suhung; Zhu, Tong
2004-01-01
The Goddard Space Flight Center Earth Sciences Data and Information Services Center (GES DISC) Distributed Active Archive Center (DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM On-line Visualization and Analysis System for precipitation and other satellite data products and services. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as that of the U.N. World Food Program. The ability to use the raw data stored in the GES DAAC archives is highly dependent on having a detailed understanding of the data's internal structure and physical implementation. Gaining this understanding is a time-consuming process and not a productive investment of the user's time. This is an especially difficult challenge when users need to deal with multi-sensor data that usually are of different structures and resolutions. The AIS has taken a major step towards meeting this challenge by incorporating an underlying infrastructure, called the GES-DISC Interactive Online Visualization and Analysis Infrastructure or "Giovanni," that integrates various components to support web interfaces that allow users to perform interactive analysis on-line without downloading any data. Several instances of the Giovanni-based interface have been or are being created to serve users of TRMM precipitation, MODIS aerosol, and SeaWiFS ocean color data, as well as agricultural applications users. Giovanni-based interfaces are simple to use but powerful. The user selects geophysical parameters, area of interest, and time period; and the system generates an output on screen in a matter of seconds.
The effect of multispectral image fusion enhancement on human efficiency.
Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M
2017-01-01
The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
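Two of the fusion baselines named above are easy to state precisely; here is a minimal sketch of pixel averaging and PCA-weighted fusion of two co-registered single-band images (array names are hypothetical, and this is a generic formulation rather than the authors' exact pipeline):

    import numpy as np

    def fuse_average(a, b):
        """Per-pixel mean of two co-registered, same-size band images."""
        return 0.5 * (a + b)

    def fuse_pca(a, b):
        """Weight each band by the first principal component of the joint pixels."""
        x = np.stack([a.ravel(), b.ravel()])     # 2 x N pixel matrix
        w = np.linalg.eigh(np.cov(x))[1][:, -1]  # eigenvector of largest eigenvalue
        w = np.abs(w) / np.abs(w).sum()          # normalized non-negative weights
        return w[0] * a + w[1] * b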
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to tracking the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on large numbers of distributed, similar or identical radiation sensors coupled with position data in a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior vs. exterior deployment, desired resolution and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to stimulate new or additional radiation emission beyond what the material would emit spontaneously. Because the nuclear material is itself the source, all other objects in the environment are 'illuminated', or irradiated, by it. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, and this can add to the observed count rate.
The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal, and it is a key challenge that requires a combined system calibration solution and algorithms. Thus, both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for the additional specific scenarios will be the subject of ongoing and future work. (authors)
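A minimal sketch of the kind of calibration described here: fitting a count-rate model against the vision system's distance track, with a flat background term absorbing part of the scatter deviation. The model form, names, and synthetic data are assumptions for illustration, not the authors' algorithm:

    import numpy as np
    from scipy.optimize import curve_fit

    def count_rate(r, s, b):
        """Idealized model: source term falling off as 1/r^2 plus background."""
        return s / r**2 + b

    # r: distances from the vision tracker; c: measured count rates (synthetic)
    r = np.linspace(1.0, 8.0, 40)
    c = count_rate(r, s=500.0, b=12.0) + np.random.normal(0, 2.0, r.size)

    (s_fit, b_fit), _ = curve_fit(count_rate, r, c, p0=(100.0, 1.0))
    residual = c - count_rate(r, s_fit, b_fit)  # scatter shows up as structure here
    print(s_fit, b_fit)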
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlear implants, where time-frequency analysis is applied to control the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are highlighted. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration into routine applications are future challenges in the field of sensor, signal and image informatics.
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which use a low-power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
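In the spirit of the local, low-power processing described above, here is a sketch of a simple frame-differencing cue detector that would run on a node and share only a compact feature rather than the full image; the thresholds are placeholders, and this is a generic stand-in for the paper's detectors:

    import numpy as np

    def motion_cue(prev, curr, thresh=25, min_pixels=50):
        """Return a tiny summary (active?, centroid) instead of the full frame."""
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        active = diff > thresh                  # pixels that changed noticeably
        if active.sum() < min_pixels:
            return False, None                  # too little change: no event
        ys, xs = np.nonzero(active)
        return True, (xs.mean(), ys.mean())     # centroid of the motion region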
Low-Cost Air Quality Monitoring Tools: From Research to Practice (A Workshop Summary)
Griswold, William G.; RS, Abhijit; Johnston, Jill E.; Herting, Megan M.; Thorson, Jacob; Collier-Oxandale, Ashley; Hannigan, Michael
2017-01-01
In May 2017, a two-day workshop was held in Los Angeles (California, U.S.A.) to gather practitioners who work with low-cost sensors used to make air quality measurements. The community of practice included individuals from academia, industry, non-profit groups, community-based organizations, and regulatory agencies. The group gathered to share knowledge developed from a variety of pilot projects in hopes of advancing the collective knowledge about how best to use low-cost air quality sensors. Panel discussion topics included: (1) best practices for deployment and calibration of low-cost sensor systems, (2) data standardization efforts and database design, (3) advances in sensor calibration, data management, and data analysis and visualization, and (4) lessons learned from research/community partnerships to encourage purposeful use of sensors and create change/action. Panel discussions summarized knowledge advances and project successes while also highlighting the questions, unresolved issues, and technological limitations that still remain within the low-cost air quality sensor arena. PMID:29143775
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often of low resolution and noisy, and such visual data cannot be fed directly to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114
A System for Video Surveillance and Monitoring CMU VSAM Final Report
1999-11-30
motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... algorithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
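To illustrate the interpolation step, here is a sketch of the edge-oriented idea for the green channel on a Bayer mosaic: at each missing-green site, interpolate along the direction of the smaller gradient, falling back to the bilinear average when no direction dominates. This is a generic reconstruction of the technique, not the paper's exact algorithm:

    import numpy as np

    def green_edge_oriented(raw, green_mask):
        """raw: 2D Bayer mosaic; green_mask: True where green was sampled."""
        raw = raw.astype(float)
        out = np.where(green_mask, raw, 0.0)
        h, w = raw.shape
        for y in range(1, h - 1):              # plain loops, for clarity only
            for x in range(1, w - 1):
                if green_mask[y, x]:
                    continue                    # green already sampled here
                dh = abs(raw[y, x - 1] - raw[y, x + 1])  # horizontal gradient
                dv = abs(raw[y - 1, x] - raw[y + 1, x])  # vertical gradient
                if dh < dv:                     # edge runs horizontally
                    out[y, x] = 0.5 * (raw[y, x - 1] + raw[y, x + 1])
                elif dv < dh:                   # edge runs vertically
                    out[y, x] = 0.5 * (raw[y - 1, x] + raw[y + 1, x])
                else:                           # flat region: bilinear average
                    out[y, x] = 0.25 * (raw[y, x - 1] + raw[y, x + 1]
                                        + raw[y - 1, x] + raw[y + 1, x])
        return out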
NASA Astrophysics Data System (ADS)
Crawford, Bobby Grant
In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.
Dioptric defocus maps across the visual field for different indoor environments
García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried
2017-01-01
One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal might arise from defocus not only in the fovea but across the whole visual field. Therefore, myopia might be better predicted by spatio-temporally mapping the ‘environmental defocus’ over the visual field. At present, no devices are available that can provide this information. A ‘Kinect sensor v1’ camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying ‘indoor defocus error signals’ across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field, and ‘defocus maps’ were generated for various scenes and tasks. PMID:29359108
A synchronization method for wireless acquisition systems, application to brain computer interfaces.
Foerster, M; Bonnet, S; van Langhenhove, A; Porcherot, J; Charvet, G
2013-01-01
A synchronization method for wireless acquisition systems has been developed and implemented on a wireless ECoG recording implant and on a wireless EEG recording helmet. The presented algorithm and hardware implementation allow the precise synchronization of several data streams from several sensor nodes for applications where timing is critical, such as event-related potential (ERP) studies. The proposed method has been successfully applied to obtain visual evoked potentials and compared with a reference biosignal amplifier. Control over the exact sampling frequency reduces synchronization errors that would otherwise accumulate during a recording. The method is scalable to several sensor nodes communicating with a shared base station.
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level, this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level, it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used
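A minimal sketch of the detection/matching layer using OpenCV's ORB detector, standing in for whatever feature detector the authors actually use; the file names are hypothetical, with sat_tile playing the role of a georeferenced database chip and frame an onboard camera image:

    import cv2

    sat_tile = cv2.imread("sat_tile.png", cv2.IMREAD_GRAYSCALE)  # database chip
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)        # camera image

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(sat_tile, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck prunes outliers
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Matched keypoint pairs would feed the SLAM / geolocation estimate
    pairs = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:50]]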
Laboratory validation of MEMS-based sensors for post-earthquake damage assessment
NASA Astrophysics Data System (ADS)
Pozzi, Matteo; Zonta, Daniele; Santana, Juan; Colin, Mikael; Saillen, Nicolas; Torfs, Tom; Amditis, Angelos; Bimpas, Matthaios; Stratakos, Yorgos; Ulieru, Dumitru; Bairaktaris, Dimitirs; Frondistou-Yannas, Stamatia; Kalidromitis, Vasilis
2011-04-01
The evaluation of seismic damage is today almost exclusively based on visual inspection, as building owners are generally reluctant to install permanent sensing systems due to their high installation, management and maintenance costs. To overcome this limitation, the EU-funded MEMSCON project aims to produce small sensing nodes for the measurement of strain and acceleration, integrating Micro-Electro-Mechanical Systems (MEMS) based sensors and Radio Frequency Identification (RFID) tags in a single package that will be attached to reinforced concrete buildings. To reduce the impact of installation and management, data will be transmitted to a remote base station using a wireless interface. During the project, sensor prototypes were produced by assembling pre-existing components and by developing ex-novo miniature devices with ultra-low power consumption and sensing performance beyond that offered by sensors available on the market. The paper outlines the device operating principles, the production scheme and operation at both unit and network levels. It also reports on validation campaigns conducted in the laboratory to assess system performance. Accelerometer sensors were tested on a reduced-scale metal frame mounted on a shaking table, back to back with reference devices, while strain sensors were embedded in both reduced- and full-scale reinforced concrete specimens undergoing increasing deformation cycles up to extensive damage and collapse. The paper assesses the economic sustainability and performance of the sensors developed for the project and discusses their applicability to long-term seismic monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Brian E.; Oppel III, Fred J.
2017-01-25
This package contains modules that model a visual sensor in Umbra. It is typically used to represent the eyesight of characters in Umbra. The library also includes the sensor property 'seeable' and an Active Denial sensor.
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
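The view-matching strategy these simulations rest on can be sketched directly: compare the current panoramic view, rotated through all headings, against a stored snapshot, and head toward the minimum difference. This is a standard rotational image difference formulation under assumed array shapes, not the authors' exact code:

    import numpy as np

    def best_heading(current, stored):
        """current, stored: H x W panoramas with columns spanning 360 deg azimuth."""
        w = current.shape[1]
        rmse = [np.sqrt(np.mean((np.roll(current, s, axis=1) - stored) ** 2))
                for s in range(w)]              # rotational image difference
        return 360.0 * np.argmin(rmse) / w      # best heading offset in degrees

Lowering the panorama resolution (fewer columns) smooths this difference function, which is one way to read the paper's finding that low resolution plus a wide field of view aids robust orientation.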
Implant for in-vivo parameter monitoring, processing and transmitting
Ericson, Milton N [Knoxville, TN; McKnight, Timothy E [Greenback, TN; Smith, Stephen F [London, TN; Hylton, James O [Clinton, TN
2009-11-24
The present invention relates to a completely implantable intracranial pressure monitor, which can couple to existing fluid shunting systems as well as other internal monitoring probes. The implant sensor produces an analog data signal which is then converted electronically to a digital pulse by generation of a spreading code signal and then transmitted to a location outside the patient by a radio-frequency transmitter to an external receiver. The implanted device can receive power from an internal source as well as an inductive external source. Remote control of the implant is also provided by a control receiver which passes commands from an external source to the implant system logic. Alarm parameters can be programmed into the device which are capable of producing an audible or visual alarm signal. The utility of the monitor can be greatly expanded by using multiple pressure sensors simultaneously or by combining sensors of various physiological types.
Implantable device for in-vivo intracranial and cerebrospinal fluid pressure monitoring
Ericson, Milton N.; McKnight, Timothy E.; Smith, Stephen F.; Hylton, James O.
2003-01-01
The present invention relates to a completely implantable intracranial pressure monitor, which can couple to existing fluid shunting systems as well as other internal monitoring probes. The implant sensor produces an analog data signal which is then converted electronically to a digital pulse by generation of a spreading code signal and then transmitted to a location outside the patient by a radio-frequency transmitter to an external receiver. The implanted device can receive power from an internal source as well as an inductive external source. Remote control of the implant is also provided by a control receiver which passes commands from an external source to the implant system logic. Alarm parameters can be programmed into the device which are capable of producing an audible or visual alarm signal. The utility of the monitor can be greatly expanded by using multiple pressure sensors simultaneously or by combining sensors of various physiological types.
Cuffless Blood Pressure Estimation Based on Data-Oriented Continuous Health Monitoring System
Kawanaka, Haruki; Oguri, Koji
2017-01-01
Measuring blood pressure continuously helps monitor health and prevent lifestyle-related diseases, extending healthy life expectancy. Blood pressure, which is nowadays used for patient monitoring, is one of the most useful indices for the prevention of lifestyle-related diseases such as hypertension. However, continuous monitoring of blood pressure is unrealistic because of the discomfort caused by the tightening of a cuff belt. We have previously investigated data-oriented blood pressure estimation without a cuff. Remarkably, our estimation method uses only a photoplethysmograph sensor; the application is therefore flexible with respect to sensor location and measurement situation. In this paper, we describe the implementation of our estimation method, the launch of a cloud system that can collect and manage blood pressure data measured by a wristwatch-type photoplethysmograph sensor, and the construction of applications to visualize life-log data, including time-series blood pressure data. PMID:28523074
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things
Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing
2015-01-01
The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems, the toed-in camera configuration and the parallel camera configuration, are considered. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004
Querying and Extracting Timeline Information from Road Traffic Sensor Data
Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen
2016-01-01
The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900
Open Source Based Sensor Platform for Mobile Environmental Monitoring and Data Acquisition
NASA Astrophysics Data System (ADS)
Schima, Robert; Goblirsch, Tobias; Misterek, René; Salbach, Christoph; Schlink, Uwe; Francyk, Bogdan; Dietrich, Peter; Bumberger, Jan
2016-04-01
The impact of global change, urbanization and complex interactions between humans and the environment shows different effects on different scales. The desire to obtain a better understanding of ecosystems and process dynamics in nature accentuates the need for observing these processes at higher temporal and spatial resolutions. Especially with regard to the process dynamics and heterogeneity of urban areas, comprehensive monitoring of these effects remains a challenging issue. Open-source electronics and cost-effective sensors offer a promising approach to explore new possibilities for mobile data acquisition and innovative strategies, and thereby support comprehensive ad-hoc monitoring and the capture of environmental processes close to real time. Accordingly, our project aims at the development of new strategies for mobile data acquisition and real-time processing of user-specific environmental data, based on a holistic and integrated process. To this end, the concept of our monitoring system covers data collection, data processing and data integration as well as data provision within one infrastructure. This ensures a consistent data stream and rapid data processing. The overarching goal, however, is the provision of an integrated service instead of lengthy and arduous data acquisition by hand; the system therefore also serves as a data acquisition assistant and gives guidance during the measurements. In technical terms, our monitoring system consists of mobile sensor devices which can be controlled and managed by a smartphone app (Android). At the moment, the system acquires temperature and humidity in space (GPS) and time (real-time clock) as built-in functions. In addition, larger system functionality can be achieved by adding further sensors for the detection of, e.g., fine dust, methane or dissolved organic compounds. From the IT point of view, the system includes a smartphone app and a web service for data processing, data provision and data visualization. The smartphone app allows the configuration of the mobile sensor devices and provides built-in functions such as simple data visualization and data transmission via e-mail, whereas the web service provides the visualization of the data and tools for data processing. In an initial field experiment, methane monitoring based on our sensor integration platform was performed in the city area of Leipzig (Germany) in late June 2015. The study showed that urban monitoring can be conducted with open-source components; moreover, the system enabled the detection of hot spots and methane emission sources. In September 2015, a larger-scale city monitoring campaign based on the mobile platform was performed by five cyclists riding independently through the city center of Leipzig (Germany). As a result, we were able to instantly produce a heat and humidity map of the inner city center as well as an exposure map for each cyclist. This emphasizes the feasibility and high potential of open-source monitoring approaches for future research in urban area monitoring in general, citizen science, and the validation of remote sensing data.
NASA Astrophysics Data System (ADS)
Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.
2016-09-01
Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings through the ability to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and gives insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor is simulated, and we use Big Data technologies, predictive algorithms and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and builds on a big data architecture that has previously proven valuable in detecting anomalies. This paper outlines our methodology for implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.
Mobile camera-space manipulation
NASA Technical Reports Server (NTRS)
Seelinger, Michael J. (Inventor); Yoder, John-David S. (Inventor); Skaar, Steven B. (Inventor)
2001-01-01
The invention is a method of using computer vision to control systems consisting of a combination of holonomic and nonholonomic degrees of freedom such as a wheeled rover equipped with a robotic arm, a forklift, and earth-moving equipment such as a backhoe or a front-loader. Using vision sensors mounted on the mobile system and the manipulator, the system establishes a relationship between the internal joint configuration of the holonomic degrees of freedom of the manipulator and the appearance of features on the manipulator in the reference frames of the vision sensors. Then, the system, perhaps with the assistance of an operator, identifies the locations of the target object in the reference frames of the vision sensors. Using this target information, along with the relationship described above, the system determines a suitable trajectory for the nonholonomic degrees of freedom of the base to follow towards the target object. The system also determines a suitable pose or series of poses for the holonomic degrees of freedom of the manipulator. With additional visual samples, the system automatically updates the trajectory and final pose of the manipulator so as to allow for greater precision in the overall final position of the system.
A CMOS high speed imaging system design based on FPGA
NASA Astrophysics Data System (ADS)
Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui
2015-10-01
CMOS sensors have advantages over traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SLX75T, and we take advantage of a CameraLink interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames-per-second CMOS image sensor with a global shutter and 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which can convert 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light from objects is captured by the CMOS detector, converted to electronic signals, and sent to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to a host computer equipped with acquisition cards, where the images are stored, visualized and processed. The structure and principle of the system, as well as its hardware and software design, are explained in this paper. The FPGA provides the drive clock for the CMOS sensor; the data from the CMOS sensor are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.
Urodynamic catheter moisture sensor: A novel device to improve leak point pressure detection.
Marshall, Blake R; Arlen, Angela M; Kirsch, Andrew J
2016-06-01
High-quality urodynamic studies in patients with neurogenic lower urinary tract dysfunction are important, as UDS may be the only reliable gauge of potential risk for upper tract deterioration and the optimal tool to guide lower urinary tract management. Reliance on direct visualization of leakage during typical UDS remains a potential source of error. Given the necessity of accurate leak point pressures, we developed a wireless leak detection sensor to eliminate the need for visual inspection during UDS. A mean decrease in detrusor leak point pressure of 3 cmH2O and a mean 11% decrease in capacity at leakage were observed when employing the sensor compared with visual inspection in children undergoing two fillings during a single UDS session. Removing the visual inspection component of UDS may improve the accuracy of pressure readings. Neurourol. Urodynam. 35:647-648, 2016. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Longman, Peter J.; How, Thomas C.; Hudson, Craig; Clarkson, Geoffrey J. N.
2001-08-01
The Defence Evaluation and Research Agency carried out an airborne demonstration and evaluation of a fast-jet Visually Coupled System (VCS) installed in ZD902, the Tornado Integrated Avionics Research Aircraft, for the UK MOD. The installed VCS used a Head-Steered Forward-Looking Infra-Red (HSFLIR) sensor and a head tracking system to provide the pilot with an image of the outside world projected onto a Binocular Helmet Mounted Display. In addition to the sensor image, information such as aircraft altitude, attitude, and airspeed was also presented to the pilot through the HMD to eliminate the need to look inside the cockpit for critical flight data. The aim of the VIVIAN trial was to demonstrate, by day and night, the benefits of a fast-jet integrated HSFLIR and HMD as an aid to low-level flight, navigation, target acquisition, take-off and landing. The outcome of this flight test program was very encouraging and, although testing identified that improvements are necessary, in particular to HSFLIR image quality, Auto Gain Control performance, helmet fit and symbology design, test aircrew endorsed the acceptability of a VCS.
Han, Mengdi; Zhang, Xiao-Sheng; Sun, Xuming; Meng, Bo; Liu, Wen; Zhang, Haixia
2014-01-01
The triboelectric nanogenerator (TENG) is a promising device in energy harvesting and self-powered sensing. In this work, we demonstrate a magnetic-assisted TENG, utilizing the magnetic force for electric generation. Maximum power density of 541.1 mW/m2 is obtained at 16.67 MΩ for the triboelectric part, while the electromagnetic part can provide power density of 649.4 mW/m2 at 16 Ω. Through theoretical calculation and experimental measurement, linear relationship between the tilt angle and output voltage at large angles is observed. On this basis, a self-powered omnidirectional tilt sensor is realized by two magnetic-assisted TENGs, which can measure the magnitude and direction of the tilt angle at the same time. For visualized sensing of the tilt angle, a sensing system is established, which is portable, intuitive, and self-powered. This visualized system greatly simplifies the measure process, and promotes the development of self-powered systems. PMID:24770490
NASA Astrophysics Data System (ADS)
Jing, Joseph C.; Chou, Lidek; Su, Erica; Wong, Brian J. F.; Chen, Zhongping
2016-12-01
The upper airway is a complex tissue structure that is prone to collapse. Current methods for studying airway obstruction, such as CT or MRI, are inadequate in terms of safety, cost, or availability, while others, such as flexible endoscopy, provide only localized qualitative information. Long-range optical coherence tomography (OCT) has been used to visualize the human airway in vivo; however, the limited imaging range has prevented full delineation of the various shapes and sizes of the lumen. We present a new long-range OCT system that integrates high-speed imaging with a real-time position tracker to allow the acquisition of an accurate 3D anatomical structure in vivo. The new system achieves an imaging range of 30 mm at a frame rate of 200 Hz and is capable of generating a rapid and complete visualization and quantification of the airway, which can then be used in computational simulations to determine obstruction sites.
NASA Astrophysics Data System (ADS)
McIntosh, Benjamin Patrick
Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.
Analysis of sensor network observations during some simulated landslide experiments
NASA Astrophysics Data System (ADS)
Scaioni, M.; Lu, P.; Feng, T.; Chen, W.; Wu, H.; Qiao, G.; Liu, C.; Tong, X.; Li, R.
2012-12-01
A multi-sensor network was tested during experiments on a landslide simulation platform established at Tongji University (Shanghai, P.R. China), where landslides were triggered by means of artificial rainfall (see Figure 1). The sensor network currently incorporates contact sensors and two imaging systems. This represents a novel solution, because the spatial sensor network incorporates both contact sensors and remote sensors (video cameras). In the future, these sensors will be installed on two real slopes in Sichuan province (South-West China), where the Wenchuan earthquake occurred in 2008. This earthquake caused the immediate activation of several landslides, while other areas became unstable and still menace people and property. The platform incorporates the reconstructed scale slope, the sensor network, the communication system, a database and a visualization system. Several landslide simulation experiments made it possible to ascertain which sensors would be most suitable for deployment in the Wenchuan area. The poster focuses on the analysis of results from down-scaled simulations, in which the different stages of the landslide evolution can be followed on the basis of the sensor observations. These include underground sensors that detect the water table level and the pressure in the ground, a set of accelerometers, and two inclinometers. In the first part of the analysis, the full data series are investigated to look for correlations and common patterns, and to link them to the physical processes. In the second, four subsets of sensors located in neighboring positions are analyzed. The analysis of low- and high-speed image sequences allowed a dense displacement field to be tracked on the slope surface; these outcomes were compared with the ones obtained from the accelerometers for cross-validation. The images were also used for the photogrammetric reconstruction of the slope topography during the experiment; consequently, volume computation and mass movements could be evaluated from the processed images.
Figure 1 - The landslide simulation platform at Tongji University at the end of an experiment. The picture shows the body of the simulated landslide.
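For the correlation step mentioned above, a minimal sketch of estimating the lag between two sensor time series by normalized cross-correlation; the series names and the assumption of a common sampling rate are hypothetical:

    import numpy as np

    def lag_of_max_correlation(a, b, fs):
        """Lag (s) at which series b best matches series a, both sampled at fs Hz."""
        a = (a - a.mean()) / a.std()            # normalize both series
        b = (b - b.mean()) / b.std()
        xc = np.correlate(a, b, mode="full")    # full cross-correlation
        lag = np.argmax(xc) - (len(b) - 1)      # offset of the peak, in samples
        return lag / fs

Applied, e.g., to a pore-pressure series and an accelerometer series, the sign of the returned lag indicates which physical process led the other during the failure.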
Code of Federal Regulations, 2014 CFR
2014-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...
Code of Federal Regulations, 2013 CFR
2013-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature simulation devices. (v) Conduct a visual inspection of each sensor every quarter if redundant... signal simulations or via relative accuracy testing. (v) Conduct an accuracy audit every quarter and... signal simulations or via relative accuracy testing. (vi) Perform leak checks monthly. (vii) Perform...
Terrain Commander: a next-generation remote surveillance system
NASA Astrophysics Data System (ADS)
Finneral, Henry J.
2003-09-01
Terrain Commander is a fully automated forward observation post that provides the most advanced capability in surveillance and remote situational awareness. The Terrain Commander system was selected by the Australian Government for its NINOX Phase IIB Unattended Ground Sensor Program, with the first systems delivered in August 2002. Terrain Commander offers next-generation target detection using multi-spectral peripheral sensors coupled with autonomous day/night image capture and processing. The resulting intelligence is sent back through satellite communications with unlimited range to a highly sophisticated central monitoring station. The system can "stake out" remote locations clandestinely, 24 hours a day, for months at a time. With its fully integrated SATCOM system, almost any site in the world can be monitored from virtually any other location in the world. Terrain Commander automatically detects and discriminates intruders by precisely cueing its advanced EO subsystem. The system provides target detection capabilities with minimal nuisance alarms, combined with the positive visual identification that authorities demand before committing a response. Terrain Commander uses an advanced beamforming acoustic sensor and a distributed array of seismic, magnetic and passive infrared sensors to detect, capture images of, and accurately track vehicles and personnel. Terrain Commander has a number of emerging military and non-military applications, including border control, physical security, homeland defense, force protection and intelligence gathering. This paper reviews the development, capabilities and mission applications of the Terrain Commander system.
NASA Astrophysics Data System (ADS)
Harness, Anthony; Cash, Webster; Shipley, Ann; Glassman, Tiffany; Warwick, Steve
2013-09-01
We review progress on the New Worlds Airship project, which has the eventual goal of suborbitally mapping the Alpha Centauri planetary system into the Habitable Zone. The project consists of a telescope viewing a star that is occulted by a starshade suspended from an airship. The starshade suppresses the starlight so that fainter planetary objects near the star are revealed. A visual sensor is used to determine the position of the starshade and keep the telescope within the starshade's shadow. In the first attempt to demonstrate starshades through astronomical observations, we built a precision line-of-sight position indicator and flew it on a Zeppelin in October 2012. Since the airship provider went out of business, we have been redesigning the project to use Vertical Takeoff Vertical Landing rockets instead. These Suborbital Reusable Launch Vehicles will serve as a starshade platform and test bed for further development of the visual sensor. We have completed ground tests of starshades on dry lakebeds and have shown excellent contrast. We are now attempting to use starshades on hilltops to occult stars and perform high-contrast imaging of outer planetary systems such as the debris disk around Fomalhaut.
Pieralisi, Marco; Di Mattia, Valentina; Petrini, Valerio; De Leo, Alfredo; Manfredi, Giovanni; Russo, Paola; Scalise, Lorenzo; Cerri, Graziano
2017-02-16
Currently, the availability of technology developed to increase the autonomy of visually impaired athletes during sports is limited. The research proposed in this paper (Part I and Part II) focuses on the realization of an electromagnetic system that can guide a blind runner along a race track without the need for a sighted guide. In general, the system is composed of a transmitting unit (widely described in Part I) and a receiving unit, whose components and main features are described in this paper. Special attention is paid to the definition of an electromagnetic model able to faithfully represent the physical mechanisms of interaction between the two units, as well as between the receiving magnetic sensor and the body of the user wearing the device. This theoretical approach allows for an estimation of the signals to be detected, and guides the design of a suitable signal processing board. This technology has been realized, patented, and tested with a blind volunteer with successful results and this paper presents interesting suggestions for further improvements.
Pieralisi, Marco; Di Mattia, Valentina; Petrini, Valerio; De Leo, Alfredo; Manfredi, Giovanni; Russo, Paola; Scalise, Lorenzo; Cerri, Graziano
2017-01-01
Currently, the availability of technology developed to increase the autonomy of visually impaired athletes during sports is limited. The research proposed in this paper (Part I and Part II) focuses on the realization of an electromagnetic system that can guide a blind runner along a race track without the need for a sighted guide. In general, the system is composed of a transmitting unit (widely described in Part I) and a receiving unit, whose components and main features are described in this paper. Special attention is paid to the definition of an electromagnetic model able to faithfully represent the physical mechanisms of interaction between the two units, as well as between the receiving magnetic sensor and the body of the user wearing the device. This theoretical approach allows for an estimation of the signals to be detected, and guides the design of a suitable signal processing board. This technology has been realized, patented, and tested with a blind volunteer with successful results and this paper presents interesting suggestions for further improvements. PMID:28212348
Methods and Apparatus for Autonomous Robotic Control
NASA Technical Reports Server (NTRS)
Gorshechnikov, Anatoly (Inventor); Livitz, Gennady (Inventor); Versace, Massimiliano (Inventor); Palma, Jesse (Inventor)
2017-01-01
Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors
Song, Yu; Nuske, Stephen; Scherer, Sebastian
2016-01-01
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must handle aggressive 6-DOF (Degree Of Freedom) motion; (2) it must be robust to intermittent GPS (Global Positioning System) or even GPS-denied situations; (3) it must work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, that uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS and high-altitude MAV flight. PMID:28025524
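As a rough illustration of the loosely-coupled fusion idea in this abstract, the sketch below implements a toy position-only EKF in which relative measurements (visual odometry steps) drive the prediction and absolute measurements (GPS fixes) drive the update. All state choices and noise values are assumptions for illustration; the paper's stochastic cloning EKF is considerably richer.

```python
import numpy as np

# Toy position-only EKF: relative measurements (visual odometry / IMU deltas)
# drive the prediction and absolute measurements (GPS, barometric altitude)
# drive the update. All noise values are assumptions for illustration.

x = np.zeros(3)            # position estimate [x, y, z]
P = np.eye(3)              # estimate covariance
Q = np.eye(3) * 0.05       # process noise for one relative (odometry) step
R_abs = np.eye(3) * 2.0    # absolute measurement noise (e.g., GPS)

def predict(delta):
    """Propagate the state with a relative displacement from visual odometry."""
    global x, P
    x = x + delta
    P = P + Q

def update(z_abs):
    """Correct the state with an absolute position fix."""
    global x, P
    H = np.eye(3)                     # we observe position directly
    S = H @ P @ H.T + R_abs           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z_abs - H @ x)
    P = (np.eye(3) - K @ H) @ P

predict(np.array([0.10, 0.00, 0.02]))   # one visual-odometry step
update(np.array([0.12, 0.01, 0.00]))    # an intermittent GPS fix
print(x)
```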
NASA Astrophysics Data System (ADS)
Schima, Robert; Goblirsch, Tobias; Paschen, Mathias; Rinke, Karsten; Schelwat, Heinz; Dietrich, Peter; Bumberger, Jan
2016-04-01
The impacts of global change, intensive agriculture and complex interactions between humans and the environment manifest differently at different scales. The desire to obtain a better understanding of ecosystems and process dynamics in nature accentuates the need for observing these processes at higher temporal and spatial resolutions. Especially with regard to the process dynamics and heterogeneity of water catchment areas, comprehensive monitoring of the ongoing processes and effects remains a challenging issue in applied environmental research. Moreover, harsh conditions and a variety of influencing process parameters pose a particular challenge for adaptive in-situ monitoring of vast areas. Today, open-source electronics and cost-effective sensors and sensor components offer a promising approach for smartphone-based mobile data acquisition and comprehensive ad-hoc monitoring of environmental processes. Accordingly, our project aims at the development of new strategies for mobile data acquisition and real-time processing of user-specific environmental data, based on a holistic and integrated process. To this end, the concept of our monitoring system covers data collection, data processing and data integration as well as data provision within one infrastructure. The whole monitoring system consists of several mobile sensor devices, a smartphone app (Android) and a web service for data processing, provision and visualization. The smartphone app allows the configuration of the mobile sensor device and provides built-in functions such as data visualization and data transmission via e-mail. Besides measuring temperature and humidity in air, the mobile sensor device can acquire readings for the content of dissolved organic compounds (λ = 254 nm) and the turbidity (λ = 860 nm) of surface water, based on the optical in-situ sensor probe developed in the project. The miniaturized optical sensor probe allows the monitoring of even shallow water bodies with a depth of less than 5 cm. Compared to common techniques, the inexpensive sensor parts and robust emitting LEDs allow improved widespread and comprehensive monitoring, because a larger number of sensor devices can be deployed. Furthermore, the system includes a GPS module, a real-time clock and a GSM unit, which allow space- and time-resolved measurements. On October 6th, 2015, an initial experiment was started at the Bode catchment in the Harz region (Germany). The developed DOC and turbidity sensor probes were installed directly at the riverside, next to existing sampling points of a large-scale long-term observation project. The results show good correspondence between our sensor development and the established installed instruments. This represents a decisive and cost-effective contribution to environmental research and the monitoring of vast catchment areas.
Acting to gain information: Real-time reasoning meets real-time perception
NASA Technical Reports Server (NTRS)
Rosenschein, Stan
1994-01-01
Recent advances in intelligent reactive systems suggest new approaches to the problem of deriving task-relevant information from perceptual systems in real time. The author will describe work in progress aimed at coupling intelligent control mechanisms to real-time perception systems, with special emphasis on frame rate visual measurement systems. A model for integrated reasoning and perception will be discussed, and recent progress in applying these ideas to problems of sensor utilization for efficient recognition and tracking will be described.
Normann, R.A.; Kadlec, E.R.
1994-11-08
A downhole telemetry system is described for optically communicating to the surface operating parameters of a drill bit during ongoing drilling operations. The downhole telemetry system includes sensors mounted with a drill bit for monitoring at least one operating parameter of the drill bit and generating a signal representative thereof. The downhole telemetry system includes means for transforming and optically communicating the signal to the surface as well as means at the surface for producing a visual display of the optically communicated operating parameters of the drill bit. 7 figs.
Normann, Randy A.; Kadlec, Emil R.
1994-01-01
A downhole telemetry system is described for optically communicating to the surface operating parameters of a drill bit during ongoing drilling operations. The downhole telemetry system includes sensors mounted with a drill bit for monitoring at least one operating parameter of the drill bit and generating a signal representative thereof. The downhole telemetry system includes means for transforming and optically communicating the signal to the surface as well as means at the surface for producing a visual display of the optically communicated operating parameters of the drill bit.
Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows
Pugmire, David; Kress, James; Choi, Jong; ...
2016-08-04
Data-driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.
NASA Astrophysics Data System (ADS)
Jamlos, Mohd Aminudin; Ismail, Abdul Hafiizh; Jamlos, Mohd Faizal; Narbudowicz, Adam
2017-01-01
A hybrid graphene-copper ultra-wideband array sensor applied to a microwave imaging technique is successfully used in detecting and visualizing a tumor inside a human brain. The sensor uses a graphene-coated film for the patch and copper for both the transmission line and the parasitic element. The hybrid sensor outperforms a fully copper sensor: it records a wider bandwidth of 2.0-10.1 GHz, compared with the fully copper sensor's 2.5-10.1 GHz, and a higher gain of 3.8-8.5 dB, whereas the fully copper sensor shows a lower gain ranging from 2.6 to 6.7 dB. Both sensors record excellent total efficiencies, averaging 97% and 94%, respectively. The sensor both transmits the interrogating signal and receives the backscattered signal from a stratified human head model to detect the tumor. The difference between the scattering parameters recorded from the head model with and without the tumor is the main data further processed by a confocal microwave imaging algorithm to generate the image. MATLAB software is used to analyze the S-parameter signals obtained from measurement. The presence of a tumor is indicated by lower S-parameter values compared with the higher values recorded in its absence.
A laser-based vision system for weld quality inspection.
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved.
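The triangulation principle mentioned above can be made concrete with a small sketch. Assuming a simple geometry (a laser sheet at a fixed angle to the camera axis, with known pixel pitch and magnification; all values below are hypothetical, not the authors' calibration), the displacement of the laser line in the image maps to surface height:

```python
import math

# Illustrative laser-triangulation range sketch (assumed geometry): a laser
# sheet projected at angle `theta` to the camera axis shifts in the image in
# proportion to surface height.

PIXEL_PITCH_MM = 0.005   # assumed sensor pixel pitch
MAGNIFICATION = 0.1      # assumed optical magnification
THETA_DEG = 30.0         # assumed laser/camera triangulation angle

def height_from_shift(pixel_shift):
    """Convert the laser line's image displacement (pixels) to surface height (mm)."""
    shift_mm = pixel_shift * PIXEL_PITCH_MM / MAGNIFICATION  # shift on the object
    return shift_mm / math.tan(math.radians(THETA_DEG))      # project along the sheet

# A weld-bead profile: per-column line positions relative to the baseline plane.
profile = [height_from_shift(dp) for dp in (0, 3, 9, 14, 9, 4, 0)]
print(profile)  # geometric features (bead height/width) are read off this profile
```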
D'Angelo, Lorenzo T; Schneider, Michael; Neugebauer, Paul; Lueth, Tim C
2011-01-01
In this contribution, a new concept for interfacing sensor network nodes (motes) and smartphones is presented for the first time. In recent years, a variety of telemedicine applications on smartphones for data reception, display and transmission have been developed. However, it is not always practical or possible to have a smartphone application running continuously to accomplish these tasks. The presented system allows data to be received and stored continuously by a mote, and visualized or transmitted on the go using the smartphone as a user interface only when desired. Thus, the processes of data reception and storage run on a reliable, low-power system, and the smartphone's resources, along with its battery, are not in continuous demand. Both the system concept and its realization with an Apple iPhone are presented.
A Laser-Based Vision System for Weld Quality Inspection
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect weld quality and detect the presence of various weld defects, different methods and systems have been studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through visual analysis of the acquired 3D profiles of the weld, the presence as well as the positions and sizes of weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved. PMID:22344308
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugmire, David; Kress, James; Choi, Jong
Data-driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.
Force sensor attachable to thin fiberscopes/endoscopes utilizing high elasticity fabric.
Watanabe, Tetsuyou; Iwai, Takanobu; Fujihira, Yoshinori; Wakako, Lina; Kagawa, Hiroyuki; Yoneyama, Takeshi
2014-03-12
An endoscope/fiberscope is a minimally invasive tool used for directly observing tissues in areas deep inside the human body where access is limited. However, this tool only yields visual information. If force feedback information were also available, endoscope/fiberscope operators would be able to detect indurated areas that are visually hard to recognize. Furthermore, obtaining such feedback information from tissues in areas where collecting visual information is a challenge would be highly useful. The major obstacle is that such force information is difficult to acquire. This paper presents a novel force sensing system that can be attached to a very thin fiberscope/endoscope. To ensure a small size, high resolution, easy sterilization, and low cost, the proposed force-visualization-based system uses a highly elastic material: panty stocking fabric. The paper also presents the methodology for deriving the force value from the captured image. The system has a resolution of less than 0.01 N and a sensitivity of greater than 600 pixels/N within the force range of 0-0.2 N.
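Given the reported sensitivity of more than 600 pixels/N, a minimal (and deliberately naive) linear mapping from observed fabric deformation to force might look like the following; the paper's actual image-processing pipeline is more involved, so treat this purely as a back-of-the-envelope sketch.

```python
# Toy linear image-to-force mapping using the reported sensitivity figure;
# the real calibration curve of the fabric sensor is assumed, not known here.

SENSITIVITY_PX_PER_N = 600.0  # from the reported figure (lower bound)

def force_from_displacement(pixels):
    """Convert observed fabric deformation (pixels) to contact force (N)."""
    return pixels / SENSITIVITY_PX_PER_N

for px in (6, 60, 120):  # displacements within the 0-0.2 N working range
    print(f"{px:4d} px -> {force_from_displacement(px):.3f} N")
```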
A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.
Mung, Jay; Vignon, Francois; Jain, Ameet
2011-01-01
In the past decade, ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. Its main limitation, however, is poor visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current-day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 +/- 0.16 mm throughout the imaging volume of 55 degrees x 27 degrees x 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.
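One plausible reconstruction scheme consistent with this description (not necessarily the authors' exact algorithm) converts the beam direction that maximizes the sensor's received signal, together with the acoustic time of flight, into Cartesian probe-frame coordinates:

```python
import math

# Hedged sketch: the US probe sweeps beams over the volume; the beam direction
# at peak sensor reception gives azimuth/elevation, and the time of flight
# gives range. The sound speed and angle convention are assumed values.

SPEED_OF_SOUND = 1540.0  # m/s, common soft-tissue assumption

def sensor_position(azimuth_rad, elevation_rad, time_of_flight_s):
    """Spherical (beam angles + range) to Cartesian probe-frame coordinates."""
    r = SPEED_OF_SOUND * time_of_flight_s
    x = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    y = r * math.sin(elevation_rad)
    z = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    return x, y, z

# Beam at 10 deg azimuth, 5 deg elevation, echo after ~65 microseconds (~0.1 m)
print(sensor_position(math.radians(10), math.radians(5), 65e-6))
```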
NASA Astrophysics Data System (ADS)
Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady
2018-04-01
An electronic tongue is a sensor array that aims to discriminate and analyze complex media, such as food and beverages, on the basis of chemometric approaches to data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background for solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied to the assessment of fruit juices, green tea, beer and alcoholic drinks, which were classified according to their origin, brand and style. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by varying the ionophores, the Fe(III) concentration and the sample dilution. The results obtained are compared with other electronic tongue systems reported for the same subjects.
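The principal component analysis step used for multisensor signal treatment and visualization can be sketched in a few lines; the sensor responses below are synthetic stand-ins, not real electronic-tongue data.

```python
import numpy as np

# Minimal PCA sketch of the multisensor signal treatment described above.
# Rows are samples (e.g., juices/beers), columns are potentiometric sensors.

rng = np.random.default_rng(0)
responses = rng.normal(size=(12, 6))          # 12 samples x 6 sensors (synthetic)

X = responses - responses.mean(axis=0)        # mean-center each sensor channel
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T                         # project onto first two PCs

explained = (S**2 / np.sum(S**2))[:2]
print("variance explained by PC1, PC2:", explained)
print("2-D scores for visualization:\n", scores)
# Plotting `scores` color-coded by brand/style gives the discrimination maps
# discussed in the review.
```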
Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady
2018-01-01
An electronic tongue is a sensor array that aims to discriminate and analyze complex media, such as food and beverages, on the basis of chemometric approaches to data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background for solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied to the assessment of fruit juices, green tea, beer, and alcoholic drinks, which were classified according to their origin, brand, and style. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by varying the ionophores, the Fe(III) concentration, and the sample dilution. The results obtained are compared with other electronic tongue systems reported for the same subjects.
Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady
2018-01-01
An electronic tongue is a sensor array that aims to discriminate and analyze complex media, such as food and beverages, on the basis of chemometric approaches to data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background for solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied to the assessment of fruit juices, green tea, beer, and alcoholic drinks, which were classified according to their origin, brand, and style. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by varying the ionophores, the Fe(III) concentration, and the sample dilution. The results obtained are compared with other electronic tongue systems reported for the same subjects. PMID:29740577
Ma, Christina Zong-Hao; Wong, Duo Wai-Chi; Lam, Wing Kai; Wan, Anson Hong-Ping; Lee, Winson Chiu-Chun
2016-03-25
Falls and fall-induced injuries are major global public health problems, and balance and gait disorders are the second leading cause of falls. Inertial motion sensors and force sensors have been widely used to monitor both static and dynamic balance performance. Based on the detected performance, instant visual, auditory, electrotactile and vibrotactile biofeedback can be provided to augment the somatosensory input and enhance balance control. This review aims to synthesize the research examining the effect of biofeedback systems, with wearable inertial motion sensors and force sensors, on balance performance. Randomized and non-randomized clinical trials were included in this review, and all studies were evaluated for methodological quality. Sample characteristics, device design and study characteristics were summarized. Most previous studies suggested that biofeedback devices were effective in enhancing static and dynamic balance in healthy young and older adults and in patients with balance and gait disorders. Attention should be paid to the choice of appropriate types of sensors and biofeedback for different intended purposes. Maximizing the computing capacity of the micro-processor, while minimizing the size of the electronic components, appears to be the future direction for optimizing these devices. Wearable balance-improving devices have the potential to serve as balance aids in daily life, both indoors and outdoors.
Ma, Christina Zong-Hao; Wong, Duo Wai-Chi; Lam, Wing Kai; Wan, Anson Hong-Ping; Lee, Winson Chiu-Chun
2016-01-01
Falls and fall-induced injuries are major global public health problems, and balance and gait disorders are the second leading cause of falls. Inertial motion sensors and force sensors have been widely used to monitor both static and dynamic balance performance. Based on the detected performance, instant visual, auditory, electrotactile and vibrotactile biofeedback can be provided to augment the somatosensory input and enhance balance control. This review aims to synthesize the research examining the effect of biofeedback systems, with wearable inertial motion sensors and force sensors, on balance performance. Randomized and non-randomized clinical trials were included in this review, and all studies were evaluated for methodological quality. Sample characteristics, device design and study characteristics were summarized. Most previous studies suggested that biofeedback devices were effective in enhancing static and dynamic balance in healthy young and older adults and in patients with balance and gait disorders. Attention should be paid to the choice of appropriate types of sensors and biofeedback for different intended purposes. Maximizing the computing capacity of the micro-processor, while minimizing the size of the electronic components, appears to be the future direction for optimizing these devices. Wearable balance-improving devices have the potential to serve as balance aids in daily life, both indoors and outdoors. PMID:27023558
Health monitoring of offshore structures using wireless sensor network: experimental investigations
NASA Astrophysics Data System (ADS)
Chandrasekaran, Srinivasan; Chitambaram, Thailammai
2016-04-01
This paper presents a detailed methodology for deploying a wireless sensor network in offshore structures for structural health monitoring (SHM). Traditional SHM is carried out through visual inspections and wired systems, which are complicated, require larger installation space, and make decommissioning a tedious process. Wireless sensor networks can advance health monitoring through the deployment of scalable, dense sensor networks that require less space and consume less power. The proposed methodology focuses on determining the serviceability status of large floating platforms under environmental loads using wireless sensors. Servers analyze the acquired data for exceedance of threshold values. On detecting a failure condition, the SHM architecture triggers an alarm or early warning in the form of alert messages to the engineer-in-charge on board; emergency response plans can then be activated, minimizing risk and mitigating economic losses from accidents. In the present study, wired and wireless sensors are installed on an experimental model and the acquired structural responses are compared. The wireless system comprises a Raspberry Pi board programmed to transmit the acquired data to the server using a Wi-Fi adapter. The data are then hosted on a webpage for further post-processing, as desired.
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue
2015-01-01
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
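A single "linear feature sensor" of the kind the model postulates can be sketched as a Gabor-like receptive field whose response is the inner product with the image. All parameter values below are illustrative, not the paper's; a full implementation would replicate the sensor across frequencies, phases, orientations and eccentricities and feed the responses to an ideal-observer stage.

```python
import numpy as np

# One linear feature sensor: a Gabor-like receptive field with a given spatial
# frequency, orientation, and phase; its response is the inner product with
# the image patch.

def gabor(half, freq_cpp, theta, phase, sigma):
    """Gabor patch of (2*half+1)^2 pixels; freq in cycles/pixel, theta in radians."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate the carrier axis
    carrier = np.cos(2 * np.pi * freq_cpp * xr + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return carrier * envelope

sensor = gabor(half=16, freq_cpp=0.1, theta=0.0, phase=0.0, sigma=6.0)
patch = np.random.default_rng(1).normal(size=sensor.shape)   # stand-in image patch
response = float(np.sum(sensor * patch))                     # linear sensor response
print(response)   # an ideal Bayesian classifier would pool many such responses
```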
Real-time new satellite product demonstration from microwave sensors and GOES-16 at NRL TC web
NASA Astrophysics Data System (ADS)
Cossuth, J.; Richardson, K.; Surratt, M. L.; Bankert, R.
2017-12-01
The Naval Research Laboratory (NRL) Tropical Cyclone (TC) satellite webpage (https://www.nrlmry.navy.mil/TC.html) provides demonstration analyses of storm imagery to benefit operational TC forecast centers around the world. With the availability of new spectral information provided by GOES-16 satellite data and recent research into improved visualization methods for microwave data, experimental imagery was operationally tested to visualize the structural changes of TCs during the 2017 hurricane season. This presentation provides an introduction to these innovative satellite analysis methods and NRL's next-generation satellite analysis system (the Geolocated Information Processing System, GeoIPS), and demonstrates the added value of additional spectral frequencies when monitoring storms in near-real-time.
Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.
Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell
2011-06-01
This paper presents the design of a tele-robotic microsurgical platform built for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery
Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell
2013-01-01
This paper presents the design of a tele-robotic microsurgical platform built for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information. PMID:24398557
Point Cloud Generation from sUAS-Mounted iPhone Imagery: Performance Analysis
NASA Astrophysics Data System (ADS)
Ladai, A. D.; Miller, J.
2014-11-01
The rapidly growing use of sUAS technology and fast sensor developments continuously inspire mapping professionals to experiment with low-cost airborne systems. Smartphones have all the sensors used in modern airborne surveying systems, including GPS, IMU, camera, etc. Of course, the performance level of these sensors differs by orders of magnitude, yet it is intriguing to assess the potential of using inexpensive sensors installed on sUAS platforms for topographic applications. This paper focuses on the quality analysis of point clouds generated from overlapping images acquired by an iPhone 5s mounted on a sUAS platform. To support the investigation, test data were acquired over an area with complex topography and varying vegetation. In addition, extensive ground control, including GCPs and transects, was collected with GPS and traditional geodetic surveying methods. The statistical and visual analysis is based on a comparison of the UAS data and the reference dataset. The results of the evaluation provide a realistic measure of data acquisition system performance, since after a successful data collection the main question is always the reliability and accuracy of the georeferenced data. The paper also gives a recommendation for a data processing workflow to achieve the best quality in the final products: the digital terrain model and the orthophoto mosaic.
Thermal Image Sensing Model for Robotic Planning and Search.
Castro Jiménez, Lídice E; Martínez-García, Edgar A
2016-08-08
This work presents a search planning system for a rolling robot to find a source of infrared (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made passive IR visual sensor whose capability for detecting radiation spectra was experimentally characterized. The sensor data were fitted with an exponential model that estimates distance as a function of the IR image's intensity, and a polynomial model that estimates temperature as a function of IR intensity. The two models are combined to deduce an exact nonlinear distance-temperature relation. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in reaching the goal while avoiding collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission: a sine function produces attractive accelerations toward the IR source, while a cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
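The two fitted sensor models described here (exponential distance versus intensity, polynomial temperature versus intensity) can be sketched as follows; the coefficients are placeholders, not the paper's fitted values.

```python
import math

# Hedged sketch of the two sensor models: distance falls off exponentially
# with IR image intensity; temperature is a polynomial in intensity. All
# coefficients below are assumed, for illustration only.

A, B = 5.0, 0.02              # assumed exponential-fit coefficients
C2, C1, C0 = 1e-4, 0.3, 20.0  # assumed polynomial-fit coefficients

def distance_m(intensity):
    """Distance to the IR source as an exponential function of intensity."""
    return A * math.exp(-B * intensity)

def temperature_c(intensity):
    """Source temperature as a polynomial function of intensity."""
    return C2 * intensity**2 + C1 * intensity + C0

for i in (50, 120, 200):
    print(i, round(distance_m(i), 2), "m", round(temperature_c(i), 1), "C")
# Eliminating `intensity` between the two models yields the combined
# distance-temperature relation mentioned in the abstract.
```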
Membrane-mirror-based autostereoscopic display for tele-operation and teleprescence applications
NASA Astrophysics Data System (ADS)
McKay, Stuart; Mair, Gordon M.; Mason, Steven; Revie, Kenneth
2000-05-01
An autostereoscopic display for telepresence and tele-operation applications has been developed at the University of Strathclyde in Glasgow, Scotland. The research is a collaborative effort between the Imaging Group and the Transparent Telepresence Research Group, both based at Strathclyde. A key component of the display is the directional screen; a 1.2-m diameter Stretchable Membrane Mirror is currently used. This patented technology enables large-diameter, small f-number mirrors to be produced at a fraction of the cost of conventional optics. Another key element of the present system is an anthropomorphic and anthropometric stereo camera sensor platform. Thus, in addition to mirror development, research areas include sensor platform design focused on sight, hearing and smell, telecommunications, display systems for visual, aural and other senses, tele-operation, and augmented reality. The sensor platform is located at the remote site and transmits live video to the home location. Applications for this technology are as diverse as they are numerous, ranging from bomb disposal and other hazardous-environment applications to tele-conferencing, sales, education and entertainment.
Phenoliner: A New Field Phenotyping Platform for Grapevine Research
Kicherer, Anna; Herzog, Katja; Bendel, Nele; Klück, Hans-Christian; Backhaus, Andreas; Wieland, Markus; Klingbeil, Lasse; Läbe, Thomas; Hohl, Christian; Petry, Willi; Kuhlmann, Heiner; Seiffert, Udo; Töpfer, Reinhard
2017-01-01
In grapevine research the acquisition of phenotypic data is largely restricted to the field due to the crop's perennial nature and size. The methodologies used to assess morphological traits and phenology are mainly limited to visual scoring, and some measurements for biotic and abiotic stress, as well as for quality assessment, are done by invasive measures. Newly evolving sensor technologies provide the opportunity to perform non-destructive evaluations of phenotypic traits using different field phenotyping platforms. One of the biggest technical challenges for field phenotyping of grapevines is the varying light conditions and background. In the present study the Phenoliner is presented, a novel type of robust field phenotyping platform. The vehicle is based on a grape harvester following the concept of a moveable tunnel. The tunnel is equipped with different sensor systems (an RGB and NIR camera system, a hyperspectral camera, RTK-GPS, and an orientation sensor) and an artificial broadband light source. It is independent of external light conditions and, in combination with an artificial background, the Phenoliner enables standardised acquisition of high-quality, geo-referenced sensor data. PMID:28708080
Phenoliner: A New Field Phenotyping Platform for Grapevine Research.
Kicherer, Anna; Herzog, Katja; Bendel, Nele; Klück, Hans-Christian; Backhaus, Andreas; Wieland, Markus; Rose, Johann Christian; Klingbeil, Lasse; Läbe, Thomas; Hohl, Christian; Petry, Willi; Kuhlmann, Heiner; Seiffert, Udo; Töpfer, Reinhard
2017-07-14
In grapevine research the acquisition of phenotypic data is largely restricted to the field due to the crop's perennial nature and size. The methodologies used to assess morphological traits and phenology are mainly limited to visual scoring, and some measurements for biotic and abiotic stress, as well as for quality assessment, are done by invasive measures. Newly evolving sensor technologies provide the opportunity to perform non-destructive evaluations of phenotypic traits using different field phenotyping platforms. One of the biggest technical challenges for field phenotyping of grapevines is the varying light conditions and background. In the present study the Phenoliner is presented, a novel type of robust field phenotyping platform. The vehicle is based on a grape harvester following the concept of a moveable tunnel. The tunnel is equipped with different sensor systems (an RGB and NIR camera system, a hyperspectral camera, RTK-GPS, and an orientation sensor) and an artificial broadband light source. It is independent of external light conditions and, in combination with an artificial background, the Phenoliner enables standardised acquisition of high-quality, geo-referenced sensor data.
Black light - How sensors filter spectral variation of the illuminant
NASA Technical Reports Server (NTRS)
Brainard, David H.; Wandell, Brian A.; Cowan, William B.
1989-01-01
Visual sensor responses may be used to classify objects on the basis of their surface reflectance functions. In a color image, the image data are represented as a vector of sensor responses at each point in the image. This vector depends both on the surface reflectance functions and on the spectral power distribution of the ambient illumination. Algorithms designed to classify objects on the basis of their surface reflectance functions typically attempt to overcome the dependence of the sensor responses on the illuminant by integrating sensor data collected from multiple surfaces. In machine vision applications, it is shown that it is often possible to design the sensor spectral responsivities so that the vector direction of the sensor responses does not depend upon the illuminant. The conditions under which this is possible are given and an illustrative calculation is performed. In biological systems, where the sensor responsivities are fixed, it is shown that some changes in the illumination cause no change in the sensor responses. Such changes in illuminant are called black illuminants. It is possible to express any illuminant as the sum of two unique components. One component is a black illuminant. The second component is called the visible component. The visible component of an illuminant completely characterizes the effect of the illuminant on the vector of sensor responses.
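The black/visible decomposition has a compact linear-algebra form: the visible component of an illuminant is its projection onto the row space of the sensor responsivity matrix, and the black component is the residual, which produces zero sensor response. The sketch below verifies this with synthetic spectra (the responsivities and illuminant are random stand-ins, not measured data).

```python
import numpy as np

# Worked sketch of the black/visible illuminant decomposition.

rng = np.random.default_rng(2)
R = rng.random((3, 31))   # 3 sensors x 31 wavelength samples (synthetic)
e = rng.random(31)        # an illuminant spectral power distribution (synthetic)

P = R.T @ np.linalg.inv(R @ R.T) @ R   # orthogonal projector onto row space of R
e_visible = P @ e                      # the part the sensors can "see"
e_black = e - e_visible                # the black component

print(np.allclose(R @ e_black, 0.0))        # True: black component is invisible
print(np.allclose(R @ e, R @ e_visible))    # True: responses depend only on the
                                            # visible component
```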
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
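The stochastic parallel gradient descent loop described above can be sketched compactly: dither all deformable-mirror actuators at once, compare the image-quality metric for the two signed perturbations, and step along the estimated gradient. The metric here is a synthetic stand-in for the mean frame intensity, and all loop constants are assumed values.

```python
import numpy as np

# Compact two-sided SPGD sketch; only two metric readouts per iteration are
# needed regardless of the number of actuators.

rng = np.random.default_rng(3)
n_actuators = 37
u = np.zeros(n_actuators)                    # deformable-mirror command vector
gain, amp = 2.0, 0.05                        # assumed loop gain and dither amplitude
u_opt = np.linspace(-0.2, 0.2, n_actuators)  # hidden optimum (unknown in practice)

def metric(cmd):
    """Stand-in image-quality metric; peaks (at 0) when cmd equals u_opt."""
    return -np.sum((cmd - u_opt) ** 2)

print("before:", round(metric(u), 4))
for _ in range(500):
    delta = amp * rng.choice([-1.0, 1.0], size=n_actuators)  # random +/- dither
    dJ = metric(u + delta) - metric(u - delta)               # two metric readouts
    u += gain * dJ * delta                                   # parallel gradient step
print("after: ", round(metric(u), 4))   # climbs toward the metric's maximum
```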
NASA Technical Reports Server (NTRS)
Imhoff, Marc; Lawrence, William; Condit, Richard; Wright, Joseph; Johnson, Patrick; Holford, Warren; Hyer, Joseph; May, Lisa; Carson, Steven
2000-01-01
A synthetic aperture radar sensor operating in 5 bands between 80 and 120 MHz was flown over forested areas in the canal zone of the Republic of Panama in an experiment to measure biomass in heavy tropical forests. The sensor is a pulse coherent SAR flown on a small aircraft and oriented straight down. The doppler history is processed to collect data on the ground in rectangular cells of varying size over a range of incidence angles fore and aft of nadir (+45 to - 45 degrees). Sensor data consists of 5 frequency bands with 20 incidence angles per band. Sensor data for over 12+ sites were collected with forest stands having biomass densities ranging from 50 to 300 tons/ha dry above ground biomass. Results are shown exploring the biomass saturation thresholds using these frequencies, the system design is explained, and preliminary attempts at data visualization using this unique sensor design are described.
Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan
2016-11-15
Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, these motion capture tools suffer from a lack of accuracy in estimating joint angles, which can lead to wrong data interpretation. In this study, we propose a real-time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the accuracy of joint angle estimation. The fusion outcome was compared to angles measured using a goniometer. The fusion output gives a better estimate than the inertial measurement units and Kinect outputs alone: we noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future work, to our serious game for musculoskeletal rehabilitation.
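The paper fuses the two orientation sources with an extended Kalman filter; as a simpler illustration of the same idea, the sketch below blends an inertial quaternion (smooth but drift-prone) with a visual one (noisy but drift-free) via SLERP, with an assumed trust weight. It is a complementary-filter stand-in, not the authors' EKF.

```python
import numpy as np

# Simplified orientation fusion: blend two unit quaternions with spherical
# linear interpolation (SLERP). The trust weight is an assumed constant.

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                  # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:               # nearly parallel: linear blend is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

q_imu = np.array([0.990, 0.100, 0.0, 0.0])     # drift-prone but smooth (w, x, y, z)
q_kinect = np.array([0.995, 0.060, 0.0, 0.0])  # noisy but drift-free
q_fused = slerp(q_imu, q_kinect, 0.2)          # assumed trust weight toward vision
print(q_fused)
```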
Light-Addressable Potentiometric Sensors for Quantitative Spatial Imaging of Chemical Species.
Yoshinobu, Tatsuo; Miyamoto, Ko-Ichiro; Werner, Carl Frederik; Poghossian, Arshak; Wagner, Torsten; Schöning, Michael J
2017-06-12
A light-addressable potentiometric sensor (LAPS) is a semiconductor-based chemical sensor, in which a measurement site on the sensing surface is defined by illumination. This light addressability can be applied to visualize the spatial distribution of pH or the concentration of a specific chemical species, with potential applications in the fields of chemistry, materials science, biology, and medicine. In this review, the features of this chemical imaging sensor technology are compared with those of other technologies. Instrumentation, principles of operation, and various measurement modes of chemical imaging sensor systems are described. The review discusses and summarizes state-of-the-art technologies, especially with regard to the spatial resolution and measurement speed; for example, a high spatial resolution in a submicron range and a readout speed in the range of several tens of thousands of pixels per second have been achieved with the LAPS. The possibility of combining this technology with microfluidic devices and other potential future developments are discussed.
Sensory-based expert monitoring and control
NASA Astrophysics Data System (ADS)
Yen, Gary G.
1999-03-01
Field operators use their eyes, ears, and nose to detect process behavior and to trigger corrective control actions. For instance, in daily practice the experienced operator in sulfuric acid treatment of phosphate rock may observe froth color or bubble character to control process material in-flow, or may use the acoustic signature of cavitation or boiling/flashing to increase or decrease material flow rates in tank levels. By contrast, process control computers continue to be limited to acting on P, T, F, and A signals. Yet there is sufficient evidence from the field that visual and acoustic information can be used for control and identification. Smart in-situ sensors have provided a potential mechanism for factory automation with promising industrial applicability. In response to these critical needs, a generic, structured health monitoring approach is proposed. The system assumes a given sensor suite will act as an on-line health and usage monitor and, at best, provide real-time control autonomy. The sensor suite can incorporate various types of sensory devices, from vibration accelerometers, directional microphones, machine vision CCDs and pressure gauges to temperature indicators. The decision can be shown on a visual on-board display or fed to the control block to invoke controller reconfiguration.
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)
1998-01-01
Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
Theory research of seam recognition and welding torch pose control based on machine vision
NASA Astrophysics Data System (ADS)
Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang
2017-03-01
At present, the automation requirements for welding are becoming more demanding, so this paper proposes a method of extracting welding information with a vision sensor, and a simulation has been conducted in MATLAB. In addition, in order to improve the quality of robotic automatic welding, an information retrieval method for welding torch pose control by visual sensor is developed. Considering the demands of welding technology and engineering practice, the relevant coordinate systems and variables are strictly defined, a mathematical model of the welding pose is established, and its feasibility is verified by MATLAB simulation. This work lays a foundation for the development of a high-precision, high-quality off-line programming system for welding.
Iowa Flood Information System: Towards Integrated Data Management, Analysis and Visualization
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.; Goska, R.; Mantilla, R.; Weber, L. J.; Young, N.
2012-04-01
The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, short-term and seasonal flood forecasts, flood-related data, information, and interactive visualizations for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return-period values, and to flooding scenarios with contributions from multiple rivers. Real-time and historical data on water levels, gauge heights, and rainfall conditions are available in the IFIS by streaming data from automated IFC bridge sensors, USGS stream gauges, NEXRAD radars, and NWS forecasts. Simple 2D and 3D interactive visualizations in the IFIS make the data more understandable to the general public. Users are able to filter data sources for their communities and selected rivers. The data and information in the IFIS are also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms, including tablets and mobile devices. The IFIS includes a rainfall-runoff forecast model to provide a five-day flood risk estimate for around 500 communities in Iowa. Multiple view modes in the IFIS accommodate different user types, from the general public to researchers and decision makers, by providing different levels of tools and detail. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding conditions along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize flood damage. This presentation provides an overview and live demonstration of the tools and interfaces in the IFIS developed to date to provide a platform for one-stop access to flood-related data, visualizations, flood conditions, and forecasts.
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy-protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored to efficiently reduce the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
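A software analogue of the chip's region-pixelation primitive, for intuition only; the real operation happens in the mixed-signal focal-plane array, not in software:

```python
import numpy as np

def pixelate_region(img, x, y, w, h, block=16):
    """Replace each block-by-block tile inside a region of interest of a
    grayscale frame with its mean, hiding identity while keeping coarse
    scene activity."""
    roi = img[y:y + h, x:x + w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = roi[by:by + block, bx:bx + block]
            tile[...] = tile.mean()
    out = img.copy()
    out[y:y + h, x:x + w] = roi.astype(img.dtype)
    return out
```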
2009-02-06
that could monitor sensors, evaluate environmental conditions, and control visual and sound devices was conducted. The home automation products used...the prototype system. Use of off-the-shelf home automation products allowed the implementation of an egress control prototype suitable for test and
Code of Federal Regulations, 2013 CFR
2013-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
Code of Federal Regulations, 2014 CFR
2014-07-01
... the atmosphere. (ii) Car-seal or lock-and-key valve closures. Secure any bypass line valve in the closed position with a car-seal or a lock-and-key type configuration. You must visually inspect the seal... sensor. (vii) At least monthly, inspect components for integrity and electrical connections for...
NASA Tech Briefs, October 2007
NASA Technical Reports Server (NTRS)
2007-01-01
Topics covered include: Wirelessly Interrogated Position or Displacement Sensors; Ka-Band Radar Terminal Descent Sensor; Metal/Metal Oxide Differential Electrode pH Sensors; Improved Sensing Coils for SQUIDs; Inductive Linear-Position Sensor/Limit-Sensor Units; Hilbert-Curve Fractal Antenna With Radiation-Pattern Diversity; Single-Camera Panoramic-Imaging Systems; Interface Electronic Circuitry for an Electronic Tongue; Inexpensive Clock for Displaying Planetary or Sidereal Time; Efficient Switching Arrangement for (N + 1)/N Redundancy; Lightweight Reflectarray Antenna for 7.115 and 32 GHz; Opto-Electronic Oscillator Using Suppressed Phase Modulation; Alternative Controller for a Fiber-Optic Switch; Strong, Lightweight, Porous Materials; Nanowicks; Lightweight Thermal Protection System for Atmospheric Entry; Rapid and Quiet Drill; Hydrogen Peroxide Concentrator; MMIC Amplifiers for 90 to 130 GHz; Robot Would Climb Steep Terrain; Measuring Dynamic Transfer Functions of Cavitating Pumps; Advanced Resistive Exercise Device; Rapid Engineering of Three-Dimensional, Multicellular Tissues With Polymeric Scaffolds; Resonant Tunneling Spin Pump; Enhancing Spin Filters by Use of Bulk Inversion Asymmetry; Optical Magnetometer Incorporating Photonic Crystals; WGM-Resonator/Tapered-Waveguide White-Light Sensor Optics; Raman-Suppressing Coupling for Optical Parametric Oscillator; CO2-Reduction Primary Cell for Use on Venus; Cold Atom Source Containing Multiple Magneto-Optical Traps; POD Model Reconstruction for Gray-Box Fault Detection; System for Estimating Horizontal Velocity During Descent; Software Framework for Peer Data-Management Services; Autogen Version 2.0; Tracking-Data-Conversion Tool; NASA Enterprise Visual Analysis; Advanced Reference Counting Pointers for Better Performance; C Namelist Facility; and Efficient Mosaicking of Spitzer Space Telescope Images.
Practical life log video indexing based on content and context
NASA Astrophysics Data System (ADS)
Tancharoen, Datchakorn; Yamasaki, Toshihiko; Aizawa, Kiyoharu
2006-01-01
Today, multimedia information has gained an important role in daily life, and people can use imaging devices to capture their visual experiences. In this paper, we present our personal Life Log system, which records personal experiences in the form of wearable video and environmental data; in addition, an efficient retrieval system is demonstrated to recall the desired media. We summarize practical video indexing techniques based on Life Log content and context, detecting talking scenes using audio/visual cues and semantic key frames from GPS data. Voice annotation is also demonstrated as a practical indexing method. Moreover, we apply body media sensors to continuously record lifestyle data and use these data to index the semantic key frames. In the experiments, we demonstrate various video indexing results that provide semantic content, and show Life Log visualizations for examining one's personal life effectively.
How do plants see the world? - UV imaging with a TiO2 nanowire array by artificial photosynthesis.
Kang, Ji-Hoon; Leportier, Thibault; Park, Min-Chul; Han, Sung Gyu; Song, Jin-Dong; Ju, Hyunsu; Hwang, Yun Jeong; Ju, Byeong-Kwon; Poon, Ting-Chung
2018-05-10
The concept of plant vision refers to the fact that plants are receptive to their visual environment, although the mechanism involved is quite distinct from the human visual system. The mechanism in plants is not well understood and has yet to be fully investigated. In this work, we have exploited the properties of TiO2 nanowires as a UV sensor to simulate the phenomenon of photosynthesis in order to come one step closer to understanding how plants see the world. To the best of our knowledge, this study is the first approach to emulate and depict plant vision. We have emulated the visual map perceived by plants with a single-pixel imaging system combined with a mechanical scanner. The image acquisition has been demonstrated for several electrolyte environments, in both transmissive and reflective configurations, in order to explore the different conditions in which plants perceive light.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
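The band-pass, center-surround behavior described can be imitated with a difference-of-Gaussians filter; the sketch below is a generic illustration of that response, not the paper's optical implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_enhance(img, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians band-pass: subtracting a wide (surround)
    blur from a narrow (center) blur enhances edges and compresses the
    low-frequency background."""
    img = np.asarray(img, dtype=float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
```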
Research on pressure tactile sensing technology based on fiber Bragg grating array
NASA Astrophysics Data System (ADS)
Song, Jinxue; Jiang, Qi; Huang, Yuanyang; Li, Yibin; Jia, Yuxi; Rong, Xuewen; Song, Rui; Liu, Hongbin
2015-09-01
A pressure tactile sensor based on a fiber Bragg grating (FBG) array is introduced in this paper, and a numerical simulation of its elastic body was performed with finite element software (ANSYS). On the basis of the simulation, FBG strings were embedded in flexible silicone to fabricate the sensor, and a testing system was built. A series of calibration tests was performed with a high-precision universal press machine. The tactile sensor array perceives external pressure, which is demodulated by a fiber-grating interrogator, and three-dimensional plots were programmed to visually display the position and magnitude of the load. In addition, a dynamic contact experiment was conducted to simulate a robot encountering objects in an unknown environment. The experimental results show that the sensor has good linearity, repeatability, and dynamic response, with a pressure sensitivity of 0.03 nm/N. The sensor also offers immunity to electromagnetic interference, good flexibility, a simple structure, and low cost, and is expected to be used in wearable artificial skin in the future.
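Given the reported linear sensitivity of 0.03 nm/N, converting a measured Bragg-wavelength shift to force is a one-line calculation; this is a sketch under the stated linearity assumption:

```python
SENSITIVITY_NM_PER_N = 0.03  # reported pressure sensitivity of the FBG array

def force_from_shift(lambda_measured_nm, lambda_rest_nm):
    """Convert a Bragg-wavelength shift (nm) into applied force (N)."""
    return (lambda_measured_nm - lambda_rest_nm) / SENSITIVITY_NM_PER_N

# example: a 0.15 nm shift corresponds to a 5 N load
print(force_from_shift(1550.15, 1550.00))  # -> 5.0 (approximately)
```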
Design and performance of an integrated ground and space sensor web for monitoring active volcanoes.
NASA Astrophysics Data System (ADS)
Lahusen, Richard; Song, Wenzhan; Kedar, Sharon; Shirazi, Behrooz; Chien, Steve; Doubleday, Joshua; Davies, Ashley; Webb, Frank; Dzurisin, Dan; Pallister, John
2010-05-01
An interdisciplinary team of computer, earth and space scientists collaborated to develop a sensor web system for rapid deployment at active volcanoes. The primary goals of this Optimized Autonomous Space In situ Sensorweb (OASIS) are to: 1) integrate complementary space and in situ (ground-based) elements into an interactive, autonomous sensor web; 2) advance sensor web power and communication resource management technology; and 3) enable scalability for the seamless addition of sensors and other satellites into the sensor web. This three-year project began with a rigorous multidisciplinary interchange that resulted in the definition of system requirements to guide the design of the OASIS network and to achieve the stated project goals. Based on those guidelines, we have developed fully self-contained in situ nodes that integrate GPS, seismic, infrasonic and lightning (ash) detection sensors. The nodes in the wireless sensor network are linked to the ground control center through a mesh network that is highly optimized for remote geophysical monitoring. OASIS also features autonomous bidirectional interaction between ground nodes and instruments on the EO-1 space platform through continuous analysis and messaging capabilities at the command and control center. Data from both the in situ sensors and satellite-borne hyperspectral imaging sensors stream into a common database for real-time visualization and analysis by earth scientists. We have successfully completed a field deployment of 15 nodes within the crater and on the flanks of Mount St. Helens, Washington. The deployment demonstrated that sensor web technology facilitates rapid network deployment and that real-time continuous data acquisition can be achieved. We are now optimizing component performance and improving user interaction for additional deployments at erupting volcanoes in 2010.
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking has been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. ZigBee®) are required. To this end, Avaak has designed and implemented an ultra-low-power networking protocol designed to carry large volumes of data through the network. The low-power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to communications, location identification is very desirable; hence location awareness will later be incorporated into the system in the form of time-of-arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak, as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications, some of which are undergoing initial field tests.
Modular multiple sensors information management for computer-integrated surgery.
Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De
2012-09-01
In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensors (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: the Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of surgical intervention, was used as the benchmark. An experimental protocol was designed to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved adequate for neurosurgery requirements. The latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate, and limits the US image frame rate to 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR.
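One common way to enforce temporal consistency between streams is nearest-timestamp matching; the following sketch illustrates the idea generically and is not the ROBOCAST implementation:

```python
import numpy as np

def align_streams(t_tracker, x_tracker, t_frames):
    """For each image-frame timestamp, pick the tracker sample with the
    nearest timestamp and report the residual mismatch in milliseconds."""
    t_tracker = np.asarray(t_tracker, dtype=float)
    x_tracker = np.asarray(x_tracker)
    t_frames = np.asarray(t_frames, dtype=float)
    idx = np.clip(np.searchsorted(t_tracker, t_frames), 1, len(t_tracker) - 1)
    use_prev = np.abs(t_frames - t_tracker[idx - 1]) <= np.abs(t_tracker[idx] - t_frames)
    idx = np.where(use_prev, idx - 1, idx)
    return x_tracker[idx], (t_frames - t_tracker[idx]) * 1e3
```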
Welcome to health information science and systems.
Zhang, Yanchun
2013-01-01
Health Information Science and Systems is an exciting, new, multidisciplinary journal that aims to use computer-science technologies to assist in disease diagnosis, treatment, prediction and monitoring through the modeling, design, development, visualization, integration and management of health-related information. These technologies include information systems, web technologies, data mining, image processing, user interaction and interfaces, and sensors and wireless networking, and are applicable to a wide range of health-related information, including medical data, biomedical data, bioinformatics data, and public health data.
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object whose shape is changed by an external force. The force applied to the target can be estimated from the visual shape changes. However, the shape differences between images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
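A minimal PyTorch sketch of this kind of architecture, with a per-frame CNN encoder feeding an LSTM and a regression head; the layer sizes are illustrative and do not reproduce the authors' model:

```python
import torch
import torch.nn as nn

class VisualForceNet(nn.Module):
    """Sketch: a small CNN encodes each frame, an LSTM models the
    temporal dynamics, and a linear head regresses the interaction force."""
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 32*4*4 = 512 features
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out)                  # force estimate per time step

model = VisualForceNet()
pred = model(torch.randn(2, 8, 3, 64, 64))     # -> shape (2, 8, 1)
```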
Inertial Orientation Trackers with Drift Compensation
NASA Technical Reports Server (NTRS)
Foxlin, Eric M.
2008-01-01
A class of inertial-sensor systems with drift compensation has been invented for use in measuring the orientations of human heads (and perhaps other, similarly sized objects). These systems can be designed to overcome some of the limitations of prior orientation-measuring systems that are based, variously, on magnetic, optical, mechanical-linkage, and acoustical principles. The orientation signals generated by the systems of this invention could be used for diverse purposes, including controlling head-orientation-dependent virtual reality visual displays or enabling persons whose limbs are paralyzed to control machinery by means of head motions. The inventive concept admits of variations too numerous to describe here, making it necessary to limit this description to a typical system, selected aspects of which are illustrated in the figure. A set of sensors is mounted on a bracket on a band or a cap that gently but firmly grips the wearer's head to be tracked. Among the sensors are three drift-sensitive rotation-rate sensors (e.g., integrated-circuit angular-rate-measuring gyroscopes), which put out DC voltages nominally proportional to the rates of rotation about their sensory axes. These sensors are mounted in mutually orthogonal orientations for measuring rates of rotation about the roll, pitch, and yaw axes of the wearer's head. The outputs of these rate sensors are conditioned and digitized, and the resulting data are fed to an integrator module implemented in software in a digital computer. In the integrator module, the angular-rate signals are jointly integrated by any of several established methods to obtain a set of angles that represent approximately the orientation of the head in an external, inertial coordinate system. Because some drift is always present as a component of an angular position computed by integrating the outputs of angular-rate sensors, the orientation signal is processed further in a drift-compensator software module.
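A drift compensator of this general kind can be reduced, in its simplest form, to a complementary filter that blends the integrated rate signal with a drift-free reference; this is a textbook sketch, not the patented design:

```python
def complementary_filter(angle, gyro_rate, angle_ref, dt, k=0.02):
    """One update step: dead-reckon with the rate sensor, then bleed the
    estimate toward a drift-free reference (e.g., inclinometer/compass).
    The gain k trades gyro smoothness against drift suppression."""
    predicted = angle + gyro_rate * dt   # integrate the angular rate
    return (1.0 - k) * predicted + k * angle_ref

# run one instance per axis (roll, pitch, yaw) every sample period dt
```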
Advanced control techniques for teleoperation in earth orbit
NASA Technical Reports Server (NTRS)
Bejczy, A. K.; Brooks, T. L.
1980-01-01
Emerging teleoperation tasks in space invite advancements in teleoperator control technology. This paper briefly summarizes the generic issues related to earth orbital applications of teleoperators, and describes teleoperator control technology development work including visual and non-visual sensors and displays, kinesthetic feedback and computer-aided controls. Performance experiments were carried out using sensor and computer aided controls with promising results which are briefly summarized.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
Noncontact Monitoring of Respiration by Dynamic Air-Pressure Sensor.
Takarada, Tohru; Asada, Tetsunosuke; Sumi, Yoshihisa; Higuchi, Yoshinori
2015-01-01
We have previously reported that a dynamic air-pressure sensor system allows respiratory status to be visually monitored in minimally clothed patients. The dynamic air-pressure sensor measures vital information using changes in air pressure. To utilize this device in the field, we must clarify the influence of clothing conditions on measurement. The present study evaluated the use of the dynamic air-pressure sensor system as a respiratory monitor that can reliably detect changes in breathing patterns irrespective of clothing. Twelve healthy volunteers reclined on a dental chair positioned horizontally, with the sensor pad for measuring air-pressure signals corresponding to respiration placed on the seat back of the dental chair in the central lumbar region. Respiratory measurements were taken under 2 conditions: (a) thinly clothed (subject lying directly on the sensor pad); and (b) thickly clothed (subject lying on the sensor pad covered with a pressure-reducing sheet). Air-pressure signals were recorded, and time-integration values for air pressure during each expiration were calculated. This information was compared with expiratory tidal volume measured simultaneously by a respirometer connected to the subject via face mask. The dynamic air-pressure sensor was able to receive the signal corresponding to respiration regardless of clothing conditions. A strong correlation was identified between expiratory tidal volume and the time-integration values for air pressure during each expiration for all subjects under both clothing conditions (0.840-0.988 for the thinly clothed condition and 0.867-0.992 for the thickly clothed condition). These results show that the dynamic air-pressure sensor is useful for monitoring respiratory physiology irrespective of clothing.
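The per-expiration time-integration step can be expressed compactly; the sketch below assumes expiration onsets and offsets have already been segmented (the segmentation itself is not shown):

```python
import numpy as np

def expiration_integrals(pressure, fs, onsets, offsets):
    """Time-integrate the air-pressure signal over each expiration,
    given onset/offset sample indices, using the trapezoidal rule."""
    return np.array([np.trapz(pressure[a:b], dx=1.0 / fs)
                     for a, b in zip(onsets, offsets)])

def correlation_with_tidal_volume(integrals, tidal_volumes):
    """Pearson correlation against respirometer tidal volumes."""
    return np.corrcoef(integrals, tidal_volumes)[0, 1]
```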
Performance testing of collision-avoidance system for power wheelchairs.
Lopresti, Edmund F; Sharma, Vinod; Simpson, Richard C; Mostowy, L Casimir
2011-01-01
The Drive-Safe System (DSS) is a collision-avoidance system for power wheelchairs designed to support people with mobility impairments who also have visual, upper-limb, or cognitive impairments. The DSS uses a distributed approach to provide an add-on, shared-control, navigation-assistance solution. In this project, the DSS was tested for engineering goals such as sensor coverage, maximum safe speed, maximum detection distance, and power consumption while the wheelchair was stationary or driven by an investigator. Results indicate that the DSS provided uniform, reliable sensor coverage around the wheelchair; detected obstacles as small as 3.2 mm at distances of at least 1.6 m; and attained a maximum safe speed of 4.2 km/h. The DSS can drive reliably as close as 15.2 cm from a wall, traverse doorways as narrow as 81.3 cm without interrupting forward movement, and reduce wheelchair battery life by only 3%. These results have implications for a practical system to support safe, independent mobility for veterans who acquire multiple disabilities during Active Duty or later in life. These tests indicate that a system utilizing relatively low cost ultrasound, infrared, and force sensors can effectively detect obstacles in the vicinity of a wheelchair.
An IR Sensor Based Smart System to Approximate Core Body Temperature.
Ray, Partha Pratim
2017-08-01
The experiment demonstrated herein studies two methods, namely convection and body resistance, to approximate human core body temperature. The proposed system is highly energy efficient, consuming only 165 mW and running on a 5 VDC source. The implemented solution employs an industry-grade IR thermographic sensor along with an ATmega328 breakout board. Ordinarily, the IR sensor is placed 1.5-30 cm away from the human forehead (i.e., non-invasively) and measures raw data in terms of skin and ambient temperature, which is then converted using an appropriate approximation formula to estimate core body temperature. The raw data are plotted, visualized, and stored instantaneously on a local machine by means of two tools, MakerPlot and JAVA-JAR. The test is performed when the human subject is at complete rest and after 10 min of walking. The achieved results are compared with the CoreTemp CM-210 sensor (Terumo, Japan), which differs by 0.7 °F from the average core body temperature obtained by the proposed IR sensor system. With slight modification, the presented model can be connected to a remote Internet of Things cloud service, which may be useful for reporting and predicting the user's core body temperature probabilistically. Such a system could also be worn as a wearable device, for example attached to a hat.
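The skin-plus-ambient to core-temperature conversion might look like the linear sketch below; the coefficients are invented for illustration, as the paper's actual approximation formula is not reproduced here:

```python
# Coefficients below are invented for illustration; the paper's actual
# approximation formula and calibration constants are not given here.
A_SKIN, A_AMB, BIAS = 1.0, -0.12, 5.28

def core_temp_estimate(t_skin_c, t_ambient_c):
    """Approximate core body temperature (deg C) from forehead skin
    temperature and ambient temperature via a linear correction."""
    return A_SKIN * t_skin_c + A_AMB * t_ambient_c + BIAS

print(core_temp_estimate(34.5, 24.0))  # -> 36.9 with these toy weights
```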
ESB-based Sensor Web integration for the prediction of electric power supply system vulnerability.
Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja
2013-08-15
Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application.
A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor
Okino, Toru; Yamahira, Seiji; Yamada, Shota; Hirose, Yutaka; Odagawa, Akihiro; Kato, Yoshihisa; Tanaka, Tsuyoshi
2018-01-01
We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As the UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of corona discharge is demonstrated. UV images overlaid on background scenes are produced by simple on-board background subtraction. The system is capable of imaging UV signals four orders of magnitude weaker than the VIS background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity in strong visible-radiation environments, such as power supply substations. PMID:29361742
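On-board background subtraction and overlay can be mimicked in software as follows; this is a schematic sketch (the actual system performs this on the camera, and the red-channel color mapping here is an assumption):

```python
import numpy as np

def overlay_uv(vis_rgb, uv_frame, uv_background, gain=4.0):
    """Subtract a stored UV background frame, then paint the residual UV
    signal into the red channel of the visible scene."""
    uv = np.clip(uv_frame.astype(float) - uv_background, 0.0, None)
    uv = np.clip(gain * uv / (uv.max() + 1e-9), 0.0, 1.0)
    out = vis_rgb.astype(float).copy()
    out[..., 0] = np.maximum(out[..., 0], 255.0 * uv)  # assumed red highlight
    return out.astype(np.uint8)
```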
Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir
2013-03-01
Inexpensive millimeter-wavelength (MMW) optical digital imaging raises the challenge of evaluating imaging performance and image quality, because the electromagnetic wavelengths and pixel sensor sizes are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performance of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by noise, and that the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, provided that noise reduction is achieved. A demonstration of MMW image improvement through oversampling is presented.
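The noise-reduction prerequisite can be illustrated by frame averaging, where the RMS noise of N averaged reads drops roughly by the square root of N; a toy demonstration with a synthetic signal:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 256))  # stand-in for the clean scene

def noisy_read():
    return truth + rng.normal(scale=0.5, size=truth.size)

single = noisy_read()
averaged = np.mean([noisy_read() for _ in range(16)], axis=0)

rms = lambda x: np.sqrt(np.mean((x - truth) ** 2))
print(rms(single), rms(averaged))  # averaging 16 reads is ~4x (sqrt 16) cleaner
```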
Monitoring Architectural Heritage by Wireless Sensors Networks: San Gimignano — A Case Study
Mecocci, Alessandro; Abrardo, Andrea
2014-01-01
This paper describes a wireless sensor network (WSN) used to monitor the health state of architectural heritage in real-time. The WSN has been deployed and tested on the “Rognosa” tower in the medieval village of San Gimignano, Tuscany, Italy. This technology, being non-invasive, mimetic, and long lasting, is particularly well suited for long term monitoring and on-line diagnosis of the conservation state of heritage buildings. The proposed monitoring system comprises radio-equipped nodes linked to suitable sensors capable of monitoring crucial parameters like: temperature, humidity, masonry cracks, pouring rain, and visual light. The access to data is granted by a user interface for remote control. The WSN can autonomously send remote alarms when predefined thresholds are reached. PMID:24394600
NASA Astrophysics Data System (ADS)
Edwards, Warren S.; Ritchie, Cameron J.; Kim, Yongmin; Mack, Laurence A.
1995-04-01
We have developed a three-dimensional (3D) imaging system using power Doppler (PD) ultrasound (US). This system can be used for visualizing and analyzing the vascular anatomy of parenchymal organs. To create the 3D PD images, we acquired a series of two-dimensional PD images from a commercial US scanner and recorded the position and orientation of each image using a 3D magnetic position sensor. Three-dimensional volumes were reconstructed using specially designed software and then volume rendered for display. We assessed the feasibility and geometric accuracy of our system with various flow phantoms. The system was then tested on a volunteer by scanning a transplanted kidney. The reconstructed volumes of the flow phantom contained less than 1 mm of geometric distortion and the 3D images of the transplanted kidney depicted the segmental, arcuate, and interlobar vessels.
NASA Tech Briefs, October 2005
NASA Technical Reports Server (NTRS)
2005-01-01
Topics covered include: Insect-Inspired Optical-Flow Navigation Sensors; Chemical Sensors Based on Optical Ring Resonators; A Broad-Band Phase-Contrast Wave-Front Sensor; Progress in Insect-Inspired Optical Navigation Sensors; Portable Airborne Laser System Measures Forest-Canopy Height; Deployable Wide-Aperture Array Antennas; Faster Evolution of More Multifunctional Logic Circuits; Video-Camera-Based Position-Measuring System; N-Type delta Doping of High-Purity Silicon Imaging Arrays; Avionics System Architecture Tool; Updated Chemical Kinetics and Sensitivity Analysis Code; Predicting Flutter and Forced Response in Turbomachinery; Upgrades of Two Computer Codes for Analysis of Turbomachinery; Program Facilitates CMMI Appraisals; Grid Visualization Tool; Program Computes Sound Pressures at Rocket Launches; Solar-System Ephemeris Toolbox; Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras; Corrosion-Prevention Capabilities of a Water-Borne, Silicone-Based, Primerless Coating; Sol-Gel Process for Making Pt-Ru Fuel-Cell Catalysts; Making Activated Carbon for Storing Gas; System Regulates the Water Contents of Fuel-Cell Streams; Five-Axis, Three-Magnetic-Bearing Dynamic Spin Rig; Modifications of Fabrication of Vibratory Microgyroscopes; Chamber for Growing and Observing Fungi; Electroporation System for Sterilizing Water; Thermoelectric Air/Soil Energy-Harvesting Device; Flexible Metal-Fabric Radiators; Actuated Hybrid Mirror Telescope; Optical Design of an Optical Communications Terminal; Algorithm for Identifying Erroneous Rain-Gauge Readings; Condition Assessment and End-of-Life Prediction System for Electric Machines and Their Loads; Lightweight Thermal Insulation for a Liquid-Oxygen Tank; Stellar Gyroscope for Determining Attitude of a Spacecraft; and Lifting Mechanism for the Mars Explorer Rover.
Wang, Qi-Xian; Xue, Shi-Fan; Chen, Zi-Han; Ma, Shi-Hui; Zhang, Shengqiang; Shi, Guoyue; Zhang, Min
2017-08-15
In this work, a novel time-resolved ratiometric fluorescent probe based on dual lanthanide (Tb: terbium, and Eu: europium)-doped complexes (Tb/DPA@SiO2-Eu/GMP) has been designed for detecting an anthrax biomarker (dipicolinic acid, DPA), a unique and major component of anthrax spores. In this complex-based probe, Tb/DPA@SiO2 serves as a stable reference signal with green fluorescence, and Eu/GMP acts as a sensitive response signal with red fluorescence for ratiometric fluorescent sensing of DPA. Additionally, the probe exhibits a long fluorescence lifetime, which can significantly reduce autofluorescence interference from biological samples when time-resolved fluorescence measurement is used. More significantly, a paper-based visual sensor for DPA has been devised using filter paper embedded with Tb/DPA@SiO2-Eu/GMP, and we have proved its utility for fluorescent detection of DPA using only a handheld UV lamp. In the presence of DPA, the paper-based visual sensor, illuminated by a handheld UV lamp, shows an obvious fluorescence color change from green to red, which can easily be observed with the naked eye. The paper-based visual sensor is stable, portable, disposable, cost-effective and easy to use. The feasibility of using a smartphone with an easy-to-access color-scanning app as the detection platform for quantitative scanometric assays has also been demonstrated by coupling it with our proposed paper-based visual sensor. This work unveils an effective method for accurate, sensitive and selective monitoring of an anthrax biomarker with background-free and self-calibrating properties.
New type of standalone gas sensors based on dye, thin films, and subwavelength structures
NASA Astrophysics Data System (ADS)
Schnieper, Marc; Davoine, Laurent; Holgado, Miguel; Casquel del Campo, Rafael; Barranco, Angel
2009-02-01
A new gas sensor was developed to provide visual indication of contamination by specific gases such as NO2 and SO2, as well as by UV exposure. The sensor works with a combination of subwavelength structures and specific active dye thin-film layers. The objective is to exploit the optical changes of the dye thin films after exposure together with a custom-designed subwavelength structure; a suitable combination of the two produces a strong color change. The indication should be visible to the human eye. To enhance this visual aspect, we used a reference sensor sealed in a non-contaminated atmosphere. This work was realized within the PHODYE STREP project, a collaboration under the 6th Framework Programme priority Information Society Technologies.
Energy optimization in mobile sensor networks
NASA Astrophysics Data System (ADS)
Yu, Shengwei
Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially because mobility (i.e., locomotion control), routing (i.e., communications) and sensing are characteristics of mobile robots that can be exploited for energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to the energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials between stations in a manufacturing environment. Our research focuses on the investigation and development of distributed optimization algorithms that exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy-harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving it. An optimal solution is obtained by the method. Computer simulations show that mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy-harvesting capability, which makes it a non-convex optimization problem. The non-convexity is tackled with the existing sequential convex approximation method, based on which we propose a novel modified sequential convex approximation procedure with fast convergence. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which also justifies the use of mobility in mobile sensor networks for energy efficiency. For the fourth problem, we include the dynamics of the robotic nodes by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of robotic nodes and the optimal network links, questions not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to mobile visual sensor networks, which are useful in many applications.
We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network
NASA Astrophysics Data System (ADS)
Ong, Jia Jan; Ang, L.-M.; Seng, K. P.
This paper presents a practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A conventional WVSN consists of visual nodes that capture video and transmit it to the base station without processing; limited network bandwidth constrains real-time video streaming from remote visual nodes over wireless links. Here, three levels of DWT filtering are implemented to process the image captured by the camera. Once all wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station, reducing the power required for transmission. When necessary, transmitting all wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network based on the Ember EM250 chip.
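A lifting-scheme DWT of this kind can be sketched with the Haar wavelet; the paper's filter choice may differ, so this is a generic illustration of transmitting only the LL band after three levels:

```python
import numpy as np

def lift_haar(a, axis):
    """One Haar lifting step along the given axis: predict then update."""
    even = np.take(a, np.arange(0, a.shape[axis], 2), axis=axis).astype(float)
    odd = np.take(a, np.arange(1, a.shape[axis], 2), axis=axis).astype(float)
    detail = odd - even            # predict: high-pass band
    approx = even + detail / 2.0   # update: low-pass band
    return approx, detail

def dwt2(img):
    """Single 2-D DWT level: lift along rows, then along columns."""
    lo, hi = lift_haar(img, axis=1)
    ll, lh = lift_haar(lo, axis=0)
    hl, hh = lift_haar(hi, axis=0)
    return ll, (lh, hl, hh)

frame = np.random.default_rng(0).random((64, 64))  # stand-in camera frame
ll, details = frame, []
for _ in range(3):                 # three decomposition levels
    ll, d = dwt2(ll)
    details.append(d)
# transmitting `ll` alone (8x8 here, 1/64 of the pixels) yields an approximate
# image at the base station; sending `details` as well restores full detail
```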
Climate Outreach Using Regional Coastal Ocean Observing System Portals
NASA Astrophysics Data System (ADS)
Anderson, D. M.; Hernandez, D. L.; Wakely, A.; Bochenek, R. J.; Bickel, A.
2015-12-01
Coastal oceans are dynamic, changing environments affected by processes ranging from seconds to millennia. On the east and west coast of the U.S., regional observing systems have deployed and sustained a remarkable diverse array of observing tools and sensors. Data portals visualize and provide access to real-time sensor networks. Portals have emerged as an interactive tool for educators to help students explore and understand climate. Bringing data portals to outreach events, into classrooms, and onto tablets and smartphones enables educators to address topics and phenomena happening right now. For example at the 2015 Charleston Science Technology Engineering and Math (STEM) Festival, visitors navigated the SECOORA (Southeast Coastal Ocean Observing regional Association) data portal to view the real-time marine meteorological conditions off South Carolina. Map-based entry points provide an intuitive interface for most students, an array of time series and other visualizations depict many of the essential principles of climate science manifest in the coastal zone, and data down-load/ extract options provide access to the data and documentation for further inquiry by advanced users. Beyond the exposition of climate principles, the portal experience reveals remarkable technologies in action and shows how the observing system is enabled by the activity of many different partners.
Present and future of vision systems technologies in commercial flight operations
NASA Astrophysics Data System (ADS)
Ward, Jim
2016-05-01
The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.
Development of the navigation system for visually impaired.
Harada, Tetsuya; Kaneko, Yuki; Hirahara, Yoshiaki; Yanashima, Kenji; Magatani, Kazushige
2004-01-01
A white cane is a typical support instrument for the visually impaired, used to detect obstacles while walking. In areas where they have a mental map, visually impaired people can walk with a white cane without the help of others. However, they cannot walk independently in unknown areas, even with a white cane, because a white cane is a device for detecting obstacles, not for navigating a route. We are developing an indoor navigation system for the visually impaired. In Japan, colored guide lines to a destination are sometimes provided for sighted people; these lines are attached to the floor, and the destination can be reached by walking along one of them. In our system, a newly developed white cane senses a colored guide line and notifies the user by vibration; the system recognizes the color of the line on the floor by an optical sensor attached to the white cane. To guide the user still more smoothly, infrared beacons (optical beacons) that can provide voice guidance are also used.
A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
Statistical data mining of streaming motion data for fall detection in assistive environments.
Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P
2011-01-01
The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using motion, audio or video sensors either on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors is data streams that require real-time recognition, especially in emergency situations, so traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data captured from the user's environment, using overhead cameras, along with motion data collected from accelerometers on the subject's body, are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system, along with an initial evaluation of the achieved accuracy in detecting falls.
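A toy version of the accelerometer side of such a detector, using the common impact-then-stillness heuristic; the thresholds are illustrative, not the paper's trained model:

```python
import numpy as np

def detect_fall(acc_xyz, fs, impact_g=2.5, still_g=0.3, window_s=1.0):
    """Flag a fall when an acceleration spike (impact) is followed by a
    near-still period at ~1 g (subject lying motionless)."""
    mag = np.linalg.norm(acc_xyz, axis=1) / 9.81   # magnitude in units of g
    w = int(window_s * fs)
    for i in np.flatnonzero(mag > impact_g):
        after = mag[i + w : i + 2 * w]             # skip w samples, then check
        if after.size and np.all(np.abs(after - 1.0) < still_g):
            return True, i                         # fall detected at sample i
    return False, None
```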
Integration of Kinect and Low-Cost Gnss for Outdoor Navigation
NASA Astrophysics Data System (ADS)
Pagliaria, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.
2016-06-01
Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about $200. The characteristics and the potentiality of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating from an indoor to an outdoor environment (and vice versa) is very demanding because the sensors that work properly in one environment are typically unsuitable in the other one. In this sense the Kinect could represent an interesting device for bridging the navigation solution between outdoor and indoor environments. In this work the accuracy and the field of application of the new generation of Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of different materials with respect to the emitted rays. Moreover, an integrated system with a low-cost GNSS receiver has been studied, with the aim of taking advantage of the GNSS positioning when the satellite visibility conditions are good enough. A kinematic test performed outdoors using a Kinect sensor and a GNSS receiver is presented here.
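A minimal sketch of the integration idea, under the assumption of a simple variance-weighted blend (not the authors' actual algorithm): the GNSS fix is trusted when satellite visibility is good and effectively ignored otherwise, letting the Kinect odometry carry the solution indoors.

```python
# Minimal sketch (not the paper's algorithm): blend Kinect visual/depth
# odometry with GNSS fixes, trusting GNSS more when satellite visibility
# is good. All numbers and names here are illustrative assumptions.
def fuse_position(odom_xy, gnss_xy, n_satellites,
                  odom_var=0.5, gnss_var_good=1.0, gnss_var_bad=100.0):
    """Variance-weighted average of dead-reckoned and GNSS positions."""
    gnss_var = gnss_var_good if n_satellites >= 6 else gnss_var_bad
    w = odom_var / (odom_var + gnss_var)   # weight given to the GNSS fix
    return tuple(o + w * (g - o) for o, g in zip(odom_xy, gnss_xy))

# Outdoors with good visibility the GNSS fix pulls the estimate strongly;
# indoors it is largely ignored and odometry dominates.
print(fuse_position((10.0, 5.0), (12.0, 6.0), n_satellites=8))
print(fuse_position((10.0, 5.0), (40.0, 9.0), n_satellites=2))
```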
ERIC Educational Resources Information Center
Supalo, Cary A.; Kreuter, Rodney A.; Musser, Aaron; Han, Josh; Briody, Erika; McArtor, Chip; Gregory, Kyle; Mallouk, Thomas E.
2006-01-01
In order to enable students who are blind and visually impaired to observe chemical changes in solutions, a hand-held device was designed to output light intensity as an audible tone. The submersible audible light sensor (SALS) creates an audio signal by which one can observe reactions in a solution in real time, using standard laboratory…
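For illustration only, here is a tiny sketch of the light-to-tone mapping such a device performs; the frequency range and the linear mapping are assumptions, not the SALS specification.

```python
# Illustrative sketch of the SALS idea: map measured light intensity to
# an audible pitch so a reaction (e.g., a solution turning cloudy) can
# be followed by ear. The mapping constants are invented.
def intensity_to_pitch(intensity, i_min=0.0, i_max=1.0,
                       f_low=220.0, f_high=880.0):
    """Linear map from normalized light intensity to tone frequency (Hz)."""
    frac = (intensity - i_min) / (i_max - i_min)
    return f_low + max(0.0, min(1.0, frac)) * (f_high - f_low)

# As a precipitate forms and transmitted light drops, the tone falls.
for reading in (0.9, 0.6, 0.2):
    print(round(intensity_to_pitch(reading)))
```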
Chen, Xiaochun; Yu, Shaoming; Yang, Liang; Wang, Jianping; Jiang, Changlong
2016-07-14
The instant and on-site detection of trace aqueous fluoride ions is still a challenge for environmental monitoring and protection. This work demonstrates a new analytical method, and its application in a paper sensor, for the visual detection of F(-) on the basis of fluorescence resonance energy transfer (FRET) between photoluminescent graphene oxide (GO) and silver nanoparticles (AgNPs) through the formation of cyclic esters between phenylboronic acid and diol. The fluorescence of GO was quenched by the AgNPs, and trace F(-) can recover the fluorescence of the quenched photoluminescent GO. The increase in fluorescence intensity is proportional to the concentration of F(-) in the range of 0.05-0.55 nM, along with a limit of detection (LOD) as low as 9.07 pM. Following the sensing mechanism, a paper-based sensor for the visual detection of aqueous F(-) has been successfully developed. The paper sensor showed high sensitivity for aqueous F(-), and the LOD could reach as low as 0.1 μM as observed by the naked eye. The very simple and effective strategy reported here could be extended to the visual detection of a wide range of analytes in the environment by the construction of highly efficient FRET nanoprobes.
Gullà, F; Zambelli, P; Bergamaschi, A; Piccoli, B
2007-01-01
The aim of this study is the objective evaluation of the visual effort in 6 public traffic controllers (4 male, 2 female, mean age 29.6) by means of electronic equipment. The equipment quantifies the observation distance and the observation time within each controller's occupational visual field. These parameters are obtained by emitting 40 kHz ultrasound from an emission sensor (placed by the VDT screen) and receiving it with a sensor placed on the operator's head. Since the speed of sound in air is known and constant (about 340 m/s), the travel time of the ultrasound (US) is used to calculate the distance between the emitting and the receiving sensor. The results show that the visual acuity required is of average level, while the accommodation and convergence effort varies from average to intense (depending on the visual characteristics of the operator considered), ranging between 26.41% and 43.92% of the accommodation and convergence available in each operator. The time actually spent in "near observation within the c.v.p." (Tscr) ranged between 2 h 54 min and 4 h 05 min.
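The distance computation itself is a one-liner; a small sketch, assuming the 340 m/s figure quoted above:

```python
# The observation distance follows directly from the ultrasound travel
# time, assuming the ~340 m/s speed of sound in air quoted above.
SPEED_OF_SOUND = 340.0  # m/s

def observation_distance(travel_time_s):
    """One-way emitter-to-receiver distance from US travel time."""
    return SPEED_OF_SOUND * travel_time_s

# A 1.76 ms travel time corresponds to a ~0.6 m screen-to-head distance.
print(round(observation_distance(1.76e-3), 3))
```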
Sensor system for heart sound biomonitor
NASA Astrophysics Data System (ADS)
Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek
1999-09-01
Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC is described, along with the signal processing algorithms adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. For comparison, we also discuss a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound into these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. We demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
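A minimal sketch of wavelet-threshold denoising in the spirit described, using PyWavelets; the wavelet choice (db4), decomposition level, and universal-threshold rule are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np
import pywt  # PyWavelets

# Decompose, soft-threshold the detail coefficients, reconstruct.
def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

fs = 2250.0  # sampling rate quoted above
t = np.arange(0, 1.0, 1.0 / fs)
heart = np.sin(2 * np.pi * 40 * t) * np.exp(-5 * t)   # toy S1-like burst
noisy = heart + 0.3 * np.random.randn(t.size)          # background noise
clean = wavelet_denoise(noisy)
```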
Managed traffic evacuation using distributed sensor processing
NASA Astrophysics Data System (ADS)
Ramuhalli, Pradeep; Biswas, Subir
2005-05-01
This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.
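To make the routing idea concrete, here is a hedged sketch of hazard-weighted evacuation routing on a corridor graph: edge costs grow with sensed hazard concentration so that Dijkstra's algorithm steers occupants away from contaminated segments. The graph, readings, and penalty factor are invented, not the paper's protocol.

```python
import heapq

# Illustrative only: choose evacuation routes on a corridor graph whose
# edge costs grow with sensed hazard concentration at the destination.
def safest_route(graph, hazard, start, exits, hazard_penalty=10.0):
    """Dijkstra over cost = length * (1 + penalty * hazard level)."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in exits:
            path = [u]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph[u]:
            cost = d + length * (1.0 + hazard_penalty * hazard.get(v, 0.0))
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    return None

graph = {"lobby": [("hall", 10), ("stairs", 5)],
         "hall": [("exitA", 5)], "stairs": [("exitB", 12)],
         "exitA": [], "exitB": []}
hazard = {"hall": 0.8}  # smoke detected in the hall
print(safest_route(graph, hazard, "lobby", {"exitA", "exitB"}))
```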
NASA Tech Briefs, September 2003
NASA Technical Reports Server (NTRS)
2003-01-01
Topics include: Oxygen-Partial-Pressure Sensor for Aircraft Oxygen Mask; Three-Dimensional Venturi Sensor for Measuring Extreme Winds; Swarms of Micron-Sized Sensors; Monitoring Volcanoes by Use of Air-Dropped Sensor Packages; Capacitive Sensors for Measuring Masses of Cryogenic Fluids; UHF Microstrip Antenna Array for Synthetic-Aperture Radar; Multimode Broad-Band Patch Antennas; 164-GHz MMIC HEMT Frequency Doubler; GPS Position and Heading Circuitry for Ships; Software for Managing Parametric Studies; Software Aids Visualization of Computed Unsteady Flow; Software for Testing Electroactive Structural Components; Advanced Software for Analysis of High-Speed Rolling-Element Bearings; Web Program for Development of GUIs for Cluster Computers; XML-Based Generator of C++ Code for Integration With GUIs; Oxide Protective Coats for Ir/Re Rocket Combustion Chambers; Simplified Waterproofing of Aerogels; Improved Thermal-Insulation Systems for Low Temperatures; Device for Automated Cutting and Transfer of Plant Shoots; Extension of Liouville Formalism to Postinstability Dynamics; Advances in Thrust-Based Emergency Control of an Airplane; Ultrasonic/Sonic Mechanisms for Drilling and Coring; Exercise Device Would Exert Selectable Constant Resistance; Improved Apparatus for Measuring Distance Between Axles; Six Classes of Diffraction-Based Optoelectronic Instruments; Modernizing Fortran 77 Legacy Codes; Active State Model for Autonomous Systems; Shields for Enhanced Protection Against High-Speed Debris; Scaling of Two-Phase Flows to Partial-Earth Gravity; Neutral-Axis Springs for Thin-Wall Integral Boom Hinges.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2014-01-01
Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632
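For readers unfamiliar with matching pursuit, the following is a textbook OMP sketch of the sparse-recovery step such SR schemes build on; it is not the optimized OOMP variant or the paper's implementation.

```python
import numpy as np

# Generic orthogonal matching pursuit (OMP): greedily select dictionary
# atoms and re-fit coefficients by least squares at each step.
def omp(D, y, sparsity):
    """Select `sparsity` atoms of dictionary D that best explain y."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.0, -2.0, 0.5]
x_hat = omp(D, D @ x_true, sparsity=3)
print(np.flatnonzero(x_hat))                          # -> [ 10  50 200]
```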
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2012-01-01
Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
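The bargaining step can be illustrated on a toy discrete utility set. The sketch below picks, among feasible operating points, the one maximizing the minimum utility gain normalized by each node's best achievable ("utopia") utility, which is the Kalai-Smorodinsky criterion restricted to a finite set; the numbers are invented.

```python
# Toy Kalai-Smorodinsky selection on a discrete, possibly non-convex
# utility set; the feasible points below are invented placeholders.
def ksbs_point(feasible, disagreement):
    utopia = [max(u[i] for u in feasible) for i in range(len(disagreement))]
    def min_ratio(u):
        return min((ui - di) / (Ui - di)
                   for ui, di, Ui in zip(u, disagreement, utopia))
    return max(feasible, key=min_ratio)

# Each tuple is (node-1 video quality, node-2 video quality) achievable
# under some joint choice of source rate, channel rate and power.
feasible = [(0.9, 0.2), (0.7, 0.5), (0.55, 0.6), (0.3, 0.8)]
print(ksbs_point(feasible, disagreement=(0.0, 0.0)))   # -> (0.7, 0.5)
```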
Cardiac-induced localized thoracic motion detected by a fiber optic sensing scheme
NASA Astrophysics Data System (ADS)
Allsop, Thomas; Lloyd, Glynn; Bhamber, Ranjeet S.; Hadzievski, Ljupco; Halliday, Michael; Webb, David J.; Bennion, Ian
2014-11-01
The cardiovascular health of the human population is a major concern for medical clinicians, with cardiovascular diseases responsible for 48% of all deaths worldwide, according to the World Health Organization. The development of new diagnostic tools that are practicable and economical to scrutinize the cardiovascular health of humans is a major driver for clinicians. We offer a new technique to obtain seismocardiographic signals up to 54 Hz covering both ballistocardiography (below 20 Hz) and audible heart sounds (20 Hz upward), using a system based on curvature sensors formed from fiber optic long period gratings. This system can visualize the real-time three-dimensional (3-D) mechanical motion of the heart by using the data from the sensing array in conjunction with a bespoke 3-D shape reconstruction algorithm. Visualization is demonstrated by adhering three to four sensors on the outside of the thorax and in close proximity to the apex of the heart; the sensing scheme revealed a complex motion of the heart wall next to the apex region of the heart. The detection scheme is low-cost, portable, easily operated and has the potential for ambulatory applications.
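A toy sketch of shape-from-curvature, reduced to two dimensions, gives the flavor of what a bespoke reconstruction algorithm of this kind does: integrate measured curvature along the sensor array to recover the profile. The sensor spacing and curvature values are invented.

```python
import numpy as np

# 2-D stand-in for shape reconstruction from curvature sensors:
# curvature integrates to heading, heading integrates to position.
def reconstruct_profile(curvatures, segment_len):
    theta = np.cumsum(np.asarray(curvatures) * segment_len)  # headings
    x = np.cumsum(segment_len * np.cos(theta))
    y = np.cumsum(segment_len * np.sin(theta))
    return x, y

# Four grating sensors, 0.03 m apart, placed over the apex region.
x, y = reconstruct_profile([0.5, 2.0, -1.5, 0.2], segment_len=0.03)
print(np.round(y, 4))   # small out-of-plane displacements of the thorax
```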
The IRGen infrared data base modeler
NASA Technical Reports Server (NTRS)
Bernstein, Uri
1993-01-01
IRGen is a modeling system which creates three-dimensional IR data bases for real-time simulation of thermal IR sensors. Starting from a visual data base, IRGen computes the temperature and radiance of every data base surface under a user-specified thermal environment. The predicted gray shade of each surface is then computed from the user-specified sensor characteristics. IRGen is based on first-principles models of heat transport and heat flux sources, and it accurately simulates the variations of IR imagery with time of day and with changing environmental conditions. The starting point for creating an IRGen data base is a visual faceted data base, in which every facet has been labeled with a material code. This code is an index into a material data base which contains surface and bulk thermal properties for the material. IRGen uses the material properties to compute the surface temperature at the specified time of day. IRGen also supports image generator features such as texturing and smooth shading, which greatly enhance image realism.
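The underlying radiometry can be sketched compactly: in-band radiance from surface temperature via Planck's law, then a linear mapping to gray shades. The physical constants are standard; the 8-12 μm band, emissivity, and gray-shade mapping below are assumptions, not IRGen's actual models.

```python
import numpy as np

# Planck's law integrated over a sensor band, then scaled to gray shades.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def band_radiance(temp_k, lam_lo=8e-6, lam_hi=12e-6, emissivity=0.9, n=500):
    lam = np.linspace(lam_lo, lam_hi, n)
    spectral = (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * temp_k)) - 1)
    return emissivity * np.trapz(spectral, lam)   # W / (m^2 sr)

def gray_shade(radiance, rad_min, rad_max, levels=256):
    g = (radiance - rad_min) / (rad_max - rad_min) * (levels - 1)
    return int(np.clip(g, 0, levels - 1))

lo, hi = band_radiance(270.0), band_radiance(330.0)
print(gray_shade(band_radiance(300.0), lo, hi))   # mid-range gray shade
```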
Neuro-inspired smart image sensor: analog Hmax implementation
NASA Astrophysics Data System (ADS)
Paindavoine, Michel; Dubois, Jérôme; Musa, Purnawarman
2015-03-01
The neuro-inspired vision approach, based on models from biology, makes it possible to reduce computational complexity. One of these models, the Hmax model, shows that the recognition of an object in the visual cortex mobilizes the V1, V2 and V4 areas. From the computational point of view, V1 corresponds to the area of the directional filters (for example Sobel filters, Gabor filters or wavelet filters). This information is then processed in area V2 in order to obtain local maxima. This new information is then sent to an artificial neural network. This neural processing module corresponds to area V4 of the visual cortex and is intended to categorize objects present in the scene. In order to realize autonomous vision systems (consuming a few milliwatts) that embed such processing, we studied and realized prototypes of two image sensors in 0.35 μm CMOS technology to achieve the V1 and V2 processing of the Hmax model.
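A small numerical sketch of the V1/V2 stages described above: an "S1" bank of oriented Gabor filters followed by "C1" local max pooling. All parameters are illustrative, not those of the CMOS implementation.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter

# S1 (V1-like): oriented Gabor filtering; C1 (V2-like): local max pooling.
def gabor_kernel(theta, size=9, lam=4.0, sigma=2.0):
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def s1_c1(image, n_orient=4, pool=4):
    responses = []
    for k in range(n_orient):
        s1 = np.abs(convolve(image, gabor_kernel(np.pi * k / n_orient)))
        c1 = maximum_filter(s1, size=pool)[::pool, ::pool]  # local maxima
        responses.append(c1)
    return np.stack(responses)      # features fed to the neural stage (V4)

img = np.random.rand(32, 32)
print(s1_c1(img).shape)             # (4, 8, 8)
```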
Knowledge-Based Vision Techniques for the Autonomous Land Vehicle Program
1991-10-01
Knowledge System: The CKS is an object-oriented knowledge database that was originally designed to serve as the central information manager for a... "Representation Space: An Approach to the Integration of Visual Information," Proc. of DARPA Image Understanding Workshop, Palo Alto, CA, pp. 263-272, May 1989. Strat, "Information Management in a Sensor-Based Autonomous System," Proc. DARPA Image Understanding Workshop, University of Southern CA, Vol. 1, pp...
Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors
Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke
2016-01-01
This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions. PMID:27049388
Jones, Val; Bults, Richard; de Wijk, Rene; Widya, Ing; Batista, Ricardo; Hermens, Hermie
2011-01-01
An assessment of a sensor designed for monitoring energy expenditure, activity, and sleep was conducted in the context of a research project which develops a weight management application. The overall goal of this project is to effect sustainable behavioural change with respect to diet and exercise in order to improve health and wellbeing. This paper reports results of a pretrial in which three volunteers wore the sensor for a total of 11 days. The aim was to gain experience with the sensor and determine if it would be suitable for incorporation into the ICT system developed by the project to be trialled later on a larger population. In this paper we focus mainly on activity monitoring and user experience. Data and results including visualizations and reports are presented and discussed. User experience proved positive in most respects. Exercise levels and sleep patterns correspond to user logs relating to exercise sessions and sleep patterns. Issues raised relate to accuracy, one source of possible interference, the desirability of enhancing the system with real-time data transmission, and analysis to enable real-time feedback. It is argued that automatic activity classification is needed to properly analyse and interpret physical activity data captured by accelerometry. PMID:21772840
A sensor-less LED dimming system based on daylight harvesting with BIPV systems.
Yoo, Seunghwan; Kim, Jonghun; Jang, Cheol-Yong; Jeong, Hakgeun
2014-01-13
Artificial lighting in office buildings typically accounts for 30% of the building's total energy consumption, providing a substantial opportunity for energy savings. To reduce the energy consumed by indoor lighting, we propose a sensor-less light-emitting diode (LED) dimming system using daylight harvesting. In this study, we used light simulation software to quantify and visualize daylight, and analyzed the correlation between photovoltaic (PV) power generation and indoor illumination in an office with an integrated PV system. In addition, we calculated the distribution of daylight illumination in the office and dimming ratios for the individual control of LED lights. We were also able to directly use the electric power generated by the PV system. As a result, power consumption for electric lighting was reduced by 40-70% depending on the season and the weather conditions. Thus, the dimming system proposed in this study can be used to control electric lighting to reduce energy use cost-effectively and simply.
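A minimal sketch of the sensor-less dimming rule implied above, assuming a pre-computed linear correlation between PV power and indoor daylight illuminance; the coefficient and target level are invented placeholders.

```python
# Estimate indoor daylight from concurrent PV power via a pre-computed
# correlation, then dim each LED zone to top up to the target level.
def dimming_ratio(pv_power_w, lux_per_watt, target_lux):
    """Fraction of full LED output needed after daylight contribution."""
    daylight_lux = lux_per_watt * pv_power_w   # from PV-daylight correlation
    return max(0.0, min(1.0, 1.0 - daylight_lux / target_lux))

# Bright afternoon: PV output is high, so the lights dim far down.
print(dimming_ratio(pv_power_w=180.0, lux_per_watt=2.0, target_lux=500.0))
# Overcast morning: little PV output, so the lights run near full power.
print(dimming_ratio(pv_power_w=40.0, lux_per_watt=2.0, target_lux=500.0))
```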
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Environmental Monitoring Using Sensor Networks
NASA Astrophysics Data System (ADS)
Yang, J.; Zhang, C.; Li, X.; Huang, Y.; Fu, S.; Acevedo, M. F.
2008-12-01
Environmental observatories, consisting of a variety of sensor systems, computational resources and informatics, are important for us to observe, model, predict, and ultimately help preserve the health of nature. The commoditization and proliferation of coin-to-palm sized wireless sensors will allow environmental monitoring with unprecedented fine spatial and temporal resolution. Once scattered around, these sensors can identify themselves, locate their positions, describe their functions, and self-organize into a network. They communicate through wireless channels with nearby sensors and transmit data through multi-hop protocols to a gateway, which can forward information to a remote data server. In this project, we describe an environmental observatory called Texas Environmental Observatory (TEO) that incorporates a sensor network system with intertwined wired and wireless sensors. We are enhancing and expanding the existing wired weather stations to include wireless sensor networks (WSNs) and telemetry using solar-powered cellular modems. The new WSNs will monitor soil moisture and support long-term hydrologic modeling. Hydrologic models are helpful in predicting how changes in land cover translate into changes in the stream flow regime. These models require inputs that are difficult to measure over large areas, especially variables related to storm events, such as soil moisture antecedent conditions and rainfall amount and intensity. This will also contribute to improving rainfall estimations from meteorological radar data and enhance hydrological forecasts. Sensor data are transmitted from the monitoring sites to a Central Data Collection (CDC) Server. We incorporate a GPRS modem for wireless telemetry, a single-board computer (SBC) as Remote Field Gateway (RFG) Server, and a WSN for distributed soil moisture monitoring. The RFG provides effective control, management, and coordination of two independent sensor systems, i.e., a traditional datalogger-based wired sensor system and the WSN-based wireless sensor system. The RFG also supports remote manipulation of the devices in the field such as the SBC, datalogger, and WSN. Sensor data collected from the distributed monitoring stations are stored in a database (DB) Server. The CDC Server acts as an intermediate component to hide the heterogeneity of different devices and support data validation required by the DB Server. Daemon programs running on the CDC Server pre-process the data before it is inserted into the database, and periodically perform synchronization tasks. A SWE-compliant data repository is installed to enable data exchange, accepting data from both the internal DB Server and external sources through the OGC web services. The web portal, i.e. TEO Online, serves as a user-friendly interface for data visualization, analysis, synthesis, modeling, and K-12 educational outreach activities. It also provides useful capabilities for system developers and operators to remotely monitor system status and remotely update software and system configuration, which greatly simplifies the system debugging and maintenance tasks. We also implement Sensor Observation Services (SOS) at this layer, conforming to the SWE standard to facilitate data exchange. The standard SensorML/O&M data representation makes it easy to integrate our sensor data into the existing Geographic Information Systems (GIS) web services and exchange the data with other organizations.
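As an illustration of the validation role the CDC Server plays before database insertion, here is a hypothetical sketch of a pre-insertion quality-control check; the variable ranges, record format, and function names are invented, not TEO's actual schema.

```python
from datetime import datetime, timedelta

# Hypothetical pre-insertion QC: range checks and timestamp sanity,
# flagging suspect records rather than silently dropping them.
VALID_RANGES = {"soil_moisture": (0.0, 1.0), "temperature": (-40.0, 60.0)}

def validate(record, now=None):
    now = now or datetime.utcnow()
    flags = []
    lo, hi = VALID_RANGES.get(record["variable"], (float("-inf"), float("inf")))
    if not lo <= record["value"] <= hi:
        flags.append("out_of_range")
    if record["timestamp"] > now + timedelta(minutes=5):
        flags.append("future_timestamp")
    return {**record, "qc_flags": flags}

rec = {"station": "TEO-03", "variable": "soil_moisture",
       "value": 1.7, "timestamp": datetime.utcnow()}
print(validate(rec)["qc_flags"])   # -> ['out_of_range']
```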
Ground Penetrating Radar as a Contextual Sensor for Multi-Sensor Radiological Characterisation
Ukaegbu, Ikechukwu K.; Gamage, Kelum A. A.
2017-01-01
Radioactive sources exist in environments or contexts that influence how they are detected and localised. For instance, the context of a moving source is different from a stationary source because of the effects of motion. The need to incorporate this contextual information in the radiation detection and localisation process has necessitated the integration of radiological and contextual sensors. The benefits of the successful integration of both types of sensors are well known and widely reported in fields such as medical imaging. However, the integration of both types of sensors has also led to innovative solutions to challenges in characterising radioactive sources in non-medical applications. This paper presents a review of such recent applications. It also identifies that these applications mostly use visual sensors as contextual sensors for characterising radiation sources. However, visual sensors cannot retrieve contextual information about radioactive wastes located in opaque environments encountered at nuclear sites, e.g., underground contamination. Consequently, this paper also examines ground-penetrating radar (GPR) as a contextual sensor for characterising this category of wastes and proposes several ways of integrating data from GPR and radiological sensors. Finally, it demonstrates combined GPR and radiation imaging for three-dimensional localisation of contamination in underground pipes using radiation transport and GPR simulations. PMID:28387706
An evaluation of three-dimensional sensors for the extravehicular activity helper/retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever/Helper (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects, such as whether the observed surface topologies are planar or curved, and the spatial relationships between the component surfaces. Accurate sensing of the operational environment and the objects in it will therefore be critical to these tasks. This report documents the qualitative and quantitative results of empirical studies of three sensors that are capable of providing three-dimensional information to the EVAHR but use completely different hardware approaches. The first of these devices is a phase-shift laser with an effective operating range (ambiguity interval) of approximately 15 meters. The second sensor is a laser triangulation system designed to operate at much closer range and to provide higher-resolution images. The third sensor is a dual-camera stereo imaging system from which range images can also be obtained. The remainder of the report characterizes the strengths and weaknesses of each of these systems relative to the quality of the data extracted and how different object characteristics affect sensor operation.
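The ~15 m ambiguity interval quoted for the phase-shift sensor follows directly from its modulation frequency, since phase-based ranges repeat every half modulation wavelength. A sketch, with the 10 MHz figure inferred rather than stated in the report:

```python
# Phase-shift ranging is unique only up to half a modulation wavelength:
# R_amb = c / (2 * f_mod). A ~15 m interval implies roughly 10 MHz.
C = 2.998e8  # speed of light, m/s

def ambiguity_interval(mod_freq_hz):
    return C / (2.0 * mod_freq_hz)

print(round(ambiguity_interval(10e6), 2))  # -> 14.99 m
```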
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
Thermal Image Sensing Model for Robotic Planning and Search
Castro Jiménez, Lídice E.; Martínez-García, Edgar A.
2016-01-01
This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made IR passive visual sensor. The sensor's capability for detecting radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity and a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source position in global coordinates. The planning system assists an autonomous navigation controller in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations against the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach. PMID:27509510
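A heavily hedged sketch of the attractive/repulsive steering idea described above: acceleration toward the IR source scales with the sine of the bearing error, while each obstacle contributes a cosine-shaped repulsion. The gains and signs are invented for illustration; the paper's exact PDE formulation is not reproduced.

```python
import math

# Toy steering rule: sin-shaped attraction to the heat source,
# cos-shaped repulsion from obstacles (strongest when dead ahead).
def steering_accel(bearing_to_source, obstacle_bearings,
                   k_attr=1.0, k_rep=0.6):
    a = k_attr * math.sin(bearing_to_source)   # attractive term
    for b in obstacle_bearings:
        a -= k_rep * math.cos(b)               # repulsive term
    return a

# Source 30 degrees to the left, one obstacle nearly dead ahead:
# the repulsion outweighs the attraction and the robot veers away.
print(steering_accel(math.radians(30), [math.radians(5)]))
```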
NASA Astrophysics Data System (ADS)
Sudarmaji, A.; Margiwiyatno, A.; Ediati, R.; Mustofa, A.
2018-05-01
The aroma/vapor of an essential oil is a complex mixture that depends on the gases and volatiles generated from the oil. This paper describes the design of a quick, simple, and low-cost static measurement system to acquire the vapor profile of essential oils. The gases and volatiles are captured in a chamber by means of 9 MOS gas sensors driven with an advanced temperature-modulation technique. A PSoC CY8C28445-24PVXI-based interface unit is built to generate the modulation signal and acquire all sensor outputs into a computer wirelessly via radio-frequency serial communication, using Digi International Inc. XBee (IEEE 802.15.4) modules, through software developed under Visual.Net. The system was tested on 2 kinds of essential oil (Patchouli and Clove Oils) under 4 temperature modulations (none, 0.25 Hz, 1 Hz, and 4 Hz). Each measurement cycle consists of a reference measurement followed by a sample measurement, each lasting 2 minutes with readings taken every 1 second. It is found that the most suitable modulation is 0.25 Hz (75%), and the results of Principal Component Analysis show that the system is able to distinguish clearly between Patchouli Oil and Clove Oil.
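A minimal sketch of the analysis step mentioned above: PCA via the SVD, projecting each 9-sensor response vector onto its first two principal components so that the two oils separate. The data below are random stand-ins, not the paper's measurements.

```python
import numpy as np

# PCA by SVD: center, decompose, project onto leading components.
def pca_scores(X, n_components=2):
    Xc = X - X.mean(axis=0)                  # center each sensor channel
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T          # scores in PC space

rng = np.random.default_rng(0)
patchouli = rng.normal(0.2, 0.05, (10, 9))   # 10 measurements x 9 sensors
clove = rng.normal(0.6, 0.05, (10, 9))
scores = pca_scores(np.vstack([patchouli, clove]))
print(scores.shape)                          # (20, 2): two separable clusters
```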
Road detection and buried object detection in elevated EO/IR imagery
NASA Astrophysics Data System (ADS)
Kennedy, Levi; Kolba, Mark P.; Walters, Joshua R.
2012-06-01
To assist the warfighter in visually identifying potentially dangerous roadside objects, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed an elevated video sensor system testbed for data collection. This system provides color and mid-wave infrared (MWIR) imagery. Signal Innovations Group (SIG) has developed an automated processing capability that detects the road within the sensor field of view and identifies potentially threatening buried objects within the detected road. The road detection algorithm leverages system metadata to project the collected imagery onto a flat ground plane, allowing for more accurate detection of the road as well as the direct specification of realistic physical constraints in the shape of the detected road. Once the road has been detected in an image frame, a buried object detection algorithm is applied to search for threatening objects within the detected road space. The buried object detection algorithm leverages textural and pixel intensity-based features to detect potential anomalies and then classifies them as threatening or non-threatening objects. Both the road detection and the buried object detection algorithms have been developed to facilitate their implementation in real-time in the NVESD system.
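The geometric core of the road detector, projecting pixels onto a flat ground plane, can be sketched with a homography; the matrix H below is a made-up placeholder for what would, in practice, be derived from the system metadata (camera pose, height, and intrinsics).

```python
import numpy as np

# Project image pixels onto a flat ground plane via a homography H
# mapping homogeneous pixel coordinates to metric road coordinates.
def to_ground_plane(pixels_uv, H):
    uv1 = np.hstack([pixels_uv, np.ones((len(pixels_uv), 1))])
    xyw = uv1 @ H.T
    return xyw[:, :2] / xyw[:, 2:3]       # metric (x, y) on the road plane

H = np.array([[0.02, 0.0, -6.4],          # placeholder calibration only
              [0.0, 0.05, -12.0],
              [0.0, 0.004, 1.0]])
print(to_ground_plane(np.array([[320.0, 400.0]]), H))
```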
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
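A hedged sketch of the dead-reckoning component: integrating wheel speed and yaw rate with a unicycle model to extend short-range lane detections into a long trace behind the vehicle. The model and rates are illustrative, not the authors' implementation.

```python
import math

# Unicycle-model dead reckoning: integrate speed and yaw rate over time.
def dead_reckon(pose, speed, yaw_rate, dt):
    """pose = (x, y, heading); returns the updated pose."""
    x, y, th = pose
    th += yaw_rate * dt
    return (x + speed * dt * math.cos(th),
            y + speed * dt * math.sin(th),
            th)

pose = (0.0, 0.0, 0.0)
for _ in range(100):                 # 1 s of driving at 10 m/s, gentle turn
    pose = dead_reckon(pose, speed=10.0, yaw_rate=0.05, dt=0.01)
print(tuple(round(v, 2) for v in pose))
```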
Personal Cabin Pressure Monitor and Warning System
NASA Technical Reports Server (NTRS)
Zysko, Jan A. (Inventor)
2002-01-01
A cabin pressure altitude monitor and warning system provides a warning when a detected cabin pressure altitude has reached a predetermined level. The system is preferably embodied in a portable, pager-sized device that can be carried or worn by an individual. A microprocessor calculates the pressure altitude from signals generated by a calibrated pressure transducer and a temperature sensor that compensates for temperature variations in the signals generated by the pressure transducer. The microprocessor is programmed to generate a warning or alarm if a cabin pressure altitude exceeding a predetermined threshold is detected. Preferably, the microprocessor generates two different types of warning or alarm outputs: a first early warning or alert when a first pressure altitude is exceeded, and a second, more serious alarm condition when either a second, higher pressure altitude is exceeded or when the first pressure altitude has been exceeded for a predetermined period of time. Multiple types of alarm condition indicators are preferably provided, including visual, audible and tactile. The system is also preferably designed to detect gas concentrations and other ambient conditions, and thus incorporates other sensors, such as oxygen, relative humidity, carbon dioxide, carbon monoxide and ammonia sensors, to provide a more complete characterization and monitoring of the local environment.
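The core computation can be sketched from the standard ISA barometric formula, with a two-level alert logic of the kind described; the altitude thresholds and timing rule below are illustrative assumptions, not the patent's values.

```python
# Pressure altitude from sensed cabin pressure via the inverted ISA
# troposphere model, plus a two-level alert rule (thresholds invented).
P0 = 101325.0  # Pa, sea-level standard pressure

def pressure_altitude_ft(pressure_pa):
    # h = 44330.8 * (1 - (P/P0)^0.19026) metres, converted to feet
    h_m = 44330.8 * (1.0 - (pressure_pa / P0) ** 0.19026)
    return h_m * 3.28084

def alarm_level(pressure_pa, seconds_above_first):
    alt = pressure_altitude_ft(pressure_pa)
    if alt > 14000 or (alt > 10000 and seconds_above_first > 300):
        return "ALARM"          # serious: visual + audible + tactile
    if alt > 10000:
        return "ALERT"          # early warning
    return "OK"

print(alarm_level(69000.0, seconds_above_first=0))  # ~10,250 ft -> ALERT
```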
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
These proceedings discuss human factor issues related to aerospace systems, aging, communications, computer systems, consumer products, education and forensic topics, environmental design, industrial ergonomics, international technology transfer, organizational design and management, personality and individual differences in human performance, safety, system development, test and evaluation, training, and visual performance. Particular attention is given to HUDs, attitude indicators, and sensor displays; human factors of space exploration; behavior and aging; the design and evaluation of phone-based interfaces; knowledge acquisition and expert systems; handwriting, speech, and other input techniques; interface design for text, numerics, and speech; and human factor issues in medicine. Also discussed are cumulative trauma disorders, industrial safety, evaluative techniques for automation impacts on the human operators, visual issues in training, and interpreting and organizing human factor concepts and information.
Visual Odometry for Autonomous Deep-Space Navigation Project
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Autonomous rendezvous and docking (AR&D) is a critical need for manned spaceflight, especially in deep space where communication delays essentially leave crews on their own for critical operations like docking. Previously developed AR&D sensors have been large, heavy, power-hungry, and may still require further development (e.g. Flash LiDAR). Other approaches to vision-based navigation are not computationally efficient enough to operate quickly on slower, flight-like computers. The key technical challenge for visual odometry is to adapt it from the current terrestrial applications it was designed for to function in the harsh lighting conditions of space. This effort leveraged Draper Laboratory’s considerable prior development and expertise, benefitting both parties. The algorithm Draper has created is distinct from other pose estimation efforts in that it has a comparatively small computational footprint (suitable for use onboard a spacecraft, unlike alternatives) and potentially offers the accuracy and precision needed for docking. This presents a solution to the AR&D problem that only requires a camera, which is much smaller, lighter, and requires far less power than competing AR&D sensors. We have demonstrated the algorithm’s performance and ability to process ‘flight-like’ imagery formats with a ‘flight-like’ trajectory, positioning ourselves to easily process flight data from the upcoming ‘ISS Selfie’ activity and then compare the algorithm’s quantified performance to the simulated imagery. This will bring visual odometry beyond TRL 5, proving its readiness to be demonstrated as part of an integrated system. Once beyond TRL 5, visual odometry will be poised to be demonstrated as part of a system in an in-space demo where relative pose is critical, like Orion AR&D, ISS robotic operations, asteroid proximity operations, and more.
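For context, here is a generic frame-to-frame visual odometry sketch using OpenCV (emphatically not Draper's algorithm): track features, estimate the essential matrix with RANSAC, and recover the relative rotation and unit translation. The intrinsic matrix K is a placeholder.

```python
import cv2
import numpy as np

# Generic monocular VO step; translation is recovered only up to scale.
K = np.array([[700.0, 0.0, 320.0],     # placeholder camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def relative_pose(img0, img1):
    p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0[ok], p1[ok]
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t   # rotation and unit translation between the two frames

# Usage: R, t = relative_pose(gray_frame_k, gray_frame_k_plus_1)
```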
EP Profiles Inventor Mark Sherron
ERIC Educational Resources Information Center
Williams, John M.
2006-01-01
This article profiles Mark Jerome Sherron, inventor of the ALLIES Line of electronic sensors for blind and visually-impaired people. Featuring the American Liquid Level Indicator electronic sensor (ALLI), Sherron's ALLIES product line also includes the Light Intensity Level Indicator (LILI), a multi-function electronic light sensor for electronic…
NASA Astrophysics Data System (ADS)
Biswas, Subir; Quwaider, Muhannad
2008-04-01
The physical safety and well-being of the soldiers in a battlefield is the highest priority of Incident Commanders. Currently, the ability to track and monitor soldiers relies on visual and verbal communication, which can be somewhat limited in scenarios where the soldiers are deployed inside buildings and enclosed areas that are out of visual range of the commanders. Also, the need for stealth can often prevent a soldier in battle from sending verbal cues to a commander about his or her physical well-being. Sensor technologies can remotely provide various data about the soldiers, including physiological monitoring and personal alert safety system functionality. This paper presents a networked sensing solution in which a body area wireless network of multi-modal sensors can monitor the body movement and other physiological parameters for statistical identification of a soldier's body posture, which can then be indicative of the physical condition and safety alerts of the soldier in question. The specific concept is to leverage on-body proximity sensing and a Hidden Markov Model (HMM) based mechanism that can be applied for stochastic identification of human body postures using a wearable sensor network. The key idea is to collect relative proximity information between wireless sensors that are strategically placed over a subject's body to monitor the relative movements of the body segments, and then to process it using an HMM in order to identify the subject's body postures. The key novelty of this approach is a departure from the traditional accelerometry-based approaches, in which the individual body segment movements, rather than their relative proximity, are used for activity monitoring and posture detection. Through experiments with body-mounted sensors we demonstrate that while the accelerometry-based approaches can be used for differentiating activity-intensive postures such as walking and running, they are not very effective for identification and differentiation between low-activity postures such as sitting and standing. We develop a wearable sensor network that monitors relative proximity using Radio Signal Strength Indication (RSSI), and then construct an HMM system for posture identification in the presence of sensing errors. Controlled experiments using human subjects were carried out for evaluating the accuracy of the HMM-identified postures compared to a naïve threshold-based mechanism, and its variation over different human subjects. A large spectrum of target human postures, including lying down, sitting (straight and reclined), standing, walking, running, sprinting and stair climbing, is used for validating the proposed system.
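The decoding stage can be illustrated with a tiny Viterbi sketch that recovers the most likely posture sequence from discretized proximity observations; all matrices below are invented placeholders, not the paper's trained parameters.

```python
import numpy as np

# Viterbi decoding: most likely hidden posture sequence given
# discretized proximity observations and an HMM (pi, A, B).
def viterbi(obs, pi, A, B):
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        trans = logd[:, None] + np.log(A)     # prev-state x state scores
        back.append(trans.argmax(axis=0))
        logd = trans.max(axis=0) + np.log(B[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

states = ["sit", "stand", "walk"]
pi = np.array([0.4, 0.4, 0.2])
A = np.array([[0.8, 0.15, 0.05],    # postures tend to persist in time
              [0.15, 0.8, 0.05],
              [0.1, 0.1, 0.8]])
B = np.array([[0.7, 0.2, 0.1],      # P(proximity symbol | posture)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
print([states[s] for s in viterbi([0, 0, 1, 2, 2], pi, A, B)])
```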
On detection and visualization techniques for cyber security situation awareness
NASA Astrophysics Data System (ADS)
Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao
2013-05-01
Networking technologies are advancing exponentially to meet worldwide communication requirements. The rapid growth of network technologies and the pervasiveness of communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities to present useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To effectively detect attacks, we also implement algorithms to transform real-world IP address data into images, study the patterns of attacks, and use both a discrete wavelet transform (DWT) based scheme and a statistical scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of our implemented defense system.
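A small sketch of the DWT-based detection idea: flag windows of a traffic time series whose detail-coefficient energy deviates sharply from baseline. The window size, wavelet, and 3-sigma rule are assumptions, not the paper's scheme.

```python
import numpy as np
import pywt

# Flag traffic windows whose wavelet detail energy is anomalous.
def dwt_energy(window, wavelet="haar", level=3):
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return sum(float(np.sum(c ** 2)) for c in coeffs[1:])  # detail energy

def flag_anomalies(series, win=64):
    energies = np.array([dwt_energy(series[i:i + win])
                         for i in range(0, len(series) - win + 1, win)])
    mu, sd = energies.mean(), energies.std()
    return np.where(np.abs(energies - mu) > 3 * sd)[0]     # window indices

rng = np.random.default_rng(1)
traffic = rng.poisson(20, 4096).astype(float)
traffic[2048:2112] += rng.normal(0.0, 60.0, 64)  # injected noisy burst
print(flag_anomalies(traffic))                   # -> [32], the burst window
```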
Communications for unattended sensor networks
NASA Astrophysics Data System (ADS)
Nemeroff, Jay L.; Angelini, Paul; Orpilla, Mont; Garcia, Luis; DiPierro, Stefano
2004-07-01
The future model of the US Army's Future Combat Systems (FCS) and the Future Force reflects a combat force that utilizes lighter armor protection than the current standard. Survival on the future battlefield will be increased by the use of advanced situational awareness provided by unattended tactical and urban sensors that detect, identify, and track enemy targets and threats. Successful implementation of these critical sensor fields requires the development of advanced sensors, sensor and data-fusion processors, and a specialized communications network. To ensure warfighter and asset survivability, the communications must be capable of near real-time dissemination of the sensor data using robust, secure, stealthy, and jam-resistant links so that the proper and decisive action can be taken. Communications will be provided to a wide array of mission-specific sensors that are capable of processing data from acoustic, magnetic, seismic, and/or Chemical, Biological, Radiological, and Nuclear (CBRN) sensors. Other, more powerful, sensor node configurations will be capable of fusing sensor data and intelligently collecting and processing images from infrared or visual imaging cameras. The radio waveform and networking protocols being developed under the Soldier Level Integrated Communications Environment (SLICE) Soldier Radio Waveform (SRW) and the Networked Sensors for the Future Force Advanced Technology Demonstration are part of an effort to develop a common waveform family which will operate across multiple tactical domains including dismounted soldiers, ground sensors, munitions, missiles and robotics. These waveform technologies will ultimately be transitioned to the JTRS library, specifically the Cluster 5 requirement.
A Tensor-Based Structural Damage Identification and Severity Assessment
Anaissi, Ali; Makki Alamdari, Mehrisadat; Rakotoarivelo, Thierry; Khoa, Nguyen Lu Dang
2018-01-01
Early damage detection is critical for much of the world's ageing infrastructure. Structural Health Monitoring (SHM) systems provide a sensor-based, quantitative, and objective approach to continuously monitoring these structures, as opposed to traditional engineering visual inspection. Analysing the sensed data is one of the major SHM challenges. This paper presents a novel algorithm to detect and assess damage in structures such as bridges. The method applies tensor analysis for data fusion and feature extraction, and then uses a one-class support vector machine on these features to detect anomalies, i.e., structural damage. To evaluate this approach, we collected acceleration data from a sensor-based SHM system deployed on a real bridge and on a laboratory specimen. The results show that our tensor method outperforms a state-of-the-art approach using the wavelet energy spectrum of the measured data. In the specimen case, our approach succeeded in detecting 92.5% of induced damage cases, as opposed to 61.1% for the wavelet-based approach. While our method was applied to bridges, its algorithm and computation can be used on other structures or sensor-data analysis problems involving large series of correlated data from multiple sensors. PMID:29301314
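The pipeline the abstract outlines, tensor decomposition for feature extraction followed by a one-class SVM for anomaly detection, can be sketched as follows. This is a generic illustration on synthetic data using the tensorly and scikit-learn libraries, not the authors' implementation; the tensor layout (sensors × features × time windows), the rank, and all data here are assumptions.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical SHM data: (sensors x frequency-features x time windows).
# A "healthy" training tensor, plus a test tensor with a shifted response.
healthy = rng.normal(0.0, 1.0, size=(8, 32, 200))
test = rng.normal(0.0, 1.0, size=(8, 32, 60))
test[:, :, 30:] += 0.8                         # simulated damage-induced shift

# CP/PARAFAC decomposition of the healthy tensor; the time-mode factor
# matrix yields one low-dimensional feature vector per time window.
weights, factors = parafac(tl.tensor(healthy), rank=4)
train_feats = factors[2]                       # shape (200, 4)

# A one-class SVM learns the boundary of the healthy feature distribution.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_feats)

# Project test windows onto the learned sensor/feature factors to obtain
# comparable time-mode features (least-squares against the Khatri-Rao basis).
basis = tl.tenalg.khatri_rao(factors, skip_matrix=2)   # (8*32, 4)
unfolded = tl.unfold(tl.tensor(test), mode=2)          # (60, 8*32)
test_feats = np.linalg.lstsq(basis, unfolded.T, rcond=None)[0].T

labels = ocsvm.predict(test_feats)             # +1 = healthy, -1 = anomalous
print("flagged windows:", int((labels == -1).sum()), "of", len(labels))
```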
50 CFR 218.115 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Narrative description of sensors and platforms utilized for marine mammal detection and timeline... sensor; (vi) Length of time observers maintained visual contact with marine mammal; (vii) Wave height...
MEMS high-speed angular-position sensing system with rf wireless transmission
NASA Astrophysics Data System (ADS)
Sun, Winston; Li, Wen J.
2001-08-01
A novel surface-micromachined, non-contact, high-speed angular-position sensor with a total surface area under 4 mm² was developed using the Multi-User MEMS Processes (MUMPs) and integrated with a commercial RF transmitter at a 433 MHz carrier frequency for wireless signal detection. Currently, the 2.3 MHz internal clock of our data acquisition system and a sensor design with a 13 mg seismic mass are sufficient to provide visual observation of a clear sinusoidal response, generated wirelessly by the piezoresistive angular-position sensing system, within a speed range of 180 rpm to around 1000 rpm. Experimental results showed that the oscillation frequency and amplitude are related to the input angular frequency of the rotating disk and the tilt angle of the rotation axis, respectively. These results could provide groundwork for MEMS researchers to estimate how gravity influences the structural properties of MEMS devices under different circumstances.
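To illustrate the general principle, not the authors' system: a sinusoidal sensor output with one cycle per disk revolution can be mapped back to rotation speed by locating the dominant FFT peak. The sampling rate, record length, and noise level in this sketch are invented.

```python
import numpy as np

# Hypothetical sampled output of an angular-position sensor: one
# sinusoidal cycle per disk revolution, here simulated at 600 rpm.
fs = 2000.0                           # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)       # 2 s record
rpm_true = 600.0
signal = (np.sin(2 * np.pi * (rpm_true / 60.0) * t)
          + 0.1 * np.random.randn(t.size))   # additive measurement noise

# The dominant FFT bin gives the oscillation frequency, hence the speed.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
f_peak = freqs[spectrum[1:].argmax() + 1]    # skip the DC bin
print(f"estimated speed ~ {f_peak * 60.0:.0f} rpm")
```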
Pillarisetti, Ajay; Allen, Tracy; Ruiz-Mercado, Ilse; Edwards, Rufus; Chowdhury, Zohir; Garland, Charity; Johnson, Michael; Litton, Charles D.; Lam, Nicholas L.; Pennise, David; Smith, Kirk R.
2017-01-01
Over the last 20 years, the Kirk R. Smith research group at the University of California Berkeley—in collaboration with Electronically Monitored Ecosystems, Berkeley Air Monitoring Group, and other academic institutions—has developed a suite of relatively inexpensive, rugged, battery-operated, microchip-based devices to quantify parameters related to household air pollution. These devices include two generations of particle monitors; data-logging temperature sensors to assess time of use of household energy devices; a time-activity monitoring system using ultrasound; and a CO2-based tracer-decay system to assess ventilation rates. Development of each system involved numerous iterations of custom hardware, software, and data processing and visualization routines along with both lab and field validation. The devices have been used in hundreds of studies globally and have greatly enhanced our understanding of heterogeneous household air pollution (HAP) concentrations and exposures and factors influencing them. PMID:28812989
Pillarisetti, Ajay; Allen, Tracy; Ruiz-Mercado, Ilse; Edwards, Rufus; Chowdhury, Zohir; Garland, Charity; Hill, L Drew; Johnson, Michael; Litton, Charles D; Lam, Nicholas L; Pennise, David; Smith, Kirk R
2017-08-16
Over the last 20 years, the Kirk R. Smith research group at the University of California Berkeley, in collaboration with Electronically Monitored Ecosystems, Berkeley Air Monitoring Group, and other academic institutions, has developed a suite of relatively inexpensive, rugged, battery-operated, microchip-based devices to quantify parameters related to household air pollution. These devices include two generations of particle monitors; data-logging temperature sensors to assess time of use of household energy devices; a time-activity monitoring system using ultrasound; and a CO₂-based tracer-decay system to assess ventilation rates. Development of each system involved numerous iterations of custom hardware, software, and data processing and visualization routines along with both lab and field validation. The devices have been used in hundreds of studies globally and have greatly enhanced our understanding of heterogeneous household air pollution (HAP) concentrations and exposures and factors influencing them.
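The CO₂ tracer-decay technique mentioned above admits a compact illustration: after a tracer release, ln(C − C_background) decays linearly with time, and the negated slope is the air-exchange rate. The sketch below uses synthetic data and is not the group's actual processing code.

```python
import numpy as np

# Hypothetical CO2 tracer-decay record: concentration (ppm) sampled
# every 5 minutes after a release, above a ~420 ppm outdoor background.
t_hr = np.arange(0, 2.0, 5 / 60)            # elapsed time (hours)
background = 420.0
ach_true = 1.5                              # true air changes per hour
c_ppm = background + 2000.0 * np.exp(-ach_true * t_hr)

# Tracer-decay method: ln(C - C_background) falls linearly with time,
# and the (negated) slope of a least-squares fit is the rate in 1/h.
slope, _ = np.polyfit(t_hr, np.log(c_ppm - background), 1)
print(f"estimated air-exchange rate ~ {-slope:.2f} ACH")
```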
Fuentes, Ramon; Navarro, Pablo; Curiqueo, Aldo; Ottone, Nicolas E
2015-01-01
The electromagnetic articulograph (EMA) is a device that collects movement data by positioning sensors at multiple points, measuring displacements of the structure in real time, as well as the acoustics and mechanics of speech through a microphone connected to the measurement system. The aim of this study is to describe protocols for the generation, measurement, and visualization of mandibular border and functional movements in the three spatial planes (frontal, sagittal, and horizontal) using the EMA. The EMA has transmitter coils that generate magnetic fields, collecting movement information from sensors located on different structures (tongue, palate, mouth, incisors, skin, etc.) in every direction within a 300 mm field. After measurement with the EMA, the information is transferred to a computer and read with the Visartico software to visualize the recording of the mandibular movements registered by the EMA. The sensors placed in the space spanned by the three XYZ axes are observed, and the plots created from the mandibular movements included in the corresponding protocol can then be visualized, enabling interpretation of these data. Four protocols were defined and developed for obtaining images of mandibular opening and closing movements, as well as border movements in the frontal, sagittal, and horizontal planes, accurately reproducing Posselt's diagram and the Gothic arch on the latter two planes. Measurements with the EMA will allow more exact data to be collected on mandibular clinical physiology and morphology, permitting more accurate diagnoses and more precise, better-adjusted treatments in the future.
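As a generic illustration of this kind of visualization, not the Visartico workflow, the sketch below projects a synthetic 3D sensor trajectory onto the frontal, sagittal, and horizontal planes with matplotlib; the trajectory itself is invented.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 3D trajectory of a mandibular sensor (mm), standing in
# for an EMA recording; a real workflow would load exported coordinates.
t = np.linspace(0, 2 * np.pi, 400)
x = 4 * np.sin(t)                    # lateral excursion
y = 10 * np.sin(t / 2)               # anterior-posterior
z = -15 * np.abs(np.sin(t / 2))      # vertical (opening)

# Project the XYZ path onto the three anatomical planes.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (u, v), title in zip(
        axes,
        [(x, z), (y, z), (x, y)],
        ["Frontal (X-Z)", "Sagittal (Y-Z)", "Horizontal (X-Y)"]):
    ax.plot(u, v, lw=0.8)
    ax.set_title(title)
    ax.set_aspect("equal")
plt.tight_layout()
plt.show()
```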
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization and Mapping (SLAM). Yet the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and its use in power-limited applications. Evaluated here is a technique in which a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is used to generate odometry measurements. The visual odometry measurements are intended to serve as control inputs, or as measurements in a sensor fusion algorithm with low-cost MEMS-based inertial sensors, to provide improved localization information. Presented here are visual odometry results that demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is on rover-based robotic applications for localization within GPS-denied environments.
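A minimal sketch of this kind of monocular, downward-looking optical-flow odometry, using OpenCV's corner detection and pyramidal Lucas-Kanade tracking, is shown below. The flat-ground assumption, known camera height, and simple pinhole scaling are simplifications not taken from the paper.

```python
import cv2
import numpy as np

def ground_motion(prev_gray, curr_gray, fx, height_m):
    """Estimate planar camera translation between two downward-looking
    grayscale frames via sparse optical flow (an illustrative sketch,
    not the paper's pipeline). Assumes a flat ground plane at known
    height and pinhole optics with focal length fx in pixels."""
    # Detect corner features in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    # Track the features with pyramidal Lucas-Kanade optical flow.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_old = pts[status.ravel() == 1].reshape(-1, 2)
    good_new = nxt[status.ravel() == 1].reshape(-1, 2)
    if len(good_old) < 10:
        return None                 # too few tracks for a robust estimate
    # Robust image-plane displacement (pixels): median over all features.
    dx_px, dy_px = np.median(good_new - good_old, axis=0)
    # Pinhole scaling: metres per pixel on the ground = height / fx.
    scale = height_m / fx
    return dx_px * scale, dy_px * scale
```

Summing these per-frame displacements gives a dead-reckoned track, which is exactly where the drift and scale sensitivities discussed in the abstract become visible and why fusion with inertial sensors is attractive.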
Code of Federal Regulations, 2010 CFR
2010-07-01
... air course or to the surface and equipped with sensors to monitor for heat and for carbon monoxide or smoke. The sensors shall deenergize power to the compressor, activate a visual and audible alarm located... every 31 days, sensors installed to monitor for carbon monoxide shall be calibrated with a known...
Code of Federal Regulations, 2011 CFR
2011-07-01
... air course or to the surface and equipped with sensors to monitor for heat and for carbon monoxide or smoke. The sensors shall deenergize power to the compressor, activate a visual and audible alarm located... every 31 days, sensors installed to monitor for carbon monoxide shall be calibrated with a known...
Code of Federal Regulations, 2013 CFR
2013-07-01
... air course or to the surface and equipped with sensors to monitor for heat and for carbon monoxide or smoke. The sensors shall deenergize power to the compressor, activate a visual and audible alarm located... every 31 days, sensors installed to monitor for carbon monoxide shall be calibrated with a known...
Code of Federal Regulations, 2014 CFR
2014-07-01
... air course or to the surface and equipped with sensors to monitor for heat and for carbon monoxide or smoke. The sensors shall deenergize power to the compressor, activate a visual and audible alarm located... every 31 days, sensors installed to monitor for carbon monoxide shall be calibrated with a known...
Code of Federal Regulations, 2012 CFR
2012-07-01
... air course or to the surface and equipped with sensors to monitor for heat and for carbon monoxide or smoke. The sensors shall deenergize power to the compressor, activate a visual and audible alarm located... every 31 days, sensors installed to monitor for carbon monoxide shall be calibrated with a known...