Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.
Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu
2015-08-01
This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the feature points tracked in the image sequence and the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
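The record above does not give the estimator equations, so the following is only a minimal sketch of a generic gradient-type adaptive estimator for a measurement model that is linear in the unknown position, in the spirit of Slotine-Li-style parameter adaptation; the regressor matrices A_k, the measurements y_k and the gain gamma are illustrative assumptions, not the authors' formulation.

import numpy as np

def adaptive_position_estimate(A_seq, y_seq, gamma=0.05, x0=None):
    """Gradient-type adaptive estimator for y_k = A_k @ x (x = unknown 2D position).

    A_seq : iterable of (m, 2) regressor matrices (assumed known at each step)
    y_seq : iterable of (m,) measurement vectors
    gamma : adaptation gain (illustrative value)
    """
    x_hat = np.zeros(2) if x0 is None else np.asarray(x0, dtype=float)
    for A_k, y_k in zip(A_seq, y_seq):
        e_k = A_k @ x_hat - y_k              # prediction error
        x_hat = x_hat - gamma * A_k.T @ e_k  # gradient adaptation law
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = np.array([1.5, -0.8])           # "robot position" to be estimated
    A_seq = [rng.normal(size=(3, 2)) for _ in range(300)]
    y_seq = [A @ x_true + 0.01 * rng.normal(size=3) for A in A_seq]
    print("estimate:", adaptive_position_estimate(A_seq, y_seq), "true:", x_true)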
Calibration Of An Omnidirectional Vision Navigation System Using An Industrial Robot
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1989-09-01
The characteristics of an omnidirectional vision navigation system were studied to determine position accuracy for the navigation and path control of a mobile robot. Experiments for calibration and other parameters were performed using an industrial robot to conduct repetitive motions. The accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor provided errors of less than 1 pixel on each axis. Linearity between zenith angle and image location was tested at four different locations. Angular error of less than 1° and radial error of less than 1 pixel were observed at moderate speed variations. The experimental information and the test of coordinated operation of the equipment provide understanding of characteristics as well as insight into the evaluation and improvement of the prototype dynamic omnivision system. The calibration of the sensor is important since the accuracy of navigation influences the accuracy of robot motion. This sensor system is currently being developed for a robot lawn mower; however, wider applications are obvious. The significance of this work is that it adds to the knowledge of the omnivision sensor.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System.
Barone, Sandro; Carulli, Marina; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano
2018-01-31
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from a 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to the 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a fourth-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera.
An Omnidirectional Vision Sensor Based on a Spherical Mirror Catadioptric System
Barone, Sandro; Carulli, Marina; Razionale, Armando Viviano
2018-01-01
The combination of mirrors and lenses, which defines a catadioptric sensor, is widely used in the computer vision field. The definition of a catadioptric sensor is based on three main features: hardware setup, projection modelling and calibration process. In this paper, a complete description of these aspects is given for an omnidirectional sensor based on a spherical mirror. The projection model of a catadioptric system can be described by the forward projection task (FP, from a 3D scene point to 2D pixel coordinates) and the backward projection task (BP, from 2D coordinates to the 3D direction of the incident light). The forward projection of non-central catadioptric vision systems, typically obtained by using curved mirrors, is usually modelled by using a central approximation and/or by adopting iterative approaches. In this paper, an analytical closed-form solution to compute both forward and backward projection for a non-central catadioptric system with a spherical mirror is presented. In particular, the forward projection is reduced to a fourth-order polynomial by determining the reflection point on the mirror surface through the intersection between a sphere and an ellipse. A matrix format of the implemented models, suitable for fast point cloud handling, is also described. A robust calibration procedure is also proposed and applied to calibrate a catadioptric sensor by determining the mirror radius and center with respect to the camera. PMID:29385051
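As a companion illustration of the backward projection (BP) task described in the two records above, the sketch below intersects a pixel's viewing ray with a spherical mirror and reflects it about the local normal. The camera-at-origin geometry, the intrinsic matrix K and the mirror pose are placeholder assumptions, not the calibration result of the paper.

import numpy as np

def backproject(pixel, K, sphere_center, radius):
    """Backward projection for a spherical-mirror catadioptric camera.

    Returns (reflection point on the mirror, unit direction from that point
    toward the scene), or None if the pixel ray misses the mirror.
    The camera is at the origin looking along +z; K is a 3x3 intrinsic matrix.
    """
    # Pixel -> normalized viewing ray through the pinhole camera
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    d = np.linalg.solve(K, uv1)
    d = d / np.linalg.norm(d)

    # Ray / sphere intersection: |t*d - c|^2 = r^2  ->  t^2 + b*t + cc = 0
    c = np.asarray(sphere_center, dtype=float)
    b = -2.0 * d @ c
    cc = c @ c - radius ** 2
    disc = b * b - 4.0 * cc
    if disc < 0:
        return None                     # ray misses the mirror
    t = (-b - np.sqrt(disc)) / 2.0      # nearer intersection (visible surface)
    p = t * d                           # reflection point on the mirror

    # Reflect the viewing ray about the local mirror normal
    n = (p - c) / np.linalg.norm(p - c)
    d_scene = d - 2.0 * (d @ n) * n     # direction from the mirror toward the scene
    return p, d_scene / np.linalg.norm(d_scene)

if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Mirror sphere placed 0.25 m in front of the camera (assumed geometry)
    print(backproject((340.0, 255.0), K, sphere_center=[0.0, 0.0, 0.25], radius=0.05))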
Guidance Of A Mobile Robot Using An Omnidirectional Vision Navigation System
NASA Astrophysics Data System (ADS)
Oh, Sung J.; Hall, Ernest L.
1987-01-01
Navigation and visual guidance are key topics in the design of a mobile robot. Omnidirectional vision using a very wide angle or fisheye lens provides a hemispherical view at a single instant that permits target location without mechanical scanning. The inherent image distortion with this view and the numerical errors accumulated from vision components can be corrected to provide accurate position determination for navigation and path control. The purpose of this paper is to present the experimental results and analyses of the imaging characteristics of the omnivision system including the design of robot-oriented experiments and the calibration of raw results. Errors less than one picture element on each axis were observed by testing the accuracy and repeatability of the experimental setup and the alignment between the robot and the sensor. Similar results were obtained for four different locations using corrected results of the linearity test between zenith angle and image location. Angular error of less than one degree and radial error of less than one Y picture element were observed at moderate relative speed. The significance of this work is that the experimental information and the test of coordinated operation of the equipment provide a greater understanding of the dynamic omnivision system characteristics, as well as insight into the evaluation and improvement of the prototype sensor for a mobile robot. Also, the calibration of the sensor is important, since the results provide a cornerstone for future developments. This sensor system is currently being developed for a robot lawn mower.
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).
Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong
2016-02-06
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis on various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances.
Jaramillo, Carlos; Valenti, Roberto G.; Guo, Ling; Xiao, Jizhong
2016-01-01
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis on various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision applications under different circumstances. PMID:26861351
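The omnistereo records above mention triangulating 3D points from back-projected rays; the following hedged sketch shows a generic midpoint triangulation between two rays, with the residual gap between the rays as a crude reliability cue. The viewpoint positions in the demo are placeholders, and the code is not the authors' probabilistic uncertainty model.

import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays o_i + t_i * d_i.

    Returns (3D point estimate, gap between the two rays); a large gap
    signals an unreliable stereo correspondence.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

if __name__ == "__main__":
    # Two viewpoints on a common vertical axis (folded omnistereo-like layout)
    o_top, o_bot = np.array([0.0, 0.0, 0.10]), np.array([0.0, 0.0, -0.10])
    target = np.array([2.0, 1.0, 0.3])
    point, gap = triangulate_midpoint(o_top, target - o_top, o_bot, target - o_bot)
    print(point, gap)   # ~[2.0, 1.0, 0.3], gap ~0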
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei
2017-10-01
In high precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical direction and 360 deg in the horizontal direction. As the heart of our OSPD, the aspheric lens is designed with a geometric model and optimized using LightTools software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.
Self-localization for an autonomous mobile robot based on an omni-directional vision system
NASA Astrophysics Data System (ADS)
Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin
2013-12-01
In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms that use the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm detects the corners of the field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot; therefore, image transformation was required to implement self-localization. Second, we transformed the omni-directional images into panoramic images, so that the distortion of the white lines could be corrected. The interest points that form the corners of the landmarks were then located using the features from accelerated segment test (FAST) algorithm, a high-speed feature detector suitable for real-time frame rates, in which a circle of sixteen pixels surrounding each corner candidate is examined. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were applied to the corners obtained from the FAST algorithm to localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error in a soccer field measuring 600 cm × 400 cm.
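The unwrapping step described above (sampling the circular image along radial scan lines to obtain a panoramic image) can be sketched as a simple polar-to-Cartesian resampling; the image center, radial band and output width below are placeholders, and a corner detector such as FAST would then be run on the unwrapped strip.

import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=720):
    """Unwrap a circular omnidirectional image into a panoramic strip.

    img    : (H, W) or (H, W, C) array
    center : (cx, cy) image coordinates of the mirror/lens center
    r_min, r_max : radial band (pixels) to unwrap
    out_w  : number of angular samples (panorama width)
    """
    cx, cy = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.arange(r_min, r_max, dtype=float)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")      # (out_h, out_w)
    x = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    y = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[y, x]                                         # nearest-neighbour sampling

if __name__ == "__main__":
    # Synthetic 480x640 test image: a bright ring unwraps into a straight band
    yy, xx = np.mgrid[0:480, 0:640]
    ring = ((np.hypot(xx - 320, yy - 240) > 100) &
            (np.hypot(xx - 320, yy - 240) < 110)).astype(np.uint8) * 255
    pano = unwrap_omni(ring, center=(320, 240), r_min=60, r_max=200)
    print(pano.shape, pano.max())      # (140, 720) 255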
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
Wang, Ren; Wang, Bing-Zhong; Huang, Wei-Ying; Ding, Xiao
2016-04-16
A compact reconfigurable antenna with an omnidirectional mode and four directional modes is proposed. The antenna has a main radiator and four parasitic elements printed on a dielectric substrate. By changing the status of diodes soldered on the parasitic elements, the proposed antenna can generate four directional radiation patterns and one omnidirectional radiation pattern. The main beam directions of the four directional modes are almost orthogonal and the four directional beams can jointly cover a 360° range in the horizontal plane, i.e., the main radiation plane of omnidirectional mode. The whole volume of the antenna and the control network is approximately 0.70 λ × 0.53 λ × 0.02 λ, where λ is the wavelength corresponding to the center frequency. The proposed antenna has a simple structure and small dimensions under the requirement that the directional radiation patterns can jointly cover the main radiation plane of the omnidirectional mode, therefore, it can be used in smart wireless sensor systems for different application scenarios.
In-plane omnidirectional magnetic field sensor based on Giant Magneto Impedance (GMI)
NASA Astrophysics Data System (ADS)
Díaz-Rubio, Ana; García-Miquel, Héctor; García-Chocano, Víctor Manuel
2017-12-01
In this work the design and characterization of an omnidirectional in-plane magnetic field sensor are presented. The sensor is based on the Giant Magneto Impedance (GMI) effect in glass-coated amorphous microwires of composition (Fe6Co94)72.5Si12.5B15. For the first time, a circular loop made with a microwire is used to provide an omnidirectional response. In order to estimate the GMI response of the circular loop, we have used a theoretical model of GMI, determining the GMI response as the sum of longitudinal sections with different angles of incidence. As a consequence of the circular loop, the GMI ratio of the sensor is reduced to 15%, instead of 100% for the axial GMI response of a microwire. The sensor response has been experimentally verified, and the GMI response of the circular loop has been studied as a function of the magnetic field, driving current, and frequency. First, we measured the GMI response of a longitudinal microwire for different angles of incidence, covering the full range between the tangential and perpendicular directions to the microwire axis. Then, using these results, we experimentally verified the decomposition of a circularly shaped microwire into longitudinal segments with different angles of incidence. Finally, we designed a signal conditioning circuit for the omnidirectional magnetic field sensor. The response of the sensor has been studied as a function of the amplitude of the incident magnetic field.
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for the elderly persons and (2) we design a novel motion history or energy images based algorithm for motion object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real-time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761
Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.
Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen
2016-12-20
Improved ranging accuracy is obtained through a novel ultrasonic sensor ranging algorithm that, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and to obtain the localization of the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on a mobile robot and follow the spin of the robot, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front distance, up to ±6.21 mm for the lateral distance, and up to ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.
Wang, Ren; Wang, Bing-Zhong; Huang, Wei-Ying; Ding, Xiao
2016-01-01
A compact reconfigurable antenna with an omnidirectional mode and four directional modes is proposed. The antenna has a main radiator and four parasitic elements printed on a dielectric substrate. By changing the status of diodes soldered on the parasitic elements, the proposed antenna can generate four directional radiation patterns and one omnidirectional radiation pattern. The main beam directions of the four directional modes are almost orthogonal and the four directional beams can jointly cover a 360° range in the horizontal plane, i.e., the main radiation plane of omnidirectional mode. The whole volume of the antenna and the control network is approximately 0.70 λ × 0.53 λ × 0.02 λ, where λ is the wavelength corresponding to the center frequency. The proposed antenna has a simple structure and small dimensions under the requirement that the directional radiation patterns can jointly cover the main radiation plane of the omnidirectional mode, therefore, it can be used in smart wireless sensor systems for different application scenarios. PMID:27092512
NASA Astrophysics Data System (ADS)
Gai, V. E.; Polyakov, I. V.; Krasheninnikov, M. S.; Koshurina, A. A.; Dorofeev, R. A.
2017-01-01
Currently, the "Transport" scientific and educational center of NNSTU is working on the creation of a universal rescue vehicle. This vehicle is a robot intended to reduce the number of human victims in accidents on offshore oil platforms. An actual problem is the development of a method for determining the location of a person overboard in low-visibility conditions, when traditional vision is not effective. One of the most important sensory systems of the robot is the acoustic sensor system, because it is omnidirectional and does not require the acoustic source to be within the field of view. Features of the acoustic sensor system can complement the capabilities of the video sensor in solving the problem of localizing a person or an event in the environment. This paper describes a method for determining the direction of an acoustic source using just one microphone. The proposed method is based on the active perception theory.
Development of a novel omnidirectional magnetostrictive transducer for plate applications
NASA Astrophysics Data System (ADS)
Vinogradov, Sergey; Cobb, Adam; Bartlett, Jonathan; Udagawa, Youichi
2018-04-01
The application of guided waves for the testing of plate-type structures has recently been investigated by a number of research groups due to the ability of guided waves to detect corrosion in remote and hidden areas. Guided wave sensors for plate applications can be either directed (i.e., the waves propagate in a single direction) or omnidirectional. Each type has certain advantages and disadvantages. Omnidirectional sensors can inspect large areas from a single location, but it is challenging to define where a feature is located. Conversely, directed sensors can be used to precisely locate an indication, but have no sensitivity to flaws away from the wave propagation direction. This work describes a newly developed sensor that combines the strengths of both sensor types to create a novel omnidirectional transducer. The sensor transduction is based on a custom magnetostrictive transducer (MsT). In this new probe design, a directed, plate-application MsT with known characteristics was incorporated into an automated scanner. This scanner rotates the directed MsT for data collection at regular intervals. Coupling of the transducer to the plate is accomplished using a shear wave couplant. The array of data that is received is used for compiling B-scans and imaging, utilizing the synthetic aperture focusing technique (SAFT). The performance of the probe was evaluated on a 0.5-inch thick carbon steel plate mockup with a surface area of over 100 square feet. The mockup had a variety of known anomalies representing localized and distributed pitting corrosion, gradual wall thinning, and notches of different depths. Experimental data were also acquired using the new probe on a retired storage tank with known corrosion damage. The performance of the new sensor and its limitations are discussed together with general directions in technology development.
Iconic memory-based omnidirectional route panorama navigation.
Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko
2005-01-01
A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.
Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu
2018-02-01
Despite tremendous efforts made over the years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using only its onboard sensors. In this paper, a newly designed adaptive trajectory TC method is proposed for an NMR without measurements of its position, orientation, and velocity. The controller is designed on the basis of a novel algorithm that estimates the position and velocity of the robot online from the visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm makes the TC errors asymptotically converge to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors
Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar
2015-01-01
This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
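As a hedged illustration of a Radon-transform-based global-appearance descriptor with nearest-neighbour matching against a stored map, the sketch below uses scikit-image's radon function; the descriptor size, resizing choices and random stand-in images are assumptions, not the authors' exact descriptor or databases.

import numpy as np
from skimage.transform import radon, resize

def radon_descriptor(image, n_angles=60, out_len=32):
    """Global-appearance descriptor: a downsampled, normalized Radon sinogram."""
    img = resize(image, (64, 64), anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(img, theta=theta, circle=False)          # (proj_len, n_angles)
    sino = resize(sino, (out_len, n_angles), anti_aliasing=True)
    desc = sino.flatten()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def localize(query_desc, map_descs):
    """Return the index of the stored map image whose descriptor is closest."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    return int(np.argmin(dists)), float(dists.min())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    map_imgs = [rng.random((64, 64)) for _ in range(5)]    # stand-ins for panoramas
    map_descs = np.stack([radon_descriptor(im) for im in map_imgs])
    query = map_imgs[3] + 0.05 * rng.random((64, 64))      # noisy revisit of node 3
    print(localize(radon_descriptor(query), map_descs))    # -> (3, small distance)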
ODIS the under-vehicle inspection robot: development status update
NASA Astrophysics Data System (ADS)
Freiburger, Lonnie A.; Smuda, William; Karlsen, Robert E.; Lakshmanan, Sridhar; Ma, Bing
2003-09-01
Unmanned ground vehicle (UGV) technology can be used in a number of ways to assist in counter-terrorism activities. Robots can be employed for a host of terrorism deterrence and detection applications. As reported in last year's Aerosense conference, the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) and Utah State University (USU) have developed a tele-operated robot called ODIS (Omnidirectional Inspection System) that is particularly effective in performing under-vehicle inspections at security checkpoints. ODIS' continuing development for this task is heavily influenced by feedback received from soldiers and civilian law enforcement personnel using ODIS-prototypes in an operational environment. Our goal is to convince civilian law enforcement and military police to replace the traditional "mirror on a stick" system of looking under cars for bombs and contraband with ODIS. This paper reports our efforts in the past one year in terms of optimizing ODIS for the visual inspection task. Of particular concern is the design of the vision system. This paper documents details on the various issues relating to ODIS' vision system - sensor, lighting, image processing, and display.
Ranjbar, Parivash; Stenström, Ingeborg
2013-01-01
Monitor is a portable vibrotactile aid to improve the ability of people with severe hearing impairment or deafblindness to detect, identify, and recognize the direction of sound-producing events. It transforms and adapts sounds to the frequency sensitivity range of the skin. The aid was evaluated in the field. Four females (44-54 years) with Usher Syndrome I (three with tunnel vision and one with only light perception) tested the aid at home and in traffic in three different field studies: without Monitor, with Monitor with an omnidirectional microphone, and with Monitor with a directional microphone. The tests were video-documented, and the two field studies with Monitor were initiated after five weeks of training. The detection scores with omnidirectional and directional microphones were 100% for three participants and above 57% for one, both in their home and traffic environments. In the home environment the identification scores with the omnidirectional microphone were 70%-97% and 58%-95% with the directional microphone. The corresponding values in traffic were 29%-100% and 65%-100%, respectively. Their direction perception was improved to some extent by both microphones. Monitor improved the ability of people with deafblindness to detect, identify, and recognize the direction of events producing sounds.
HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Lin, Huei-Yung; Wang, Min-Liang
2014-09-04
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots
Lin, Huei-Yung.; Wang, Min-Liang.
2014-01-01
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317
Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan
2017-01-01
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan
2017-09-10
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
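The two records above mention fusing wheel-encoder odometry with a visual sensor through an extended Kalman filter; the following is a minimal planar-EKF sketch (odometry prediction, position-only correction) under assumed noise values, not the specific processing used in the paper.

import numpy as np

class PlanarEKF:
    """Minimal EKF for state [x, y, theta] with odometry prediction and a
    position-only correction (illustrative of encoder + visual-sensor fusion)."""

    def __init__(self):
        self.x = np.zeros(3)                       # [x, y, theta]
        self.P = np.eye(3) * 0.01
        self.Q = np.diag([0.01, 0.01, 0.005])      # process noise (assumed)
        self.R = np.diag([0.05, 0.05]) ** 2        # position-fix noise (assumed)

    def predict(self, v, w, dt):
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           th + w * dt])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update_position(self, z):
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P

if __name__ == "__main__":
    ekf = PlanarEKF()
    for _ in range(50):                    # drive forward while turning slightly
        ekf.predict(v=0.2, w=0.05, dt=0.1)
    ekf.update_position([0.95, 0.05])      # simulated visual position fix
    print(ekf.x)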
Portable haptic interface with omni-directional movement and force capability.
Avizzano, Carlo Alberto; Satler, Massimo; Ruffaldi, Emanuele
2014-01-01
We describe the design of a new mobile haptic interface that employs wheels for force rendering. The interface, consisting of an omni-directional Killough type platform, provides 2DOF force feedback with different control modalities. The system autonomously performs sensor fusion for localization and force rendering. This paper explains the relevant choices concerning the functional aspects, the control design, the mechanical and electronic solution. Experimental results for force feedback characterization are reported.
Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun
2010-01-01
In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. Arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
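A hedged sketch of the distance-power gradient (log-distance path-loss) model mentioned above: fit the path-loss exponent from calibration pairs by ordinary least squares and invert the model to estimate range. The calibration values are synthetic, and the paper's modified robust regression and azimuth estimation are not reproduced.

import numpy as np

def fit_path_loss(distances, rss, d0=1.0):
    """Fit RSS(d) = P0 - 10*n*log10(d/d0) by least squares.

    Returns (P0 in dBm at reference distance d0, path-loss exponent n)."""
    X = np.column_stack([np.ones(len(distances)),
                         -10.0 * np.log10(np.asarray(distances) / d0)])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(rss), rcond=None)
    return coeffs[0], coeffs[1]

def rss_to_distance(rss, P0, n, d0=1.0):
    """Invert the model to estimate range from a measured RSS value."""
    return d0 * 10.0 ** ((P0 - rss) / (10.0 * n))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d_cal = np.array([1, 2, 4, 8, 16], dtype=float)
    rss_cal = -40.0 - 10 * 2.2 * np.log10(d_cal) + rng.normal(0, 0.5, d_cal.size)
    P0, n = fit_path_loss(d_cal, rss_cal)
    print(f"P0={P0:.1f} dBm, n={n:.2f}, d(-62 dBm)={rss_to_distance(-62, P0, n):.1f} m")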
Payá, Luis; Reinoso, Oscar; Jiménez, Luis M; Juliá, Miguel
2017-01-01
Along the past years, mobile robots have proliferated both in domestic and in industrial environments to solve some tasks such as cleaning, assistance, or material transportation. One of their advantages is the ability to operate in wide areas without the necessity of introducing changes into the existing infrastructure. Thanks to the sensors they may be equipped with and their processing systems, mobile robots constitute a versatile alternative to solve a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, it is expected to take decisions about its localization in the environment and about the trajectory that it has to follow in order to arrive to the target points. More concisely, the robot has to find a relatively good solution to two crucial problems: building a model of the environment, and estimating the position of the robot within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images from the environment. First, the robot goes along the trajectories to include in the model and uses the visual information captured to build this model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to solve these problems, global appearance techniques are used in this work. They have emerged recently as a robust and efficient alternative compared to landmark extraction techniques. A global description method based on Radon Transform is used to design mapping and localization algorithms and a set of images captured by a mobile robot in a real environment, under realistic operation conditions, is used to test the performance of these algorithms.
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirror or fisheye lens where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's image resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired from the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in surveillance domain such as large perimeter object tracking, very-high resolution depth map estimation and high dynamicrange imaging which are beyond standard stitching and panorama generation methods.
Integration of Directional Antennas in an RSS Fingerprinting-Based Indoor Localization System
Guzmán-Quirós, Raúl; Martínez-Sala, Alejandro; Gómez-Tornero, José Luis; García-Haro, Joan
2015-01-01
In this paper, the integration of directional antennas in a room-level received signal strength (RSS) fingerprinting-based indoor localization system (ILS) is studied. The sensor reader (SR), which is in charge of capturing the RSS to infer the tag position, can be attached to an omnidirectional or directional antenna. Unlike commonly-employed omnidirectional antennas, directional antennas can receive a stronger signal from the direction in which they are pointed, resulting in a different RSS distributions in space and, hence, more distinguishable fingerprints. A simulation tool and a system management software have been also developed to control the system and assist the initial antenna deployment, reducing time-consuming costs. A prototype was mounted in a real scenario, with a number of SRs with omnidirectional and directional antennas properly positioned. Different antenna configurations have been studied, evidencing a promising capability of directional antennas to enhance the performance of RSS fingerprinting-based ILS, reducing the number of required SRs and also increasing the localization success. PMID:26703620
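The core matching step of an RSS fingerprinting localization system can be sketched as a weighted k-nearest-neighbour lookup over stored fingerprints; the tiny three-reader database below is an invented example, not data from the deployed prototype.

import numpy as np

def knn_fingerprint_locate(query_rss, fingerprint_db, positions, k=3):
    """Weighted k-nearest-neighbour position estimate from an RSS fingerprint DB.

    query_rss      : (n_readers,) RSS vector observed for the tag
    fingerprint_db : (n_points, n_readers) calibration RSS vectors
    positions      : (n_points, 2) coordinates of the calibration points
    """
    d = np.linalg.norm(fingerprint_db - query_rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                 # closer fingerprints weigh more
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

if __name__ == "__main__":
    db = np.array([[-50., -70., -80.], [-70., -50., -80.], [-80., -70., -50.]])
    pts = np.array([[0., 0.], [5., 0.], [0., 5.]])
    print(knn_fingerprint_locate(np.array([-52., -69., -79.]), db, pts, k=2))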
Maximally Informative Statistics for Localization and Mapping
NASA Technical Reports Server (NTRS)
Deans, Matthew C.
2001-01-01
This paper presents an algorithm for localization and mapping for a mobile robot using monocular vision and odometry as its means of sensing. The approach uses the Variable State Dimension filtering (VSDF) framework to combine aspects of Extended Kalman filtering and nonlinear batch optimization. This paper describes two primary improvements to the VSDF. The first is to use an interpolation scheme based on Gaussian quadrature to linearize measurements rather than relying on analytic Jacobians. The second is to replace the inverse covariance matrix in the VSDF with its Cholesky factor to improve the computational complexity. Results of applying the filter to the problem of localization and mapping with omnidirectional vision are presented.
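To illustrate the stated change of replacing the inverse covariance matrix with its Cholesky factor, the sketch below solves a Gauss-Newton-style normal-equation step with SciPy's cho_factor/cho_solve instead of forming an explicit inverse; the Jacobian, residuals and prior are random placeholders, and the code is not the VSDF implementation.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_normal_equations(J, r, prior_info=None):
    """Solve (J^T J + Lambda) dx = -J^T r via a Cholesky factorization.

    J          : (m, n) stacked measurement Jacobian
    r          : (m,) residual vector
    prior_info : optional (n, n) prior information matrix (inverse covariance)
    The information matrix is factorized once and never explicitly inverted.
    """
    Lam = np.zeros((J.shape[1], J.shape[1])) if prior_info is None else prior_info
    info = J.T @ J + Lam                      # information (inverse covariance) matrix
    c, low = cho_factor(info)                 # keep the triangular factor, not info^-1
    dx = cho_solve((c, low), -J.T @ r)        # back-substitution per right-hand side
    cov_diag = cho_solve((c, low), np.eye(info.shape[0])).diagonal()
    return dx, cov_diag

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    J = rng.normal(size=(40, 6))
    r = rng.normal(size=40)
    dx, var = solve_normal_equations(J, r, prior_info=1e-3 * np.eye(6))
    print(dx.round(3), var.round(4))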
NASA Astrophysics Data System (ADS)
Park, Chul-Soon; Shrestha, Vivek Raj; Lee, Sang-Shin; Kim, Eun-Soo; Choi, Duk-Yong
2015-02-01
We present a highly efficient omnidirectional color filter that takes advantage of an Ag-TiO2-Ag nano-resonator integrated with a phase-compensating TiO2 overlay. The dielectric overlay substantially improves the angular sensitivity by appropriately compensating for the phase pertaining to the structure and suppresses unwanted optical reflection so as to elevate the transmission efficiency. The filter is thoroughly designed, and it is analyzed in terms of its reflection, optical admittance, and phase shift, thereby highlighting the origin of the omnidirectional resonance leading to angle-invariant characteristics. The polarization dependence of the filter is explored, specifically with respect to the incident angle, by performing experiments as well as by providing the relevant theoretical explanation. We could succeed in demonstrating the omnidirectional resonance for the incident angles ranging to up to 70°, over which the center wavelength is shifted by below 3.5% and the peak transmission efficiency is slightly degraded from 69%. The proposed filters incorporate a simple multi-layered structure and are expected to be utilized as tri-color pixels for applications that include image sensors and display devices. These devices are expected to allow good scalability, not requiring complex lithographic processes.
Sensor deployment on unmanned ground vehicles
NASA Astrophysics Data System (ADS)
Gerhart, Grant R.; Witus, Gary
2007-10-01
TARDEC has been developing payloads for small robots as part of its unmanned ground vehicle (UGV) development programs. These platforms typically weigh less than 100 lbs and are used for various physical security and force protection applications. This paper will address a number of technical issues including platform mobility, payload positioning, sensor configuration and operational tradeoffs. TARDEC has developed a number of robots with different mobility mechanisms including track, wheel and hybrid track/wheel running gear configurations. An extensive discussion will focus upon omni-directional vehicle (ODV) platforms with enhanced intrinsic mobility for positioning sensor payloads. This paper also discusses tradeoffs between intrinsic platform mobility and articulated arm complexity for end point positioning of modular sensor packages.
Impact localization on composite structures using time difference and MUSIC approach
NASA Astrophysics Data System (ADS)
Zhong, Yongteng; Xiang, Jiawei
2017-05-01
A 1-D uniform linear array (ULA) has the shortcoming of the half-plane mirror effect, which does not allow discriminating between a target placed above the array and a target placed below it. This paper presents time difference (TD) and multiple signal classification (MUSIC) based omni-directional impact localization on a large stiffened composite structure using an improved linear array, which is able to perform omni-directional 360° localization. The array contains 2M+3 PZT sensors, where 2M+1 PZT sensors are arranged as a uniform linear array and the other two PZT sensors are placed above and below the array. First, the arrival times of the impact signals observed by these two additional sensors are determined using the wavelet transform. By comparing them, the general direction range of the impact source can be decided: 0° to 180° or 180° to 360°. Then, a two-dimensional multiple signal classification (2D-MUSIC) based spatial spectrum formula using the uniform linear array is applied for impact localization within that direction range. When the arrival time of the impact signal observed by the upper PZT equals that of the lower PZT, the direction lies along the x axis (0° or 180°), and a time difference based MUSIC method is presented to locate the impact position. To verify the proposed approach, it is applied to a composite structure. The localization results are in good agreement with the actual impact positions.
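As a hedged illustration of the MUSIC idea used above, the sketch below computes a narrowband far-field MUSIC pseudo-spectrum for a uniform linear array from the noise subspace of the sample covariance; the near-field 2D-MUSIC steering vector and the wavelet arrival-time step of the paper are not reproduced, and the element spacing and source in the demo are assumptions.

import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles_deg=np.arange(0, 181)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    X : (n_sensors, n_snapshots) complex array data
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                 # sample covariance
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvec[:, : n_sensors - n_sources]         # noise subspace
    k = np.arange(n_sensors)
    spec = []
    for ang in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.cos(ang))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.asarray(spec)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    n_sensors, n_snap, true_deg = 9, 200, 60.0
    k = np.arange(n_sensors)
    a = np.exp(-2j * np.pi * 0.5 * k * np.cos(np.deg2rad(true_deg)))
    s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
    X = np.outer(a, s) + 0.1 * (rng.normal(size=(n_sensors, n_snap))
                                + 1j * rng.normal(size=(n_sensors, n_snap)))
    spec = music_spectrum(X, n_sources=1)
    print("estimated DOA:", np.argmax(spec), "deg")   # ~60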
Won, Tae-Hee; Park, Sung-Joon
2012-01-01
For decades, underwater acoustic communication has been restricted to the point-to-point long distance applications such as deep sea probes and offshore oil fields. For this reason, previous acoustic modems were typically characterized by high data rates and long working ranges at the expense of large size and high power consumption. Recently, as the need for underwater wireless sensor networks (UWSNs) has increased, the research and development of compact and low-power consuming communication devices has become the focus. From the consideration that the requisites of acoustic modems for UWSNs are low power consumption, omni-directional beam pattern, low cost and so on, in this paper, we design and implement an omni-directional underwater acoustic micro-modem satisfying these requirements. In order to execute fast digital domain signal processing and support flexible interfaces with other peripherals, an ARM Cortex-M3 is embedded in the micro-modem. Also, for the realization of small and omni-directional properties, a spherical transducer having a resonant frequency of 70 kHz and a diameter of 34 mm is utilized for the implementation. Physical layer frame format and symbol structure for efficient packet-based underwater communication systems are also investigated. The developed acoustic micro-modem is verified analytically and experimentally in indoor and outdoor environments in terms of functionality and performance. Since the modem satisfies the requirements for use in UWSNs, it could be deployed in a wide range of applications requiring underwater acoustic communication.
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of the vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
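A minimal discrete-time simulation of the PLL principle described above, assuming a cosine-modulated light intensity as the reference and a numerically controlled oscillator as the frame clock; the gains and frequencies are illustrative, and the mixed analog/digital in-pixel computation of the actual sensor is not modelled.

import numpy as np

def run_pll(ref_phase0=1.7, f_ref=1000.0, fs=100_000.0, n=100_000,
            kp=0.02, ki=2e-5):
    """Discrete-time PLL locking a local frame clock to a modulated light reference.

    Returns the phase error (rad) over time; it should decay toward zero.
    """
    dt = 1.0 / fs
    phase_lo = 0.0                      # local oscillator (frame clock) phase
    integ = 0.0                         # integral term of the PI loop filter
    err_hist = np.empty(n)
    for i in range(n):
        t = i * dt
        ref = np.cos(2 * np.pi * f_ref * t + ref_phase0)     # measured intensity
        # Phase detector: reference times quadrature of the local oscillator;
        # its low-frequency component is ~ sin(phase difference) / 2.
        pd = ref * -np.sin(phase_lo)
        integ += ki * pd
        ctrl = kp * pd + integ                                # PI loop filter
        # NCO update: ctrl acts as a fractional frequency correction
        phase_lo += 2 * np.pi * f_ref * (1.0 + ctrl) * dt
        err = ((2 * np.pi * f_ref * t + ref_phase0) - phase_lo + np.pi) % (2 * np.pi) - np.pi
        err_hist[i] = err
    return err_hist

if __name__ == "__main__":
    err = run_pll()
    print("final |phase error| (rad):", abs(err[-5000:]).mean())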
Hand-writing motion tracking with vision-inertial sensor fusion: calibration and error correction.
Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J
2014-08-25
The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model.
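The Allan Variance analysis mentioned above can be sketched as follows for a sampled gyro rate record (overlapping Allan deviation computed from the integrated angle); the sampling rate, record length and noise level are illustrative assumptions.

import numpy as np

def allan_deviation(rate, fs, m_list=None):
    """Overlapping Allan deviation of a rate signal sampled at fs (Hz).

    Returns (cluster times tau, Allan deviation) as numpy arrays.
    """
    rate = np.asarray(rate, dtype=float)
    n = rate.size
    theta = np.cumsum(rate) / fs                  # integrated angle
    if m_list is None:
        m_list = np.unique(np.logspace(0, np.log10(n // 3), 40).astype(int))
    taus, adev = [], []
    for m in m_list:
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avar = (d ** 2).mean() / (2.0 * (m / fs) ** 2)
        taus.append(m / fs)
        adev.append(np.sqrt(avar))
    return np.array(taus), np.array(adev)

if __name__ == "__main__":
    fs, hours = 100.0, 0.5
    rng = np.random.default_rng(5)
    n = int(fs * 3600 * hours)
    gyro = 0.02 * rng.normal(size=n)              # white (angle random walk) noise
    tau, adev = allan_deviation(gyro, fs)
    # For white rate noise the Allan deviation follows tau^(-1/2); check the slope
    print(np.polyfit(np.log10(tau), np.log10(adev), 1)[0])   # ~ -0.5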
Curiac, Daniel-Ioan
2016-04-07
Being often deployed in remote or hostile environments, wireless sensor networks are vulnerable to various types of security attacks. A possible solution to reduce the security risks is to use directional antennas instead of omnidirectional ones or in conjunction with them. Due to their increased complexity, higher costs and larger sizes, directional antennas are not traditionally used in wireless sensor networks, but recent technology trends may support this method. This paper surveys existing state of the art approaches in the field, offering a broad perspective of the future use of directional antennas in mitigating security risks, together with new challenges and open research issues.
Hand-Writing Motion Tracking with Vision-Inertial Sensor Fusion: Calibration and Error Correction
Zhou, Shengli; Fei, Fei; Zhang, Guanglie; Liu, Yunhui; Li, Wen J.
2014-01-01
The purpose of this study was to improve the accuracy of real-time ego-motion tracking through inertial sensor and vision sensor fusion. Due to low sampling rates supported by web-based vision sensor and accumulation of errors in inertial sensors, ego-motion tracking with vision sensors is commonly afflicted by slow updating rates, while motion tracking with inertial sensor suffers from rapid deterioration in accuracy with time. This paper starts with a discussion of developed algorithms for calibrating two relative rotations of the system using only one reference image. Next, stochastic noises associated with the inertial sensor are identified using Allan Variance analysis, and modeled according to their characteristics. Finally, the proposed models are incorporated into an extended Kalman filter for inertial sensor and vision sensor fusion. Compared with results from conventional sensor fusion models, we have shown that ego-motion tracking can be greatly enhanced using the proposed error correction model. PMID:25157546
Egomotion Estimation with Optic Flow and Air Velocity Sensors
2012-01-22
This report is... method of distance and groundspeed estimation using an omnidirectional camera, but knowledge of the average scene distance is required. Flight height... varying wind and even over sloped terrain. Our method also does not require any prior knowledge of the environment or the flyer motion states.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Curiac, Daniel-Ioan
2016-01-01
Being often deployed in remote or hostile environments, wireless sensor networks are vulnerable to various types of security attacks. A possible solution to reduce the security risks is to use directional antennas instead of omnidirectional ones or in conjunction with them. Due to their increased complexity, higher costs and larger sizes, directional antennas are not traditionally used in wireless sensor networks, but recent technology trends may support this method. This paper surveys existing state of the art approaches in the field, offering a broad perspective of the future use of directional antennas in mitigating security risks, together with new challenges and open research issues. PMID:27070601
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time digital imaging for machine vision has proven prohibitive within control systems that employ low-power single processors, unless the scope of vision or the resolution of the captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor is developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied to solving real-world machine vision problems.
NASA Astrophysics Data System (ADS)
Han, Mengdi; Zhang, Xiao-Sheng; Sun, Xuming; Meng, Bo; Liu, Wen; Zhang, Haixia
2014-04-01
The triboelectric nanogenerator (TENG) is a promising device in energy harvesting and self-powered sensing. In this work, we demonstrate a magnetic-assisted TENG, utilizing the magnetic force for electric generation. A maximum power density of 541.1 mW/m2 is obtained at 16.67 MΩ for the triboelectric part, while the electromagnetic part can provide a power density of 649.4 mW/m2 at 16 Ω. Through theoretical calculation and experimental measurement, a linear relationship between the tilt angle and the output voltage is observed at large angles. On this basis, a self-powered omnidirectional tilt sensor is realized by two magnetic-assisted TENGs, which can measure the magnitude and direction of the tilt angle at the same time. For visualized sensing of the tilt angle, a sensing system is established, which is portable, intuitive, and self-powered. This visualized system greatly simplifies the measurement process and promotes the development of self-powered systems.
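As a rough illustration of the two-sensor tilt measurement described above, the sketch below combines two orthogonal outputs that are assumed to vary linearly with the tilt-angle component along their respective axes; the sensitivity constant and voltages are invented for the example and are not values from the paper.

```python
import math

def tilt_from_voltages(vx, vy, k):
    """Recover tilt magnitude and direction from two orthogonal sensor outputs.

    Assumes each output is approximately linear in the tilt-angle component
    along its axis: v = k * angle, with k in volts per degree (illustrative).
    """
    ax = vx / k                                    # tilt component sensed by unit x (deg)
    ay = vy / k                                    # tilt component sensed by unit y (deg)
    magnitude = math.hypot(ax, ay)                 # overall tilt angle (deg)
    direction = math.degrees(math.atan2(ay, ax)) % 360.0  # azimuth of the tilt plane
    return magnitude, direction

print(tilt_from_voltages(1.2, 0.7, k=0.1))         # -> about (13.9 deg, 30.3 deg)
```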
2006-07-27
Report fragments: the goal of this project was to develop analytical and computational tools to make vision a viable sensor. The framework of stereoscopic segmentation was proposed, in which multiple images of the same objects are jointly processed to extract geometry.
NASA Technical Reports Server (NTRS)
Bedard, A. J., Jr.; Nishiyama, R. T.
1993-01-01
Instruments developed for making meteorological observations under adverse conditions on Earth can be applied to systems designed for other planetary atmospheres. Specifically, a wind sensor developed for making measurements within tornados is capable of detecting induced pressure differences proportional to wind speed. Adding strain gauges to the sensor would provide wind direction. The device can be constructed in a rugged form for measuring high wind speeds in the presence of blowing dust that would clog bearings and plug passages of conventional wind speed sensors. Sensing static pressure in the lower boundary layer required development of an omnidirectional, tilt-insensitive static pressure probe. The probe provides pressure inputs to a sensor with minimum error and is inherently weather-protected. The wind sensor and static pressure probes have been used in a variety of field programs and can be adapted for use in different planetary atmospheres.
Egomotion Estimation with Optic Flow and Air Velocity Sensors
2012-09-17
Report fragments: Franz et al. (2004) developed a method of distance and groundspeed estimation using an omnidirectional camera, but knowledge of the average scene distance is required; the method described in this report works in both constant and varying wind and even over sloped terrain, and does not require any prior knowledge of the environment.
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structural vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and by a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
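For context, the conventional baseline that the two modified algorithms are compared against can be summarized compactly: estimate the integer-pixel shift from a cross-correlation peak, then refine it to subpixel precision. The sketch below does this for 1-D windows with an FFT cross-correlation and parabolic peak interpolation; it is an illustrative stand-in, not the paper's modified Taylor approximation or localization refinement algorithms.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the fractional shift between two windows via circular
    cross-correlation with parabolic interpolation around the peak."""
    n = len(a)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    shift = k + frac
    return shift if shift <= n / 2 else shift - n  # wrap to a signed shift

# Illustrative check: a Gaussian pulse delayed by 0.37 samples.
t = np.arange(512)
a = np.exp(-0.5 * ((t - 200.0) / 6.0) ** 2)
b = np.exp(-0.5 * ((t - 200.37) / 6.0) ** 2)
print(subpixel_shift(a, b))                        # close to -0.37 with this sign convention
```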
3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach
Vlaminck, Michiel; Luong, Hiep; Goeman, Werner; Philips, Wilfried
2016-01-01
In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner, as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind, and hence, it does not make any assumption about the scene or about the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutive generated point clouds in order to assure a perfect alignment with the global 3D map. In order to cope with drift, the system incorporates loop closure by determining the pose error and propagating it back in the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. We can state that the average distance of corresponding point pairs between the ground truth and estimated point cloud approximates one centimeter for an area covering approximately 4000 m2. To prove the genericity of the system, it was tested on the well-known Kitti vision benchmark. The results show that our approach competes with state of the art methods without making any additional assumptions. PMID:27854315
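The centimeter-level accuracy figure quoted above is an average distance between corresponding point pairs of the estimated and ground-truth clouds. A minimal way to approximate such a metric is nearest-neighbour pairing, sketched below with synthetic placeholder clouds; SciPy is assumed to be available, and the numbers are not from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(estimated, ground_truth):
    """Average distance from each estimated point to its nearest ground-truth point."""
    tree = cKDTree(ground_truth)
    dists, _ = tree.query(estimated, k=1)
    return float(dists.mean())

# Synthetic placeholders: a ground-truth cloud and an estimate with ~1 cm of noise.
rng = np.random.default_rng(1)
gt = rng.uniform(0.0, 60.0, size=(50_000, 3))      # coordinates in metres
est = gt + rng.normal(0.0, 0.01, size=gt.shape)    # ~1 cm perturbation per axis
print(f"mean point-pair distance: {mean_nn_distance(est, gt):.4f} m")
```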
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Brian E.; Oppel III, Fred J.
2017-01-25
This package contains modules that model a visual sensor in Umbra. It is typically used to represent eyesight of characters in Umbra. This library also includes the sensor property, seeable, and an Active Denial sensor.
Guided wave and damage detection in composite laminates using different fiber optic sensors.
Li, Fucai; Murayama, Hideaki; Kageyama, Kazuro; Shirai, Takehiro
2009-01-01
Guided wave detection using different fiber optic sensors and their applications in damage detection for composite laminates were systematically investigated and compared in this paper. Two types of fiber optic sensors, namely fiber Bragg gratings (FBG) and Doppler effect-based fiber optic (FOD) sensors, were addressed and guided wave detection systems were constructed for both types. Guided waves generated by a piezoelectric transducer were propagated through a quasi-isotropic carbon fiber reinforced plastic (CFRP) laminate and acquired by these fiber optic sensors. Characteristics of these fiber optic sensors in ultrasonic guided wave detection were systematically compared. Results demonstrated that both the FBG and FOD sensors can be applied in guided wave and damage detection for the CFRP laminates. The signal-to-noise ratio (SNR) of guided wave signal captured by an FOD sensor is relatively high in comparison with that of the FBG sensor because of their different physical principles in ultrasonic detection. Further, the FOD sensor is sensitive to the damage-induced fundamental shear horizontal (SH(0)) guided wave that, however, cannot be detected by using the FBG sensor, because the FOD sensor is omnidirectional in ultrasound detection and, in contrast, the FBG sensor is severely direction dependent.
Measurements and Analysis of Reverberation and Clutter Data
2007-04-01
Report fragments: the work concerns triplet arrays and the DRDC array with combined omnidirectional and dipole sensors; a fast shallow-water reverberation model was extended. Bistatic reverberation models are too slow for inversion, so model-data comparisons will be made using ray-based models (e.g., GSM) or normal-mode models.
Ionic polymer-metal composite torsional sensor: physics-based modeling and experimental validation
NASA Astrophysics Data System (ADS)
Aidi Sharif, Montassar; Lei, Hong; Khalid Al-Rubaiai, Mohammed; Tan, Xiaobo
2018-07-01
Ionic polymer-metal composites (IPMCs) have intrinsic sensing and actuation properties. Typical IPMC sensors are in the shape of beams and only respond to stimuli acting along beam-bending directions. Rod or tube-shaped IPMCs have been explored as omnidirectional bending actuators or sensors. In this paper, physics-based modeling is studied for a tubular IPMC sensor under pure torsional stimulus. The Poisson–Nernst–Planck model is used to describe the fundamental physics within the IPMC, where it is hypothesized that the anion concentration is coupled to the sum of shear strains induced by the torsional stimulus. Finite element simulation is conducted to solve for the torsional sensing response, where some of the key parameters are identified based on experimental measurements using an artificial neural network. Additional experimental results suggest that the proposed model is able to capture the torsional sensing dynamics for different amplitudes and rates of the torsional stimulus.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
2011-11-01
This report (RX-TY-TR-2011-0096-01) summarizes the development of a novel computer vision sensor based upon the biological vision system of the common housefly, Musca domestica.
Han, Mengdi; Zhang, Xiao-Sheng; Sun, Xuming; Meng, Bo; Liu, Wen; Zhang, Haixia
2014-01-01
The triboelectric nanogenerator (TENG) is a promising device in energy harvesting and self-powered sensing. In this work, we demonstrate a magnetic-assisted TENG, utilizing the magnetic force for electric generation. A maximum power density of 541.1 mW/m2 is obtained at 16.67 MΩ for the triboelectric part, while the electromagnetic part can provide a power density of 649.4 mW/m2 at 16 Ω. Through theoretical calculation and experimental measurement, a linear relationship between the tilt angle and the output voltage is observed at large angles. On this basis, a self-powered omnidirectional tilt sensor is realized by two magnetic-assisted TENGs, which can measure the magnitude and direction of the tilt angle at the same time. For visualized sensing of the tilt angle, a sensing system is established, which is portable, intuitive, and self-powered. This visualized system greatly simplifies the measurement process and promotes the development of self-powered systems. PMID:24770490
Development of Non-contact Respiratory Monitoring System for Newborn Using a FG Vision Sensor
NASA Astrophysics Data System (ADS)
Kurami, Yoshiyuki; Itoh, Yushi; Natori, Michiya; Ohzeki, Kazuo; Aoki, Yoshimitsu
In recent years, improvement in neonatal care has been strongly desired as the birth rate of low-birth-weight babies increases. The respiration of low-birth-weight babies is particularly unstable because their central nervous and respiratory functions are immature; as a result, low-birth-weight babies often develop respiratory disorders. In a NICU (Neonatal Intensive Care Unit), neonatal respiration is monitored continuously using a cardio-respiratory monitor and a pulse oximeter. These contact-type sensors can measure the respiratory rate and SpO2 (Saturation of Peripheral Oxygen). However, because a contact-type sensor might damage the newborn's skin, monitoring neonatal respiration in this way is a real burden. Therefore, we developed a respiratory monitoring system for newborns using a FG (Fiber Grating) vision sensor. The FG vision sensor is an active stereo vision sensor that enables non-contact 3D measurement. A respiratory waveform is calculated by detecting the vertical motion of the thoracic and abdominal regions during respiration. We performed a clinical experiment in the NICU and confirmed that the accuracy of the obtained respiratory waveform was high. Non-contact respiratory monitoring of newborns using a FG vision sensor enables a minimally invasive procedure.
Implementation of Headtracking and 3D Stereo with Unity and VRPN for Computer Simulations
NASA Technical Reports Server (NTRS)
Noyes, Matthew A.
2013-01-01
This paper explores low-cost hardware and software methods to provide depth cues traditionally absent in monocular displays. The use of a VRPN server in conjunction with a Microsoft Kinect and/or Nintendo Wiimote to provide head tracking information to a Unity application, and NVIDIA 3D Vision for retinal disparity support, is discussed. Methods are suggested to implement this technology with NASA's EDGE simulation graphics package, along with potential caveats. Finally, future applications of this technology to astronaut crew training, particularly when combined with an omnidirectional treadmill for virtual locomotion and NASA's ARGOS system for reduced gravity simulation, are discussed.
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronics Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head tracked sensor system for tactical military ground applications. This investigation will address the advantages of head tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
Research on an autonomous vision-guided helicopter
NASA Technical Reports Server (NTRS)
Amidi, Omead; Mesaki, Yuji; Kanade, Takeo
1994-01-01
Integration of computer vision with on-board sensors to autonomously fly helicopters was researched. The key components developed were custom designed vision processing hardware and an indoor testbed. The custom designed hardware provided flexible integration of on-board sensors with real-time image processing resulting in a significant improvement in vision-based state estimation. The indoor testbed provided convenient calibrated experimentation in constructing real autonomous systems.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.
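One plausible reading of the "linear index of a road segment" storage model is that every geo-referenced image is keyed by its distance along the segment (chainage), so that candidates for a query position can be found by binary search. The sketch below is an illustrative guess at such an index, not the paper's implementation; all names and numbers are invented.

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class RoadSegmentIndex:
    """Geo-referenced images keyed by distance along a road segment."""
    chainages: list = field(default_factory=list)    # sorted distances in metres
    image_ids: list = field(default_factory=list)

    def insert(self, chainage_m, image_id):
        pos = bisect.bisect_left(self.chainages, chainage_m)
        self.chainages.insert(pos, chainage_m)
        self.image_ids.insert(pos, image_id)

    def query(self, chainage_m, window_m=25.0):
        """Return image ids whose chainage lies within +/- window_m of the query."""
        lo = bisect.bisect_left(self.chainages, chainage_m - window_m)
        hi = bisect.bisect_right(self.chainages, chainage_m + window_m)
        return self.image_ids[lo:hi]

idx = RoadSegmentIndex()
for i, d in enumerate(range(0, 1500, 10)):           # one image every 10 m along the segment
    idx.insert(float(d), f"img_{i:04d}")
print(idx.query(742.0))                               # candidate images near the 742 m mark
```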
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-01
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m. PMID:26828496
Novel compact panomorph lens based vision system for monitoring around a vehicle
NASA Astrophysics Data System (ADS)
Thibault, Simon
2008-04-01
Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring of a vehicle. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain a complete view around the car, several sensor systems are necessary. To solve this issue, a customized imaging system based on a panomorph lens will provide the maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors in one. For example, a single panoramic sensor on the front of a vehicle could provide all necessary information for assistance in crash avoidance, lane tracking, early warning, park aids, road sign detection, and various video monitoring views.
Computing Optic Flow with ArduEye Vision Sensor
2013-01-01
Report fragments: describes an optic-flow processing algorithm that can be applied to the flight control of other robotic platforms, using an ArduEye vision chip on a Stonyman breakout board connected to an Arduino Mega; there is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms to control such platforms. Subject terms: optical flow, ArduEye, vision based ...
Optimal Constellation Design for Maximum Continuous Coverage of Targets Against a Space Background
2012-05-31
Report fragments: a constellation is considered whose tabulated properties include a mean anomaly spread of 180°, a lead-satellite mean anomaly at epoch of 0°, an omnidirectional sensor range R of 5000 km, and an initial polygon resolution m of 50; the parameter hres refers to the number of equally spaced offset planes. Idealized parameters for the Iridium constellation, a Walker Star, are also tabulated.
A remote assessment system with a vision robot and wearable sensors.
Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun
2004-01-01
This paper describes a remote rehabilitation assessment system under ongoing research that has a 6-degree-of-freedom, double-eye vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer is fixed on the robot to provide services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. The preliminary results show that the smart device, including the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operation at a distance.
Moraes, Celso; Myung, Sunghee; Lee, Sangkeum; Har, Dongsoo
2017-01-10
Provision of energy to wireless sensor networks is crucial for their sustainable operation. Sensor nodes are typically equipped with batteries as their operating energy sources. However, when the sensor nodes are sited in almost inaccessible locations, replacing their batteries incurs high maintenance cost. Under such conditions, wireless charging of sensor nodes by a mobile charger with an antenna can be an efficient solution. When charging distributed sensor nodes, a directional antenna, rather than an omnidirectional antenna, is more energy-efficient because of smaller proportion of off-target radiation. In addition, for densely distributed sensor nodes, it can be more effective for some undercharged sensor nodes to harvest energy from neighboring overcharged sensor nodes than from the remote mobile charger, because this reduces the pathloss of charging signal due to smaller distances. In this paper, we propose a hybrid charging scheme that combines charging by a mobile charger with a directional antenna, and energy trading, e.g., transferring and harvesting, between neighboring sensor nodes. The proposed scheme is compared with other charging scheme. Simulations demonstrate that the hybrid charging scheme with a directional antenna achieves a significant reduction in the total charging time required for all sensor nodes to reach a target energy level.
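The efficiency argument for a directional charging antenna can be illustrated with the standard Friis free-space relation: for the same transmit power, extra antenna gain toward the node translates directly into more received power. The gains, frequency, and range below are illustrative assumptions, not values from the paper.

```python
import math

def friis_received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz):
    """Free-space received power (dBm) from the Friis transmission equation."""
    wavelength = 3.0e8 / freq_hz
    path_loss_db = 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

# Illustrative comparison at 915 MHz, 10 m range, 1 W (30 dBm) transmit power.
omni = friis_received_power_dbm(30.0, 2.0, 2.0, 10.0, 915e6)          # ~2 dBi whip
directional = friis_received_power_dbm(30.0, 12.0, 2.0, 10.0, 915e6)  # ~12 dBi panel
print(f"omnidirectional charger: {omni:.1f} dBm at the node")
print(f"directional charger:     {directional:.1f} dBm at the node")
```

Under these assumptions the directional charger delivers 10 dB (a factor of ten) more power to an on-axis node, which is the kind of difference the charging-time simulations quantify.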
Moraes, Celso; Myung, Sunghee; Lee, Sangkeum; Har, Dongsoo
2017-01-01
Provision of energy to wireless sensor networks is crucial for their sustainable operation. Sensor nodes are typically equipped with batteries as their operating energy sources. However, when the sensor nodes are sited in almost inaccessible locations, replacing their batteries incurs high maintenance cost. Under such conditions, wireless charging of sensor nodes by a mobile charger with an antenna can be an efficient solution. When charging distributed sensor nodes, a directional antenna, rather than an omnidirectional antenna, is more energy-efficient because of smaller proportion of off-target radiation. In addition, for densely distributed sensor nodes, it can be more effective for some undercharged sensor nodes to harvest energy from neighboring overcharged sensor nodes than from the remote mobile charger, because this reduces the pathloss of charging signal due to smaller distances. In this paper, we propose a hybrid charging scheme that combines charging by a mobile charger with a directional antenna, and energy trading, e.g., transferring and harvesting, between neighboring sensor nodes. The proposed scheme is compared with other charging scheme. Simulations demonstrate that the hybrid charging scheme with a directional antenna achieves a significant reduction in the total charging time required for all sensor nodes to reach a target energy level. PMID:28075372
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-06-06
Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation of modern cities. Traditional contact sensing techniques during the process of health monitoring of BRT viaducts cannot overcome the deficiency that the normal free-flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoints matching algorithm based on consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with oriented brief (ORB) keypoints detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable.
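For context on the keypoint stage mentioned above, the sketch below runs a generic ORB detection-and-matching pass between two frames with OpenCV and reports a crude median displacement; it is not the authors' modified CMT-based tracker, and the file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder file names: any two consecutive frames of the monitored structure.
frame_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)                 # ORB keypoint detector/descriptor
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Median displacement of the best matches as a rough per-frame motion estimate (pixels).
disp = np.array([np.subtract(kp_b[m.trainIdx].pt, kp_a[m.queryIdx].pt)
                 for m in matches[:50]])
print("median displacement (px):", np.median(disp, axis=0))
```

A scaling factor (pixels to millimetres) would then convert such displacements into physical vibration amplitudes, which is the calibration step the abstract refers to.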
Hu, Qijun; He, Songsheng; Wang, Shilong; Liu, Yugang; Zhang, Zutao; He, Leping; Wang, Fubin; Cai, Qijie; Shi, Rendan; Yang, Yuan
2017-01-01
Bus Rapid Transit (BRT) has become an increasing source of concern for public transportation of modern cities. Traditional contact sensing techniques during the process of health monitoring of BRT viaducts cannot overcome the deficiency that the normal free-flow of traffic would be blocked. Advances in computer vision technology provide a new line of thought for solving this problem. In this study, a high-speed target-free vision-based sensor is proposed to measure the vibration of structures without interrupting traffic. An improved keypoints matching algorithm based on consensus-based matching and tracking (CMT) object tracking algorithm is adopted and further developed together with oriented brief (ORB) keypoints detection algorithm for practicable and effective tracking of objects. Moreover, by synthesizing the existing scaling factor calculation methods, more rational approaches to reducing errors are implemented. The performance of the vision-based sensor is evaluated through a series of laboratory tests. Experimental tests with different target types, frequencies, amplitudes and motion patterns are conducted. The performance of the method is satisfactory, which indicates that the vision sensor can extract accurate structure vibration signals by tracking either artificial or natural targets. Field tests further demonstrate that the vision sensor is both practicable and reliable. PMID:28587275
Object positioning in storages of robotized workcells using LabVIEW Vision
NASA Astrophysics Data System (ADS)
Hryniewicz, P.; Banaś, W.; Sękala, A.; Gwiazda, A.; Foit, K.; Kost, G.
2015-11-01
During the manufacturing process, each performed task is previously developed and adapted to the conditions and the possibilities of the manufacturing plant. The production process is supervised by a team of specialists, because any downtime causes a great loss of time and hence financial loss. Sensors used in industry for tracking and supervising various stages of a production process make it much easier to keep it continuous. One group of sensors used in industrial applications is non-contact sensors. This group includes light barriers, optical sensors, rangefinders, vision systems, and ultrasonic sensors. Owing to the rapid development of electronics, vision systems have become widespread as the most flexible type of non-contact sensor. These systems consist of cameras, devices for data acquisition, devices for data analysis, and specialized software. Vision systems work well as sensors that control the production process itself as well as sensors that control the product quality level. The LabVIEW program, together with LabVIEW Vision and LabVIEW Builder, represents the environment used to program the informatics system intended for process and product quality control. The paper presents an application developed for positioning elements in a robotized workcell. Based on the geometric parameters of a manipulated object, or on the basis of a previously developed graphical pattern, it is possible to determine the position of particular manipulated elements. This application can work in automatic mode and in real time, cooperating with the robot control system, and it makes the functioning of the workcell more autonomous.
NASA Astrophysics Data System (ADS)
Qiu, Zhi-cheng; Wang, Xian-feng; Zhang, Xian-Min; Liu, Jin-guo
2018-07-01
A novel non-contact vibration measurement method using binocular vision sensors is proposed for a piezoelectric flexible hinged plate. Decoupling methods for the measurement and driving control of the low-frequency bending and torsional vibration are investigated, using binocular vision sensors and piezoelectric actuators. A radial basis function neural network controller (RBFNNC) is designed to suppress both the larger and the smaller amplitude vibrations. To verify the non-contact measurement method and the designed controller, an experimental setup of the flexible hinged plate with binocular vision is constructed. Experiments on vibration measurement and control are conducted using the binocular vision sensors and the designed RBFNNC, compared with the classical proportional and derivative (PD) control algorithm. The experimental measurement results demonstrate that the binocular vision sensors can detect the low-frequency bending and torsional vibration effectively. Furthermore, the designed RBFNNC can suppress the bending vibration more quickly than the designed PD controller owing to the adjustment of the RBF control, especially for the small-amplitude residual vibrations.
Enhanced computer vision with Microsoft Kinect sensor: a review.
Han, Jungong; Shao, Ling; Xu, Dong; Shotton, Jamie
2013-10-01
With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use. The complementary nature of the depth and visual information provided by the Kinect sensor opens up new opportunities to solve fundamental problems in computer vision. This paper presents a comprehensive review of recent Kinect-based computer vision algorithms and applications. The reviewed approaches are classified according to the type of vision problems that can be addressed or enhanced by means of the Kinect sensor. The covered topics include preprocessing, object tracking and recognition, human activity analysis, hand gesture analysis, and indoor 3-D mapping. For each category of methods, we outline their main algorithmic contributions and summarize their advantages/differences compared to their RGB counterparts. Finally, we give an overview of the challenges in this field and future research trends. This paper is expected to serve as a tutorial and source of references for Kinect-based computer vision researchers.
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. A knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single-fixed omnidirectional sensors as well as the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
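The calibration described above effectively divides raw detection counts by the portion of the survey area that the sensor can actually "hear" under the prevailing conditions. A minimal arithmetic sketch, with made-up numbers rather than values from the dissertation:

```python
def call_density(n_detected, p_detection, area_km2, hours):
    """Calls per square kilometre per hour, corrected by the site-specific
    probability of detection averaged over the nominal survey area."""
    effective_area_km2 = p_detection * area_km2     # area effectively monitored
    return n_detected / (effective_area_km2 * hours)

# Made-up example: 420 detected calls in 24 h over a 2000 km^2 area with P_det = 0.35.
print(f"{call_density(420, 0.35, 2000.0, 24.0):.4f} calls / km^2 / h")
```

Because the probability of detection varies with ocean noise and propagation conditions, the same raw count can map to very different call densities, which is why raw counts at two sites are not directly comparable.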
Liu, Hong-yue; Liang, Da-kai; Han, Xiao-lin; Zeng, Jie
2013-05-10
From the perspective of the sensitivity of the long period fiber grating (LPFG) resonant transmission spectrum, we demonstrate the sensitivity of the LPFG resonance peak amplitude to transverse loads. The design of a resonance-peak-modulation-based LPFG rebar corrosion sensor is described by combining the spectral characteristics of the LPFG with monitoring of the expansion state of rebar corrosion. LPFG spectrum curves corresponding to different rebar corrosion states of the environment under test are captured by monitoring the LPFG transmission spectra, and the relationship between the change in resonance peak amplitude and the state of rebar corrosion is obtained; that is, the variation of the LPFG resonance peak amplitude increases as the degree of rebar corrosion intensifies. The experimental results show numerically that the sensor response has good regularity over a wide range of travel.
Kotze, Ben; Jordaan, Gerrit
2014-08-25
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.
Kotze, Ben; Jordaan, Gerrit
2014-01-01
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548
Vision-Based Traffic Data Collection Sensor for Automotive Applications
Llorca, David F.; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel. A.
2010-01-01
This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry. PMID:22315572
Vision-based traffic data collection sensor for automotive applications.
Llorca, David F; Sánchez, Sergio; Ocaña, Manuel; Sotelo, Miguel A
2010-01-01
This paper presents a complete vision sensor onboard a moving vehicle which collects the traffic data in its local area in daytime conditions. The sensor comprises a rear looking and a forward looking camera. Thus, a representative description of the traffic conditions in the local area of the host vehicle can be computed. The proposed sensor detects the number of vehicles (traffic load), their relative positions and their relative velocities in a four-stage process: lane detection, candidates selection, vehicles classification and tracking. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision sensor with the data supplied by the CAN Bus and a GPS sensor. The presented experiments are promising in terms of detection performance and accuracy in order to be validated for applications in the context of the automotive industry.
Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review
Ali, Asraf; Sundaraj, Kenneth; Ahmad, Badlishah; Ahamed, Nizam; Islam, Anamul
2012-01-01
Even though the number of rehabilitation guidelines has never been greater, uncertainty continues to arise regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Resolving this question has been hindered by the lack of information on accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders that use vision and non-vision sensor technologies, as well as combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Some synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the article search procedure. Out of the 91 published articles found, this review identified 84 articles that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using a markerless vision-based sensor technology. We therefore believe that the information contained in this review paper will assist the progress of the development of rehabilitation systems for human gait disorders. PMID:22938548
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high- dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
Omnidirectional light absorption of disordered nano-hole structure inspired from Papilio ulysses.
Wang, Wanlin; Zhang, Wang; Fang, Xiaotian; Huang, Yiqiao; Liu, Qinglei; Bai, Mingwen; Zhang, Di
2014-07-15
Butterflies routinely produce nanostructured surfaces with useful properties. Here, we report a disordered nano-hole structure with ridges, inspired by Papilio ulysses, that produces omnidirectional light absorption compared with the common ordered structure. The results show that the omnidirectional light absorption is affected by polarization, the incident angle, and the wavelength. Using the finite-difference time-domain (FDTD) method, stable omnidirectional light absorption is achieved in the structure inspired by Papilio ulysses over a wide range of incident angles and at various wavelengths. This explains some of the mysteries of the structure of the Papilio ulysses butterfly. These conclusions can guide the design of omnidirectional absorption materials.
NASA Astrophysics Data System (ADS)
Xifré-Pérez, E.; Marsal, L. F.; Ferré-Borrull, J.; Pallarès, J.
2007-09-01
The use of omnidirectional mirrors (a special case of distributed Bragg reflectors) as cladding for planar waveguides is proposed and analyzed. The proposed structure is an all-porous silicon multilayer consisting of a core layer inserted between two omnidirectional mirrors. The transfer matrix method is applied for the modal analysis. The influence of the parameters of the waveguide structure on the guided modes is analyzed. These parameters are the layer thickness and number of periods of the omnidirectional mirror, and the refractive index and thickness of the core layer. Finally, the confinement of the omnidirectional mirror cladding is analyzed with respect to two other different distributed Bragg reflector claddings.
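The transfer matrix method mentioned above assembles the response of the multilayer from 2×2 characteristic matrices, one per layer. The sketch below is a minimal normal-incidence reflectance calculation for a dielectric stack; the refractive indices and layer counts are illustrative stand-ins for a porous-silicon mirror, not the parameters studied in the paper, and the full oblique-incidence modal analysis is not reproduced.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """2x2 characteristic matrix of a homogeneous dielectric layer (normal incidence)."""
    delta = 2.0 * np.pi * n * d / wavelength            # phase thickness of the layer
    eta = n                                             # optical admittance (non-magnetic)
    return np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                     [1j * eta * np.sin(delta), np.cos(delta)]])

def reflectance(layers, n_in, n_out, wavelength):
    """Reflectance of a stack given as [(n1, d1), (n2, d2), ...] between two media."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength)
    b, c = m @ np.array([1.0, n_out])
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2

# Illustrative quarter-wave stack at 1.55 um with assumed porous-silicon-like indices.
lam0 = 1.55e-6
stack = [(1.4, lam0 / (4 * 1.4)), (2.1, lam0 / (4 * 2.1))] * 8
print(f"R at the design wavelength: {reflectance(stack, 1.0, 3.5, lam0):.4f}")
```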
Mohanraj, A. P.; Elango, A.; Reddy, Mutra Chanakya
2016-01-01
Omnidirectional robots can move in all directions without steering their wheels and can rotate clockwise and counterclockwise about their axes. In this paper, we focus only on forward and backward movement, to analyse the movements of square- and triangle-structured omnidirectional robots. An omnidirectional mobile robot performs differently depending on its number of wheels and its chassis design. Research is ongoing in this field to improve the movement accuracy of omnidirectional mobile robots. This paper presents the design of a unique Angle Variable Chassis (AVC) device for linear movement analysis of a three-wheeled omnidirectional mobile robot (TWOMR) at various angles (θ) between the wheels. A basic mobility algorithm is developed by varying the angles between the two selected omnidirectional wheels of the TWOMR. The experiment is carried out by varying the angles (θ = 30°, 45°, 60°, 90°, and 120°) between the two selected omniwheels and analysing the movement of the TWOMR in the forward and reverse directions on a smooth cement surface. The robot's performance is then compared across the various angles (θ) to identify its advantages and weaknesses. The paper concludes by identifying the angle (θ) giving the most effective movement of the TWOMR and by discussing applications of the TWOMR in different situations. PMID:26981585
Mohanraj, A P; Elango, A; Reddy, Mutra Chanakya
2016-01-01
Omnidirectional robots can move in all directions without steering their wheels and can rotate clockwise and counterclockwise about their axes. In this paper, we focus only on forward and backward movement, to analyse the movements of square- and triangle-structured omnidirectional robots. An omnidirectional mobile robot performs differently depending on its number of wheels and its chassis design. Research is ongoing in this field to improve the movement accuracy of omnidirectional mobile robots. This paper presents the design of a unique Angle Variable Chassis (AVC) device for linear movement analysis of a three-wheeled omnidirectional mobile robot (TWOMR) at various angles (θ) between the wheels. A basic mobility algorithm is developed by varying the angles between the two selected omnidirectional wheels of the TWOMR. The experiment is carried out by varying the angles (θ = 30°, 45°, 60°, 90°, and 120°) between the two selected omniwheels and analysing the movement of the TWOMR in the forward and reverse directions on a smooth cement surface. The robot's performance is then compared across the various angles (θ) to identify its advantages and weaknesses. The paper concludes by identifying the angle (θ) giving the most effective movement of the TWOMR and by discussing applications of the TWOMR in different situations.
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors.
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in understanding of biological sensing and advanced electronics, have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions we suggest a future research direction for further development of the neuromorphic sensing field.
Singh, Bipin K; Pandey, Praveen C
2016-07-20
Engineering of thermally tunable terahertz photonic and omnidirectional bandgaps has been demonstrated theoretically in one-dimensional quasi-periodic photonic crystals (PCs) containing semiconductor and dielectric materials. The considered quasi-periodic structures take the form of Fibonacci, Thue-Morse, and double periodic sequences. We have shown that the photonic and omnidirectional bandgaps in quasi-periodic structures with semiconductor constituents depend strongly on the temperature, the thickness of the constituent semiconductor and dielectric layers, and the generation of the quasi-periodic sequence. It has been found that the number of photonic bandgaps increases with layer thickness and with the generation of the quasi-periodic sequences. Omnidirectional bandgaps in the structures have also been obtained. Results show that the bandwidths of the photonic and omnidirectional bandgaps are tunable by changing the temperature and the lattice parameters of the structures. The generation of the quasi-periodic sequence can also change the properties of the photonic and omnidirectional bandgaps remarkably. The frequency range of the photonic and omnidirectional bandgaps can be tuned by changing the temperature and layer thickness of the considered quasi-periodic structures. This work will be useful for designing tunable terahertz PC devices.
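The quasi-periodic stackings named above are generated by simple substitution rules, which is how such structures are usually specified before the bandgaps are computed. The sketch below produces the layer orderings (A and B standing for the semiconductor and dielectric layers) for the three families mentioned; the "double periodic" sequence is taken here to be the standard period-doubling substitution, which is an assumption about the paper's convention.

```python
def fibonacci(n):
    """Fibonacci word: S1 = 'A', S2 = 'AB', Sk = S(k-1) + S(k-2)."""
    a, b = "A", "AB"
    for _ in range(n - 2):
        a, b = b, b + a
    return a if n == 1 else b

def thue_morse(n):
    """Thue-Morse word after n applications of A -> AB, B -> BA."""
    s = "A"
    for _ in range(n):
        s = "".join("AB" if c == "A" else "BA" for c in s)
    return s

def period_doubling(n):
    """Period-doubling word after n applications of A -> AB, B -> AA."""
    s = "A"
    for _ in range(n):
        s = "".join("AB" if c == "A" else "AA" for c in s)
    return s

print(fibonacci(6))        # ABAABABAABAAB
print(thue_morse(4))       # ABBABAABBAABABBA
print(period_doubling(4))  # ABAAABABABAAABAA
```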
A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection
D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin
1993-01-01
A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
Evolving EO-1 Sensor Web Testbed Capabilities in Pursuit of GEOSS
NASA Technical Reports Server (NTRS)
Mandi, Dan; Ly, Vuong; Frye, Stuart; Younis, Mohamed
2006-01-01
A viewgraph presentation on evolving sensor web capabilities in pursuit of support for the Global Earth Observing System of Systems (GEOSS) is shown. The topics include: 1) Vision to Enable Sensor Webs with "Hot Spots"; 2) Vision Extended for Communication/Control Architecture for Missions to Mars; 3) Key Capabilities Implemented to Enable EO-1 Sensor Webs; 4) One of Three Experiments Conducted by UMBC Undergraduate Class 12-14-05 (1 - 3); 5) Closer Look at our Mini-Rovers and Simulated Mars Landscape at GSFC; 6) Beginning to Implement Experiments with Standards-Vision for Integrated Sensor Web Environment; 7) Goddard Mission Services Evolution Center (GMSEC); 8) GMSEC Component Catalog; 9) Core Flight System (CFS) and Extension for GMSEC for Flight SW; 10) Sensor Modeling Language; 11) Seamless Ground to Space Integrated Message Bus Demonstration (completed December 2005); 12) Other Experiments in Queue; 13) Acknowledgements; and 14) References.
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fuse the data from 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely dependent on the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
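The inverse-square dependence that links the two data streams can be written directly: given a reference count rate at a known distance, the expected rate at any vision-measured distance follows immediately, and deviations from it indicate where the source is not. A minimal sketch with illustrative numbers (not values from the experiment):

```python
def expected_count_rate(rate_at_ref, ref_distance_m, distance_m):
    """Count rate predicted by the inverse-square law from a reference measurement."""
    return rate_at_ref * (ref_distance_m / distance_m) ** 2

# Illustrative: a source giving 1200 counts/s at 1 m, observed at vision-tracked distances.
for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d:4.1f} m -> {expected_count_rate(1200.0, 1.0, d):7.1f} counts/s")
```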
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
Detection Performance of Horizontal Linear Hydrophone Arrays in Shallow Water.
1980-12-15
Symbol list (partially recoverable from the garbled extraction): G denotes gain, h the processor vector, H a matrix (matched filter / generalized beamformer), P_s the signal covariance matrix, Q the noise covariance matrix, and I the unity matrix. The gain of a processor relative to an omnidirectional sensor is G = (h* P_s h) / (h* Q h) [Eq. 47]. The following two sections evaluate a few examples of application of the OLP. At broadside the signal covariance matrix reduces to a dyadic, P_s = s s*; the resulting gain expression (e.g., Eq. 37) involves tr(H* P_s H) and the noise covariance.
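A small numerical sketch of the gain relation above, with synthetic covariance matrices (the steering vector, weights, and noise model are illustrative only):

```python
import numpy as np

# Sketch: array gain of a linear processor h relative to an omnidirectional
# sensor, G = (h^H Ps h) / (h^H Q h), with Ps the signal covariance and Q the
# noise covariance (both synthetic here). At broadside Ps reduces to the
# dyadic s s^H.

N = 8
s = np.ones(N, dtype=complex)          # broadside steering vector
Ps = np.outer(s, s.conj())             # signal covariance (rank-one dyadic)
Q = np.eye(N, dtype=complex)           # spatially white noise covariance

h = s / N                              # conventional (delay-and-sum) weights
G = (h.conj() @ Ps @ h).real / (h.conj() @ Q @ h).real
print(10 * np.log10(G))                # ~10*log10(N) dB for white noise
```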
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-17
...; Comment Request--Omnidirectional Citizens Band Base Station Antennas AGENCY: Consumer Product Safety... antennas. The collection of information is in regulations setting forth the Safety Standard for Omnidirectional Citizens Band Base Station Antennas (16 CFR part 1204). These regulations establish testing and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
... Request--Safety Standard for Omnidirectional Citizens Band Base Station Antennas AGENCY: Consumer Product... antennas. DATES: Written comments on this request for extension of approval of information collection... Citizens Band Base Station Antennas establishes performance requirements for omnidirectional citizens band...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-04
... Information Collection; Comment Request--Omnidirectional Citizens Band Base Station Antennas AGENCY: Consumer... citizens band base station antennas. The collection of information is in regulations setting forth the Safety Standard for Omnidirectional Citizens Band Base Station Antennas (16 CFR part 1204). These...
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot based on vision measuring techniques is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is applicable.
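As an illustration of the kind of computation involved in the first approach, the sketch below estimates a rigid transformation from matched control points using the standard SVD (Kabsch) least-squares solution; this is a generic formulation under the assumption of matched 3D points, not necessarily the authors' exact algorithm.

```python
import numpy as np

# Sketch: estimating the rigid transformation (R, t) that maps control-point
# coordinates measured in the sensor frame onto their known global
# coordinates, via the SVD-based (Kabsch) least-squares solution.

def rigid_transform(sensor_pts, global_pts):
    """Both inputs are (N, 3) arrays of matched 3D points, N >= 3."""
    cs, cg = sensor_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (global_pts - cg)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # rotation (reflection-free)
    t = cg - R @ cs                    # translation
    return R, t

# Usage: the positioning error is the residual after mapping the sensor points.
# R, t = rigid_transform(pts_sensor, pts_global)
# residual = pts_global - (pts_sensor @ R.T + t)
```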
NASA Astrophysics Data System (ADS)
Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung
2017-12-01
In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep space missions. Accordingly, various vision-based relative navigation systems are used for proximity operations between two spacecraft. In implementing these systems, a sensor placement problem can arise because of the limited space on the exterior of the spacecraft. To deal with sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF). These algorithms are applied, respectively, to set the locations of the sensors and to estimate the relative positions and attitudes for each combination of PSDs and beacons. Scores for the sensor placement are then calculated with respect to three parameters: the number of PSDs, the number of beacons, and the accuracy of the relative estimates. The best-scoring candidate is then selected for the sensor placement. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
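A minimal sketch of the farthest-point idea applied to candidate mounting locations is shown below; the candidate set and the greedy selection rule are assumptions for illustration, and the FPO used in the paper may differ in detail.

```python
import numpy as np

# Sketch: greedy farthest-point selection of k sensor locations from a set of
# candidate mounting points on the spacecraft exterior. This is the generic
# farthest-point idea; the FPO used in the paper may differ in detail.

def farthest_point_selection(candidates, k, seed=0):
    """candidates: (N, 3) array of candidate positions; returns k indices."""
    chosen = [seed]
    d = np.linalg.norm(candidates - candidates[seed], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))        # candidate farthest from the chosen set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return chosen

pts = np.random.rand(200, 3)           # illustrative candidate locations
print(farthest_point_selection(pts, 4))
```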
Vision systems for manned and robotic ground vehicles
NASA Astrophysics Data System (ADS)
Sanders-Reed, John N.; Koon, Phillip L.
2010-04-01
A Distributed Aperture Vision System for ground vehicles is described. An overview of the hardware including sensor pod, processor, video compression, and displays is provided. This includes a discussion of the choice between an integrated sensor pod and individually mounted sensors, open architecture design, and latency issues as well as flat panel versus head mounted displays. This technology is applied to various ground vehicle scenarios, including closed-hatch operations (operator in the vehicle), remote operator tele-operation, and supervised autonomy for multi-vehicle unmanned convoys. In addition, remote vision for automatic perimeter surveillance using autonomous vehicles and automatic detection algorithms is demonstrated.
Methods for Room Acoustic Analysis and Synthesis using a Monopole-Dipole Microphone Array
NASA Technical Reports Server (NTRS)
Abel, J. S.; Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1998-01-01
In recent work, a microphone array consisting of an omnidirectional microphone and colocated dipole microphones with orthogonally aligned dipole axes was used to examine the directional nature of a room impulse response. The arrival of significant reflections was indicated by peaks in the power of the omnidirectional microphone response; reflection direction of arrival was revealed by comparing the zero-lag cross-correlations between the omnidirectional response and the dipole responses to the omnidirectional response power, yielding estimates of the arrival direction cosines with respect to the dipole axes.
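The direction-cosine estimate described above can be sketched as follows with synthetic signals; the ideal dipole responses (scaled copies of the omnidirectional signal) are an assumption for illustration.

```python
import numpy as np

# Sketch of the arrival-direction estimate described above: the zero-lag
# cross-correlation between the omnidirectional response and each dipole
# response, normalized by the omnidirectional power, approximates the
# direction cosine along that dipole axis (synthetic plane-wave example).

rng = np.random.default_rng(0)
s = rng.standard_normal(4096)                    # reflection waveform (synthetic)
cx, cy = np.cos(np.radians(30)), np.sin(np.radians(30))   # true direction cosines

p_omni = s
p_dip_x = cx * s                                 # ideal dipole responses scale
p_dip_y = cy * s                                 # with the direction cosine

est_cx = np.mean(p_omni * p_dip_x) / np.mean(p_omni**2)
est_cy = np.mean(p_omni * p_dip_y) / np.mean(p_omni**2)
print(est_cx, est_cy)                            # ~0.866, ~0.5
```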
Omnidirectional optical waveguide
Bora, Mihail; Bond, Tiziana C.
2016-08-02
In one embodiment, a system includes a scintillator material; a detector coupled to the scintillator material; and an omnidirectional waveguide coupled to the scintillator material, the omnidirectional waveguide comprising: a plurality of first layers comprising one or more materials having a refractive index in a first range; and a plurality of second layers comprising one or more materials having a refractive index in a second range, the second range being lower than the first range, a plurality of interfaces being defined between alternating ones of the first and second layers. In another embodiment, a method includes depositing alternating layers of a material having a relatively high refractive index and a material having a relatively low refractive index on a substrate to form an omnidirectional waveguide; and coupling the omnidirectional waveguide to at least one surface of a scintillator material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Haifeng; Nanjing Artillery Academy, Nanjing 211132; Liu Shaobin
2012-11-15
In this paper, an omnidirectional photonic band gap realized by one-dimensional ternary unmagnetized plasma photonic crystals based on a new Fibonacci quasiperiodic structure, which is composed of homogeneous unmagnetized plasma and two kinds of isotropic dielectric, is theoretically studied by the transfer matrix method. It is shown that such an omnidirectional photonic band gap originates from a Bragg gap, in contrast to a zero-n gap or a single-negative (negative permittivity or negative permeability) gap, and that it is insensitive to the incidence angle and the polarization of the electromagnetic wave. From the numerical results, the frequency range and central frequency of the omnidirectional photonic band gap can be tuned by the thickness and density of the plasma but cease to change with increasing Fibonacci order. The bandwidth of the omnidirectional photonic band gap can be notably enlarged. Moreover, the plasma collision frequency has no effect on the bandwidth of the omnidirectional photonic band gap. Such new-structure Fibonacci quasiperiodic one-dimensional ternary plasma photonic crystals show a superior enhancement of the frequency range of the omnidirectional photonic band gap compared with conventional ternary and conventional Fibonacci quasiperiodic ternary plasma photonic crystals.
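For readers unfamiliar with the transfer matrix method named above, a minimal normal-incidence sketch for lossless dielectric layers is given below; the dispersive plasma layers and oblique incidence treated in the paper are deliberately omitted, and the indices and thicknesses are illustrative.

```python
import numpy as np

# Sketch: transfer (characteristic) matrix method for a 1D multilayer at
# normal incidence with lossless dielectrics. The paper's dispersive plasma
# layers and oblique incidence are omitted for brevity.

def transmittance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
    k0 = 2.0 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out], dtype=complex)
    return 4.0 * n_in * n_out / abs(n_in * B + C) ** 2

# Quarter-wave Bragg stack: a wide stop band appears around the design wavelength.
lam0 = 1.0
n = [2.3, 1.5] * 8
d = [lam0 / (4 * ni) for ni in n]
for lam in (0.8, 1.0, 1.3):
    print(lam, transmittance(n, d, lam))
```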
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
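A minimal sketch of the sensor-fusion step, in which the DGPS/Vision output is treated as one more (virtual) position measurement in a Kalman filter, is given below; the state vector, noise levels, and constant-velocity model are illustrative assumptions, and the filter in the paper is considerably more elaborate (attitude states, full DGPS/Vision measurement model).

```python
import numpy as np

# Sketch: a minimal Kalman update in which a DGPS/Vision-derived measurement
# is treated as one more (virtual) position sensor fused with a
# constant-velocity prediction. Noise levels are illustrative.

dt = 0.1
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])      # constant-velocity model
Q = 0.01 * np.eye(6)                               # process noise (illustrative)
H = np.hstack([np.eye(3), np.zeros((3, 3))])       # virtual sensor observes position
R = 0.5 * np.eye(3)                                # virtual-sensor noise (illustrative)

x = np.zeros(6)                                    # [position, velocity]
P = np.eye(6)

def kf_step(x, P, z):
    x_pred, P_pred = F @ x, F @ P @ F.T + Q        # predict
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)          # update with virtual sensor
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

x, P = kf_step(x, P, z=np.array([1.0, 2.0, -0.5]))
print(x[:3])
```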
A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors
Vanarse, Anup; Osseiran, Adam; Rassau, Alexander
2016-01-01
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and as a result tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways that are similar to neural signals. This allows for much lower power consumption due to an ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state-of-the-art in neuromorphic implementation of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest a future research direction for further development of the neuromorphic sensing field. PMID:27065784
2011-03-09
Nocturnal visual orientation in flying insects: a benchmark for the design of vision-based sensors in Micro-Aerial Vehicles (report, anu.edu.au). Recoverable fragments: "Technical horizon sensors -- Over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred ... possible elevations, it may severely degrade the performance of sensors by local saturation. Therefore it is necessary to find a method whereby the effect ..."
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device containing an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques; a cognitive communication scheme therefore becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit multi-spectral optical signals such as visible, infrared, and ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
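A minimal sketch of decoding one snapshot is shown below, assuming the sync data have already located the transmitter and provided the pixel coordinates of each LED; distortion correction and the multi-spectral case are omitted.

```python
import numpy as np

# Sketch: decoding the information bits of one image snapshot, assuming the
# sync data have already located the transmitter and given us the pixel
# coordinates of each LED in the array. Distortion correction and the
# multi-spectral case are omitted.

def decode_frame(frame, led_pixels):
    """frame: 2D grayscale array; led_pixels: list of (row, col) per LED."""
    samples = np.array([frame[r, c] for r, c in led_pixels], dtype=float)
    threshold = 0.5 * (samples.max() + samples.min())   # simple midpoint threshold
    return (samples > threshold).astype(int)            # one bit per LED

frame = np.zeros((480, 640))
led_pixels = [(100, 100), (100, 120), (120, 100), (120, 120)]
frame[100, 100] = frame[120, 120] = 255                 # two LEDs "on"
print(decode_frame(frame, led_pixels))                  # [1, 0, 0, 1]
```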
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, have to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered against both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear; the outcome of this runway check contributes to the integrity analysis as well. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
Pre-shaping of the Fingertip of Robot Hand Covered with Net Structure Proximity Sensor
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Suzuki, Yosuke; Hasegawa, Hiroaki; Ming, Aiguo; Ishikawa, Masatoshi; Shimojo, Makoto
To achieve skillful tasks with multi-fingered robot hands, many researchers have been working on sensor-based control. Vision sensors and tactile sensors are indispensable for such tasks; however, the correctness of the information from vision sensors decreases as a robot hand approaches the object to be grasped, because of occlusion. This research aims to achieve seamless detection for reliable grasping by using proximity sensors: correcting the positional error of the hand in the vision-based approach, and bringing the fingertip into contact in a posture suitable for effective tactile sensing. In this paper, we propose a method for adjusting the posture of the fingertip to the surface of the object. The method applies the “Net-Structure Proximity Sensor” to the fingertip, which can detect the postural error in the roll and pitch axes between the fingertip and the object surface. The experimental results show that the postural error is corrected in both axes even if the object rotates dynamically.
Evaluation of Candidate Millimeter Wave Sensors for Synthetic Vision
NASA Technical Reports Server (NTRS)
Alexander, Neal T.; Hudson, Brian H.; Echard, Jim D.
1994-01-01
The goal of the Synthetic Vision Technology Demonstration Program was to demonstrate and document the capabilities of current technologies to achieve safe aircraft landing, take off, and ground operation in very low visibility conditions. Two of the major thrusts of the program were (1) sensor evaluation in measured weather conditions on a tower overlooking an unused airfield and (2) flight testing of sensor and pilot performance via a prototype system. The presentation first briefly addresses the overall technology thrusts and goals of the program and provides a summary of MMW sensor tower-test and flight-test data collection efforts. Data analysis and calibration procedures for both the tower tests and flight tests are presented. The remainder of the presentation addresses the MMW sensor flight-test evaluation results, including the processing approach for determination of various performance metrics (e.g., contrast, sharpness, and variability). The variation of the very important contrast metric in adverse weather conditions is described. Design trade-off considerations for Synthetic Vision MMW sensors are presented.
Recent progress in millimeter-wave sensor system capabilities for enhanced (synthetic) vision
NASA Astrophysics Data System (ADS)
Hellemann, Karlheinz; Zachai, Reinhard
1999-07-01
Weather- and daylight-independent operation of modern traffic systems is strongly required for optimized and economical availability. Helicopters, small aircraft, and military transport aircraft that frequently operate close to the ground in particular need effective and cost-effective Enhanced Vision sensors. Technical progress in sensor technology and processing speed today offers the possibility of realizing new concepts. Against this background, the paper reports on the improvements under development within the HiVision program at DaimlerChrysler Aerospace. A sensor demonstrator based on FMCW radar technology, with a high information update rate and operating in the mm-wave band, has been upgraded to improve performance and fitted for flight on an experimental basis. The results achieved so far demonstrate the capability to produce weather-independent enhanced vision. In addition, the demonstrator has been tested on board a high-speed ferry in the Baltic Sea, because fast vessels have a similar need for weather-independent operation and anti-collision measures. In the future, one sensor type may serve both 'worlds' and help make traffic easier and safer. The described demonstrator fills the technology gap between optical sensors (infrared) and standard pulse radars with its specific features, such as high-speed scanning and weather penetration, combined with the additional benefit of cost-effectiveness.
Islam, Mohammad Tariqul; Islam, Md. Moinul; Samsuzzaman, Md.; Faruque, Mohammad Rashed Iqbal; Misran, Norbahiah
2015-01-01
This paper presents a negative index metamaterial incorporated UWB antenna with an integration of complementary SRR (split-ring resonator) and CLS (capacitive loaded strip) unit cells for microwave imaging sensor applications. This metamaterial UWB antenna sensor consists of four unit cells along one axis, where each unit cell incorporates a complementary SRR and CLS pair. This integration enables a design layout that allows a negative value of permittivity and a negative value of permeability simultaneously, resulting in a durable negative index that enhances the antenna sensor performance for microwave imaging sensor applications. The proposed MTM antenna sensor was designed and fabricated on an FR4 substrate having a thickness of 1.6 mm and a dielectric constant of 4.6. The electrical dimensions of this antenna sensor are 0.20 λ × 0.29 λ at a lower frequency of 3.1 GHz. This antenna sensor achieves a 131.5% bandwidth (VSWR < 2) covering the frequency bands from 3.1 GHz to more than 15 GHz with a maximum gain of 6.57 dBi. High fidelity factor and gain, smooth surface-current distribution and nearly omni-directional radiation patterns with low cross-polarization confirm that the proposed negative index UWB antenna is a promising entrant in the field of microwave imaging sensors. PMID: 26007721
Microspacecraft and Earth observation: Electrical field (ELF) measurement project
NASA Technical Reports Server (NTRS)
Olsen, Tanya; Elkington, Scot; Parker, Scott; Smith, Grover; Shumway, Andrew; Christensen, Craig; Parsa, Mehrdad; Larsen, Layne; Martinez, Ranae; Powell, George
1990-01-01
The Utah State University space system design project for 1989 to 1990 focuses on the design of a global electrical field sensing system to be deployed in a constellation of microspacecraft. The design includes the selection of the sensor and the design of the spacecraft, the sensor support subsystems, the launch vehicle interface structure, on board data storage and communications subsystems, and associated ground receiving stations. Optimization of satellite orbits and spacecraft attitude is critical to the overall mapping of the electrical field and, thus, is also included in the project. The spacecraft design incorporates a deployable sensor array (5 m booms) into a spinning oblate platform. Data is taken every 0.1 seconds by the electrical field sensors and stored on-board. An omni-directional antenna communicates with a ground station twice per day to downlink the stored data. Wrap-around solar cells cover the exterior of the spacecraft to generate power. Nine Pegasus launches may be used to deploy fifty such satellites to orbits with inclinations greater than 45 deg. Piggyback deployment from other launch vehicles such as the DELTA 2 is also examined.
Biological Basis For Computer Vision: Some Perspectives
NASA Astrophysics Data System (ADS)
Gupta, Madan M.
1990-03-01
Using biology as a basis for the development of sensors, devices and computer vision systems is a challenge to systems and vision scientists. It is also a field of promising research for engineering applications. Biological sensory systems, such as vision, touch and hearing, sense different physical phenomena from our environment, yet they possess some common mathematical functions. These mathematical functions are cast into the neural layers which are distributed throughout our sensory regions, sensory information transmission channels and in the cortex, the centre of perception. In this paper, we are concerned with the study of the biological vision system and the emulation of some of its mathematical functions, both retinal and visual cortex, for the development of a robust computer vision system. This field of research is not only intriguing, but offers a great challenge to systems scientists in the development of functional algorithms. These functional algorithms can be generalized for further studies in such fields as signal processing, control systems and image processing. Our studies are heavily dependent on the use of fuzzy-neural layers and generalized receptive fields. Building blocks of such neural layers and receptive fields may lead to the design of better sensors and better computer vision systems. It is hoped that these studies will lead to the development of better artificial vision systems with various applications to vision prosthesis for the blind, robotic vision, medical imaging, medical sensors, industrial automation, remote sensing, space stations and ocean exploration.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
A Reconfigurable Omnidirectional Soft Robot Based on Caterpillar Locomotion.
Zou, Jun; Lin, Yangqiao; Ji, Chen; Yang, Huayong
2018-04-01
A pneumatically powered, reconfigurable omnidirectional soft robot based on caterpillar locomotion is described. The robot is composed of nine modules arranged as a three by three matrix and the length of this matrix is 154 mm. The robot propagates a traveling wave inspired by caterpillar locomotion, and it has all three degrees of freedom on a plane (X, Y, and rotation). The speed of the robot is about 18.5 m/h (two body lengths per minute) and it can rotate at a speed of 1.63°/s. The modules have neodymium-iron-boron (NdFeB) magnets embedded and can be easily replaced or combined into other configurations. Two different configurations are presented to demonstrate the possibilities of the modular structure: (1) by removing some modules, the omnidirectional robot can be reassembled into a form that can crawl in a pipe and (2) two omnidirectional robots can crawl close to each other and be assembled automatically into a bigger omnidirectional robot. Omnidirectional motion is important for soft robots to explore unstructured environments. The modular structure gives the soft robot the ability to cope with the challenges of different environments and tasks.
Benchmarking neuromorphic vision: lessons learnt from computer vision
Tan, Cheston; Lallee, Stephane; Orchard, Garrick
2015-01-01
Neuromorphic Vision sensors have improved greatly since the first silicon retina was presented almost three decades ago. They have recently matured to the point where they are commercially available and can be operated by laymen. However, despite improved availability of sensors, there remains a lack of good datasets, while algorithms for processing spike-based visual data are still in their infancy. On the other hand, frame-based computer vision algorithms are far more mature, thanks in part to widely accepted datasets which allow direct comparison between algorithms and encourage competition. We are presented with a unique opportunity to shape the development of Neuromorphic Vision benchmarks and challenges by leveraging what has been learnt from the use of datasets in frame-based computer vision. Taking advantage of this opportunity, in this paper we review the role that benchmarks and challenges have played in the advancement of frame-based computer vision, and suggest guidelines for the creation of Neuromorphic Vision benchmarks and challenges. We also discuss the unique challenges faced when benchmarking Neuromorphic Vision algorithms, particularly when attempting to provide direct comparison with frame-based computer vision. PMID:26528120
Beamforming strategy of ULA and UCA sensor configuration in multistatic passive radar
NASA Astrophysics Data System (ADS)
Hossa, Robert
2009-06-01
A Beamforming Network (BN) concept for Uniform Linear Array (ULA) and Uniform Circular Array (UCA) dipole configurations designed for multistatic passive radar is considered in detail. In the case of the UCA configuration, a computationally efficient beamspace transformation from the UCA to a virtual ULA configuration with omnidirectional coverage is utilized. In effect, the idea of the proposed solution is equivalent to the antenna array factor shaping techniques dedicated to the ULA structure. Finally, example results from computer simulations of the elaborated spatial filtering solutions for the reference and surveillance channels are provided and discussed.
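A simplified sketch of the UCA-to-virtual-ULA (phase-mode, Davies-type) transformation is given below; element patterns, mutual coupling, and practical mode-count limits are ignored, and the array geometry and source direction are illustrative.

```python
import numpy as np
from scipy.special import jv

# Sketch: beamspace transformation of uniform circular array (UCA) snapshots
# into virtual ULA-like phase modes (Davies-type transform). Element patterns,
# mutual coupling and practical mode-count limits are ignored.

N, radius, wavelength = 16, 0.5, 1.0
k = 2 * np.pi / wavelength
phi_n = 2 * np.pi * np.arange(N) / N                   # element angles
theta = np.radians(40)                                 # true source azimuth
x = np.exp(1j * k * radius * np.cos(theta - phi_n))    # UCA snapshot (one source)

M = 4                                                  # highest phase mode used
modes = np.arange(-M, M + 1)
y = np.array([(x * np.exp(1j * m * phi_n)).sum() / N for m in modes])
v = y / (1j ** modes * jv(modes, k * radius))          # virtual ULA-like elements

# For a virtual ULA, v[m] ~ exp(j*m*theta); recover azimuth from the mode phase step.
est = np.angle(v[M + 1] * np.conj(v[M]))
print(np.degrees(est))                                 # ~40 degrees
```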
NASA Astrophysics Data System (ADS)
Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.
2009-10-01
A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. The interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. By enabling high-precision timestamping, the system handles peak data rates while preserving the main advantage of neuromorphic electronic systems, namely high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data, forming a network capable of detecting and locating a radiation source. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which is prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse distance-squared fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior vs. exterior deployment, the resolution desired and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and the ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogation, where radiation is beamed into a material to induce new/additional radiation emission beyond what the material would emit spontaneously. The fact that the nuclear material is the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate ordinary material, scatter in new directions or be absorbed. Thus if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than that of the radiation detector, and this can add to the observed count rate.
The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal, and it is a key challenge that requires a combined system calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work. (authors)
Line width determination using a biomimetic fly eye vision system.
Benson, John B; Wright, Cameron H G; Barrett, Steven F
2007-01-01
Developing a new vision system based on the vision of the common house fly, Musca domestica, has created many interesting design challenges. One of those problems is line width determination, which is the topic of this paper. It has been discovered that line width can be determined with a single sensor as long as either the sensor or the object in question has a constant, known velocity. This is an important first step toward determining the width of any arbitrary object with unknown velocity.
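The single-sensor width estimate reduces to multiplying the known velocity by the time the line occupies the sensor's field of view; a toy sampled-signal version (threshold and numbers invented) is:

```python
import numpy as np

# Sketch: with a known, constant relative velocity v, the width of a line is
# v multiplied by the time the line occupies the sensor's field of view.
# The threshold and signal values below are made up for illustration.

def line_width(signal, fs_hz, velocity_m_s, threshold=0.5):
    """signal: 1D sensor output; returns the width (m) of the crossing event."""
    idx = np.flatnonzero(signal > threshold)
    if idx.size == 0:
        return 0.0
    duration_s = (idx[-1] - idx[0] + 1) / fs_hz
    return velocity_m_s * duration_s

fs = 1000.0
sig = np.zeros(1000)
sig[400:450] = 1.0                             # line occupies 50 ms of the record
print(line_width(sig, fs, velocity_m_s=0.2))   # 0.2 m/s * 0.05 s = 0.01 m
```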
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Application of aircraft navigation sensors to enhanced vision systems
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.
1993-01-01
In this presentation, the applicability of various aircraft navigation sensors to enhanced vision system design is discussed. First, the accuracy requirements of the FAA for precision landing systems are presented, followed by the current navigation systems and their characteristics. These systems include Instrument Landing System (ILS), Microwave Landing System (MLS), Inertial Navigation, Altimetry, and Global Positioning System (GPS). Finally, the use of navigation system data to improve enhanced vision systems is discussed. These applications include radar image rectification, motion compensation, and image registration.
NASA Technical Reports Server (NTRS)
Christian, John A.; Patangan, Mogi; Hinkel, Heather; Chevray, Keiko; Brazzel, Jack
2012-01-01
The Orion Multi-Purpose Crew Vehicle is a new spacecraft being designed by NASA and Lockheed Martin for future crewed exploration missions. The Vision Navigation Sensor is a Flash LIDAR that will be the primary relative navigation sensor for this vehicle. To obtain a better understanding of this sensor's performance, the Orion relative navigation team has performed both flight tests and ground tests. This paper summarizes and compares the performance results from the STS-134 flight test, called the Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective, and the ground tests at the Space Operations Simulation Center.
Data fusion for a vision-aided radiological detection system: Calibration algorithm performance
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Henderson, Kristofer; Martin, Allan; Riley, Phillip; Koppal, Sanjeev; Enqvist, Andreas
2018-05-01
In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm to fuse the data from multiple radiological and 3D vision sensors as one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available visual sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor and a Velodyne HDL-32E High Definition LiDAR Sensor, which is a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source arranged in a cube with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is utilized to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Using the vision sensor alone to determine the location of a detector would limit the possible locations and would not account for room dependence (facility-dependent deviation) when generating a detector pseudo-location to be used for later data analysis. Using manually measured source location data, our algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location. Calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm and the HDL-32E produced an average calibration-difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
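A rough sketch of this kind of calibration is given below: the detector location and an effective source strength are fitted to the vision-measured source positions and the measured count rates through the inverse-square model, and the calibration-difference is reported. The data are synthetic and the facility-dependent scatter correction described in the paper is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: fit the detector location (and an effective source strength) to the
# vision-measured source positions and the measured count rates using the
# inverse-square model, then report the "calibration-difference" (Euclidean
# distance to the hand-measured detector location). All data are synthetic.

rng = np.random.default_rng(1)
src_pos = rng.uniform(0.5, 3.0, size=(27, 3))       # 27 source positions (vision data)
true_det = np.array([1.5, 0.8, 1.0])                # "hand-measured" detector location
S_true = 5.0e4
rates = S_true / np.sum((src_pos - true_det) ** 2, axis=1)
rates *= rng.normal(1.0, 0.03, size=rates.shape)    # 3% counting noise

def residuals(p):
    det, S = p[:3], p[3]
    return S / np.sum((src_pos - det) ** 2, axis=1) - rates

x0 = np.array([*src_pos.mean(axis=0), 1.0e4])       # start from the centroid
fit = least_squares(residuals, x0=x0)
det_est = fit.x[:3]
calibration_difference = np.linalg.norm(det_est - true_det)
print(det_est, calibration_difference)
```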
Enhanced modeling and simulation of EO/IR sensor systems
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; May, Christopher
2015-05-01
The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. This measurement-to-high-fidelity modeling and simulation process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.
77 FR 42704 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Vision Sensors, 12 AN/APG-78 Fire Control Radars (FCR) with Radar Electronics Unit (LONGBOW component... Target Acquisition and Designation Sight, 27 AN/AAR-11 Modernized Pilot Night Vision Sensors, 12 AN/APG... enhance the protection of key oil and gas infrastructure and platforms which are vital to U.S. and western...
76 FR 8278 - Special Conditions: Gulfstream Model GVI Airplane; Enhanced Flight Vision System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... detected by infrared sensors can be much different from that detected by natural pilot vision. On a dark... by many imaging infrared systems. On the other hand, contrasting colors in visual wavelengths may be... of the EFVS image and the level of EFVS infrared sensor performance could depend significantly on...
Vision Sensor-Based Road Detection for Field Robot Navigation
Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen
2015-01-01
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514
NASA Astrophysics Data System (ADS)
Jain, A. K.; Dorai, C.
Computer vision has emerged as a challenging and important area of research, both as an engineering and a scientific discipline. The growing importance of computer vision is evident from the fact that it was identified as one of the "Grand Challenges" and also from its prominent role in the National Information Infrastructure. While the design of a general-purpose vision system continues to be elusive, machine vision systems are being used successfully in specific application domains. Building a practical vision system requires a careful selection of appropriate sensors, extraction and integration of information from available cues in the sensed data, and evaluation of system robustness and performance. The authors discuss and demonstrate advantages of (1) multi-sensor fusion, (2) combination of features and classifiers, (3) integration of visual modules, and (4) admissibility and goal-directed evaluation of vision algorithms. The requirements of several prominent real-world applications such as biometry, document image analysis, image and video database retrieval, and automatic object model construction offer exciting problems and new opportunities to design and evaluate vision algorithms.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured on a calibration target; the concentric circles are employed to determine the true projected centres of the circles. A calibration point generation procedure is then used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method; the hybrid of the pinhole model and the MLPNN therefore represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
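A sketch of the residual-correction idea follows: after the parametric (pinhole/RAC) calibration, the remaining errors are modelled by a small MLP as a function of image coordinates. The data are synthetic and scikit-learn's MLPRegressor merely stands in for the paper's MLPNN.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the residual-correction idea: after the parametric (pinhole/RAC)
# calibration, the remaining reprojection errors are modelled by a small MLP.
# Data here are synthetic; MLPRegressor stands in for the paper's MLPNN.

rng = np.random.default_rng(0)
uv = rng.uniform(0, 1000, size=(500, 2))                 # image coords of calibration points
residual = 0.002 * (uv - 500) + 1e-6 * (uv - 500) ** 2   # synthetic systematic residual (px)

mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
mlp.fit(uv, residual)                                     # learn residual as a function of (u, v)

correction = mlp.predict(uv[:5])                          # predicted correction for 5 points
print(correction)
```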
FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision
Botella, Guillermo; Martín H., José Antonio; Santos, Matilde; Meyer-Baese, Uwe
2011-01-01
Motion estimation is a low-level vision task that is especially relevant due to its wide range of applications in the real world. Many of the best motion estimation algorithms include some of the features that are found in mammals, which would demand huge computational resources and therefore are not usually available in real-time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) in order to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system and an analysis of the computational resources and performance of the applied algorithms. PMID:22164069
Data Fusion for a Vision-Radiological System: a Statistical Calibration Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev; Riley, Phillip
2015-07-01
Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of calibration algorithms for characterizing the fused sensor system as a single entity. There is an apparent need for correcting for a scene deviation from the basic inverse distance-squared law governing the detection rates even when evaluating system calibration algorithms. In particular, the computer vision system enables a map of distance-dependence of the sources being tracked, to which the time-dependent radiological data can be incorporated by means of data fusion of the two sensors' output data. (authors)
Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System
García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel
2012-01-01
This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method, working on information extracted from contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose, infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents plenty of tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704
Spherical harmonic analysis of the sound radiation from omnidirectional loudspeaker arrays
NASA Astrophysics Data System (ADS)
Pasqual, A. M.
2014-09-01
Omnidirectional sound sources are widely used in room acoustics. These devices are made up of loudspeakers mounted on a spherical or polyhedral cabinet, where the dodecahedral shape prevails. Although such electroacoustic sources have been made readily available to acousticians by many manufacturers, an in-depth investigation of their vibroacoustic behavior has not yet been provided. To fill this gap, this paper presents a theoretical study of the sound radiation from omnidirectional loudspeaker arrays, carried out using a mathematical model based on spherical harmonic analysis. Eight different loudspeaker arrangements on the sphere are considered: the five well-known Platonic solid layouts and three extremal system layouts. The latter possess useful properties for spherical loudspeaker arrays used as directivity-controlled sound sources, so these layouts are included here to investigate whether they could also be of interest as omnidirectional sources. It is shown through a comparative analysis that the dodecahedral array leads to the lowest error in producing an omnidirectional sound field and to the highest acoustic power, which corroborates the prevalence of that layout. In addition, if a source with fewer than 12 loudspeakers is required, it is shown that tetrahedra or hexahedra can be used instead, whereas the extremal system layouts are not attractive choices for omnidirectional loudspeaker arrays.
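A minimal sketch of the kind of spherical-harmonic decomposition involved is given below, using scipy's sph_harm to fit coefficients to a sampled far-field pressure pattern; the sampled field, grid, and truncation order are assumptions, not the paper's model.

```python
import numpy as np
from scipy.special import sph_harm

# Sample directions on a coarse azimuth/polar grid (far-field pressure).
az = np.linspace(0, 2 * np.pi, 36, endpoint=False)      # azimuth theta
pol = np.linspace(0.05, np.pi - 0.05, 18)                # polar angle phi
TH, PH = np.meshgrid(az, pol)

# Hypothetical measured/simulated pressure: nearly omnidirectional with a
# small first-order perturbation.
p = 1.0 + 0.1 * np.cos(PH)

# Least-squares fit of spherical-harmonic coefficients up to order N.
N = 3
basis = [sph_harm(m, n, TH, PH).ravel()
         for n in range(N + 1) for m in range(-n, n + 1)]
A = np.stack(basis, axis=1)
coeffs, *_ = np.linalg.lstsq(A, p.ravel().astype(complex), rcond=None)

# Energy outside the monopole (order-0) term quantifies the deviation from
# an omnidirectional radiation pattern.
deviation = np.sum(np.abs(coeffs[1:])**2) / np.sum(np.abs(coeffs)**2)
print(f"non-omnidirectional energy fraction: {deviation:.3f}")
```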
Square tracking sensor for autonomous helicopter hover stabilization
NASA Astrophysics Data System (ADS)
Oertel, Carl-Henrik
1995-06-01
Sensors for synthetic vision are needed to extend the mission profiles of helicopters. A special task for various applications is the autonomous position hold of a helicopter above a ground-fixed or moving target. As a proof of concept for a general synthetic vision solution, a restricted machine vision system capable of locating and tracking a special target was developed by the Institute of Flight Mechanics of Deutsche Forschungsanstalt fur Luft- und Raumfahrt e.V. (i.e., German Aerospace Research Establishment). This sensor, which is specialized to detect and track a square, was integrated in the fly-by-wire helicopter ATTHeS (i.e., Advanced Technology Testing Helicopter System). An existing model-following controller for the forward flight condition was adapted to the hover and low-speed requirements of the flight vehicle. The special target, a black square with a side length of one meter, was mounted on top of a car. Flight tests demonstrated the automatic stabilization of the helicopter above the moving car by synthetic vision.
Omnidirectional structured light in a flexible configuration.
Paniagua, Carmen; Puig, Luis; Guerrero, José J
2013-10-14
Structured light is a perception method that allows us to obtain 3D information from images of the scene by projecting synthetic features with a light emitter. Traditionally, this method considers a rigid configuration, where the position and orientation of the light emitter with respect to the camera are known and calibrated beforehand. In this paper we propose a new omnidirectional structured light system in a flexible configuration, which overcomes the rigidity of traditional structured light systems. We propose the use of an omnidirectional camera combined with a conic pattern light emitter. Since the light emitter is visible in the omnidirectional image, the computation of its location is possible. With this information and the projected conic in the omnidirectional image, we are able to compute the conic reconstruction, i.e., the 3D information of the conic in the space. This reconstruction considers the recovery of the depth and orientation of the scene surface where the conic pattern is projected. One application of our proposed structured light system in flexible configuration consists of a wearable omnicamera with a low-cost laser in hand for personal assistance to the visually impaired.
Automatic panoramic thermal integrated sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.
2005-05-01
Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defense and offensive operations, as well as serve as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480 pixel IR camera, with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for welding of aluminum pipe was constructed. This research studies the intelligent welding process of aluminum alloy pipe 6063S-T5 in a fixed position with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image was processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
Performance analysis of panoramic infrared systems
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Driggers, Ronald G.; Holst, Gerald; Krapels, Keith
2014-05-01
Panoramic imagers are becoming more commonplace in the visible part of the spectrum. These imagers are often used in the real estate market, extreme sports, teleconferencing, and security applications. Infrared panoramic imagers, on the other hand, are not as common and only a few have been demonstrated. A panoramic image can be formed in several ways, using pan and stitch, distributed aperture, or omnidirectional optics. When omnidirectional optics are used, the detected image is a warped view of the world that is mapped on the focal plane array in a donut shape. The final image on the display is the mapping of the omnidirectional donut shape image back to the panoramic world view. In this paper we analyze the performance of uncooled thermal panoramic imagers that use omnidirectional optics, focusing on range performance.
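The donut-to-panorama mapping described above can be illustrated with OpenCV's polar warp. The sketch below is a generic unwrapping, not the paper's processing chain; the inner blind radius, angular resolution, and rotation convention are assumptions.

```python
import cv2

def unwrap_donut(frame, center, r_inner, r_outer, azimuth_samples=1440):
    """Map the annular ('donut') image from an omnidirectional optic onto a
    rectangular panorama. In warpPolar output, rows correspond to angle and
    columns to radius, so we crop away the blind inner disc and rotate so
    azimuth runs horizontally."""
    polar = cv2.warpPolar(frame, (int(r_outer), azimuth_samples), center,
                          r_outer, cv2.WARP_POLAR_LINEAR)
    ring = polar[:, int(r_inner):int(r_outer)]   # keep only the imaged annulus
    return cv2.rotate(ring, cv2.ROTATE_90_COUNTERCLOCKWISE)
```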
2018-01-01
Advanced driver assistance systems, ADAS, have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. PMID:29351267
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), have created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking have been proven, and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking, and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking, and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g., Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low-power networking protocol designed to carry large volumes of data through the network. The low-power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.
Advanced integrated enhanced vision systems
NASA Astrophysics Data System (ADS)
Kerr, J. R.; Luk, Chiu H.; Hammerstrom, Dan; Pavel, Misha
2003-09-01
In anticipation of its ultimate role in transport, business and rotary wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
An embedded vision system for an unmanned four-rotor helicopter
NASA Astrophysics Data System (ADS)
Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James
2006-10-01
In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.
A new family of omnidirectional and holonomic wheeled platforms for mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, F.G.; Killough, S.M.
1994-08-01
This paper presents the concepts for a new family of holonomic wheeled platforms that feature full omnidirectionality with simultaneous and independently controlled rotational and translational motion capabilities. The authors first present the "orthogonal wheels" concept and the two major wheel assemblies on which these platforms are based. The authors then describe how a combination of these assemblies with appropriate control can be used to generate an omnidirectional capability for mobile robot platforms. Several alternative designs are considered, and their respective characteristics with respect to rotational and translational motion control are discussed. The design and control of a prototype platform developed to test and demonstrate the proposed concepts is then described, and experimental results illustrating the full omnidirectionality of the platforms with decoupled rotational and translational degrees of freedom are presented.
Design of verification platform for wireless vision sensor networks
NASA Astrophysics Data System (ADS)
Ye, Juanjuan; Shang, Fei; Yu, Chuang
2017-08-01
At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transition from theoretical research on WVSNs to practical application. Therefore, it is necessary to study the construction of verification platforms for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module and a power acquisition module to design a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol, in order to set up a verification platform called AdvanWorks for WVSNs. Experiments show that AdvanWorks can successfully perform image acquisition, coding, and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for the follow-up application of WVSNs.
Landmark navigation and autonomous landing approach with obstacle detection for aircraft
NASA Astrophysics Data System (ADS)
Fuerst, Simon; Werner, Stefan; Dickmanns, Dirk; Dickmanns, Ernst D.
1997-06-01
A machine perception system for aircraft and helicopters using multiple sensor data for state estimation is presented. By combining conventional aircraft sensors such as gyros, accelerometers, artificial horizon, aerodynamic measuring devices and GPS with vision data taken by conventional CCD cameras mounted on a pan and tilt platform, the position of the craft can be determined, as well as its position relative to runways and natural landmarks. The vision data of natural landmarks are used to improve position estimates during autonomous missions. A built-in landmark management module decides which landmark should be focused on by the vision system, depending on the distance to the landmark and the aspect conditions. More complex landmarks like runways are modeled with different levels of detail that are activated dependent on range. A supervisor process compares vision data and GPS data to detect mistracking of the vision system, e.g., due to poor visibility, and tries to reinitialize the vision system or to set focus on another landmark available. During landing approach, obstacles like trucks and airplanes can be detected on the runway. The system has been tested in real time within a hardware-in-the-loop simulation. Simulated aircraft measurements corrupted by noise and other characteristic sensor errors have been fed into the machine perception system; the image processing module for relative state estimation was driven by computer-generated imagery. Results from real-time simulation runs are given.
Close-Range Tracking of Underwater Vehicles Using Light Beacons
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David
2016-01-01
This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time. PMID:27023547
Close-Range Tracking of Underwater Vehicles Using Light Beacons.
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Istenič, Klemen; Ribas, David
2016-03-25
This paper presents a new tracking system for autonomous underwater vehicles (AUVs) navigating in a close formation, based on computer vision and the use of active light markers. While acoustic localization can be very effective from medium to long distances, it is not so advantageous in short distances when the safety of the vehicles requires higher accuracy and update rates. The proposed system allows the estimation of the pose of a target vehicle at short ranges, with high accuracy and execution speed. To extend the field of view, an omnidirectional camera is used. This camera provides a full coverage of the lower hemisphere and enables the concurrent tracking of multiple vehicles in different positions. The system was evaluated in real sea conditions by tracking vehicles in mapping missions, where it demonstrated robust operation during extended periods of time.
Rolling friction and energy dissipation in a spinning disc
Ma, Daolin; Liu, Caishan; Zhao, Zhen; Zhang, Hongjian
2014-01-01
This paper presents the results of both experimental and theoretical investigations of the dynamics of a steel disc spinning on a horizontal rough surface. With a pair of high-speed cameras, a stereoscopic vision method is adopted to perform omnidirectional measurements of the temporal evolution of the disc's motion. The experimental data allow us to detail the dynamics of the disc, and consequently to quantify its energy. From our experimental observations, it is confirmed that rolling friction is a primary factor responsible for the dissipation of the energy. Furthermore, a mathematical model, in which the rolling friction is characterized by a resistance torque proportional to the square of the precession rate, is also proposed. By employing the model, we perform qualitative analysis and numerical simulations. Both provide results that precisely agree with our experimental findings. PMID:25197246
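A minimal numerical sketch of the stated friction model (resistance torque proportional to the square of the precession rate) is shown below; the inertia and friction coefficients are illustrative assumptions, not the measured values from the paper.

```python
import numpy as np

I = 2.5e-4        # effective moment of inertia, kg m^2 (assumed)
k = 1.0e-8        # rolling-friction torque coefficient, N m s^2 (assumed)

def simulate(omega0=200.0, dt=1e-3, t_end=60.0):
    """Explicit-Euler spin-down of the precession rate Omega under a
    resistance torque k * Omega**2."""
    t, omega = 0.0, omega0
    history = []
    while omega > 1.0 and t < t_end:
        torque = k * omega**2            # resistance torque ~ Omega^2
        omega -= (torque / I) * dt       # Euler step on I * dOmega/dt = -torque
        t += dt
        history.append((t, omega, 0.5 * I * omega**2))  # time, rate, energy
    return np.array(history)

traj = simulate()
print(f"precession rate after {traj[-1, 0]:.0f} s: {traj[-1, 1]:.1f} rad/s")
```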
A Solar Position Sensor Based on Image Vision.
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Acuña, Alexis; Rosales, Pedro; Suastegui, José
2017-07-29
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface; to achieve this despite the relative movement of the Sun, solar tracking systems are used. Consequently, rules and standards specify the minimum accuracy required of tracking systems used in solar collector evaluation. Achieving this accuracy is not easy; hence, this document presents the design, construction and characterization of a sensor based on a vision system that finds the relative azimuth and elevation error of the solar surface of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the sunrays' direction as well as the tilt and position of the sensor. The sensor's characterization showed that a focus error or Sun position can be measured with an accuracy of 0.0426° and an uncertainty of 0.986%, and the design can be modified to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. To conclude, the solar tracking sensor based on a vision system meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool on photovoltaic installations and solar collectors.
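Under a simple pinhole model, the angular error reported by such a vision sensor follows from the pixel offset of the Sun centroid relative to the optical axis. The sketch below assumes a hypothetical pixel pitch and focal length and is not the authors' calibration procedure.

```python
import math

def pointing_error_deg(px_dx, px_dy, pixel_pitch_mm=0.0014, focal_mm=8.0):
    """Angular pointing error of the Sun centroid relative to the optical
    axis under a pinhole model (pixel pitch and focal length are assumed)."""
    az_err = math.degrees(math.atan2(px_dx * pixel_pitch_mm, focal_mm))
    el_err = math.degrees(math.atan2(px_dy * pixel_pitch_mm, focal_mm))
    return az_err, el_err

# A 4-pixel centroid offset corresponds to roughly 0.04 deg with these optics,
# on the order of the 0.0426 deg accuracy reported in the paper.
print(pointing_error_deg(4, 0))
```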
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique has a hyperbolic mirror mounted above the lens, so that the luminosity of the environment over a full 360 degrees of azimuth is captured in a single image. We apply the light field method, a technique of Image-Based Rendering (IBR), to generate arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect images from many view directions in the light field. Thus our method allows the user to explore a wide scene and achieve a realistic representation of the virtual environment. To demonstrate the proposed method, we capture an image sequence of our lab's interior environment with an omni-directional camera, and successfully generate arbitrary viewpoint images for a virtual tour of the environment.
Multi-Sensor Person Following in Low-Visibility Scenarios
Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier
2010-01-01
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506
Multi-sensor person following in low-visibility scenarios.
Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier
2010-01-01
Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.
A laser-based vision system for weld quality inspection.
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presence, positions and sizes of the weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved.
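The laser-triangulation principle the sensor relies on can be summarised by a first-order relation between the image shift of the laser line and the surface height deviation. The sketch below uses assumed optics and geometry and is not the authors' calibration model.

```python
import math

def height_from_shift(shift_px, pixel_pitch_mm, focal_mm, standoff_mm, tri_angle_deg):
    """First-order laser-triangulation relation: a surface raised by h shifts
    the imaged laser line by dx ~ f * h * sin(theta) / D, so
    h ~ dx * D / (f * sin(theta)). All parameter values are assumptions."""
    dx = shift_px * pixel_pitch_mm
    return dx * standoff_mm / (focal_mm * math.sin(math.radians(tri_angle_deg)))

# 3-pixel shift, 5 um pixels, 16 mm lens, 300 mm standoff, 30 deg triangulation angle
print(f"{height_from_shift(3, 0.005, 16.0, 300.0, 30.0):.3f} mm")   # ~0.56 mm
```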
A Laser-Based Vision System for Weld Quality Inspection
Huang, Wei; Kovacevic, Radovan
2011-01-01
Welding is a very complex process in which the final weld quality can be affected by many process parameters. In order to inspect the weld quality and detect the presence of various weld defects, different methods and systems are studied and developed. In this paper, a laser-based vision system is developed for non-destructive weld quality inspection. The vision sensor is designed based on the principle of laser triangulation. By processing the images acquired from the vision sensor, the geometrical features of the weld can be obtained. Through the visual analysis of the acquired 3D profiles of the weld, the presence, positions and sizes of the weld defects can be accurately identified and, therefore, non-destructive weld quality inspection can be achieved. PMID:22344308
Present and future of vision systems technologies in commercial flight operations
NASA Astrophysics Data System (ADS)
Ward, Jim
2016-05-01
The development of systems to enable pilots of all types of aircraft to see through fog, clouds, and sandstorms and land in low visibility has been widely discussed and researched across aviation. For military applications, the goal has been to operate in a Degraded Visual Environment (DVE), using sensors to enable flight crews to see and operate without concern for weather that limits human visibility. These military DVE goals are mainly oriented to the off-field landing environment. For commercial aviation, the Federal Aviation Administration (FAA) implemented operational regulations in 2004 that allow the flight crew to see the runway environment using an Enhanced Flight Vision System (EFVS) and continue the approach below the normal landing decision height. The FAA is expanding the current use and economic benefit of EFVS technology and will soon permit landing without any natural vision using real-time weather-penetrating sensors. The operational goals of both of these efforts, DVE and EFVS, have been the stimulus for development of new sensors and vision displays to create the modern flight deck.
Vision-Based SLAM System for Unmanned Aerial Vehicles
Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni
2016-01-01
The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rate and highly noisy. PMID:26999131
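A stripped-down Kalman filter of the kind underlying such estimators is sketched below: a constant-velocity prediction corrected by position fixes, standing in for the paper's full EKF that also fuses AHRS and monocular-camera measurements. Dimensions, noise levels, and the simulated trajectory are assumptions.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])       # observe position only
Q = 0.01 * np.eye(4)                                # process noise (assumed)
R = 4.0 * np.eye(2)                                 # GPS noise, ~2 m std (assumed)

x = np.zeros(4)        # state [px, py, vx, vy]
P = np.eye(4)

def step(x, P, z):
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a noisy position fix z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for t in range(50):
    z = np.array([1.0 * t * dt, 0.5 * t * dt]) + np.random.randn(2) * 2.0
    x, P = step(x, P, z)
print(x)   # estimated position and velocity after 5 s of fixes
```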
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
Realization of an omnidirectional source of sound using parametric loudspeakers.
Sayin, Umut; Artís, Pere; Guasch, Oriol
2013-09-01
Parametric loudspeakers are often used in beam forming applications where a high directivity is required. Withal, in this paper it is proposed to use such devices to build an omnidirectional source of sound. An initial prototype, the omnidirectional parametric loudspeaker (OPL), consisting of a sphere with hundreds of ultrasonic transducers placed on it has been constructed. The OPL emits audible sound thanks to the parametric acoustic array phenomenon, and the close proximity and the large number of transducers results in the generation of a highly omnidirectional sound field. Comparisons with conventional dodecahedron loudspeakers have been made in terms of directivity, frequency response, and in applications such as the generation of diffuse acoustic fields in reverberant chambers. The OPL prototype has performed better than the conventional loudspeaker especially for frequencies higher than 500 Hz, its main drawback being the difficulty to generate intense pressure levels at low frequencies.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
ERIC Educational Resources Information Center
Chen, Kan; Stafford, Frank P.
A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…
VLSI chips for vision-based vehicle guidance
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1994-02-01
Sensor-based vehicle guidance systems are gathering rapidly increasing interest because of their potential for increasing safety, convenience, environmental friendliness, and traffic efficiency. Examples of applications include intelligent cruise control, lane following, collision warning, and collision avoidance. This paper reviews the research trends in vision-based vehicle guidance with an emphasis on VLSI chip implementations of the vision systems. As an example of VLSI chips for vision-based vehicle guidance, a stereo vision system is described in detail.
A Solar Position Sensor Based on Image Vision
Ruelas, Adolfo; Velázquez, Nicolás; Villa-Angulo, Carlos; Rosales, Pedro; Suastegui, José
2017-01-01
Solar collector technologies perform better when the Sun beam direction is normal to the capturing surface; to achieve this despite the relative movement of the Sun, solar tracking systems are used. Consequently, rules and standards specify the minimum accuracy required of tracking systems used in solar collector evaluation. Achieving this accuracy is not easy; hence, this document presents the design, construction and characterization of a sensor based on a vision system that finds the relative azimuth and elevation error of the solar surface of interest. With these characteristics, the sensor can be used as a reference in control systems and in their evaluation. The proposed sensor is based on a microcontroller with a real-time clock, inertial measurement sensors, geolocation and a vision sensor, which obtains the angle of incidence of the sunrays' direction as well as the tilt and position of the sensor. The sensor's characterization showed that a focus error or Sun position can be measured with an accuracy of 0.0426° and an uncertainty of 0.986%, and the design can be modified to reach an accuracy under 0.01°. The sensor was validated by measuring the focus error of one of the best commercial solar tracking systems, a Kipp & Zonen SOLYS 2. To conclude, the solar tracking sensor based on a vision system meets the Sun detection requirements and the accuracy conditions needed for use in solar tracking systems and their evaluation, or as a tracking and orientation tool on photovoltaic installations and solar collectors. PMID:28758935
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low contrast visibility and sea clutter - such as white caps - is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
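As an illustration of the local-contrast enhancement applied before tracking, a CPU stand-in using OpenCV's CLAHE followed by a mild unsharp mask is sketched below; the paper's enhancement ran on the GPU, and the clip limit, tile size, and sharpening weights here are assumptions.

```python
import cv2

def enhance_maritime_frame(gray):
    """CPU stand-in for interpolated local histogram equalisation plus
    sharpening: CLAHE followed by a simple unsharp mask."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)
    blur = cv2.GaussianBlur(eq, (0, 0), sigmaX=3)
    return cv2.addWeighted(eq, 1.5, blur, -0.5, 0)   # sharpened result
```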
Pre-Capture Privacy for Small Vision Sensors.
Pittaluga, Francesco; Koppal, Sanjeev Jagannatha
2017-11-01
The next wave of micro and nano devices will create a world with trillions of small networked cameras. This will lead to increased concerns about privacy and security. Most privacy preserving algorithms for computer vision are applied after image/video data has been captured. We propose to use privacy preserving optics that filter or block sensitive information directly from the incident light-field before sensor measurements are made, adding a new layer of privacy. In addition to balancing the privacy and utility of the captured data, we address trade-offs unique to miniature vision sensors, such as achieving high-quality field-of-view and resolution within the constraints of mass and volume. Our privacy preserving optics enable applications such as depth sensing, full-body motion tracking, people counting, blob detection and privacy preserving face recognition. While we demonstrate applications on macro-scale devices (smartphones, webcams, etc.) our theory has impact for smaller devices.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Design of a laser system for instantaneous location of a longwall shearer
NASA Technical Reports Server (NTRS)
Stein, R.
1981-01-01
Calculations and measurements for the design of a laser system for instantaneous location of a longwall shearer were made. The designs determine shearer location to approximately one foot. The roll, pitch, and yaw angles of the shearer track are determined to approximately two degrees. The first technique uses the water target system. A single silicon sensor system and three gallium arsenide laser beams are used in this technique. The second technique is based on an arrangement similar to that employed in aircraft omnidirectional position finding. The angle between two points is determined by combining information in an omnidirectional flash with a scanned, narrow beam beacon. It is concluded that this approach maximizes the signal levels.
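The second technique's bearing measurement can be sketched in the style of VOR-type position finding: the omnidirectional flash marks the scan reference and the delay until the narrow beam sweeps past encodes the angle. The scan rate below is an assumed value for illustration, not a figure from the report.

```python
def bearing_deg(t_flash_s, t_beam_s, scan_rate_deg_per_s=3600.0):
    """Bearing of the sensor relative to the beacon's scan reference: the
    omnidirectional flash defines time zero, and the delay until the scanned
    narrow beam passes gives the angle (scan rate is an assumed value)."""
    return ((t_beam_s - t_flash_s) * scan_rate_deg_per_s) % 360.0

print(bearing_deg(0.000, 0.0125))   # 45 deg with an assumed 10 rev/s scan
```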
Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
Llorca, David F.; Sotelo, Miguel A.; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M.
2010-01-01
This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance. PMID:22319323
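The stereo quantization error analysed in this study follows from the pinhole stereo model z = f*b/d, so a disparity step d_disparity maps to a depth step of roughly z^2 * d_disparity / (f*b). The baseline and focal length in the sketch below are assumptions for illustration, not the sensor's actual parameters.

```python
def depth_quantization_error(z_m, baseline_m=0.30, focal_px=800.0, disparity_step=1.0):
    """Depth error caused by a one-pixel (or sub-pixel) disparity step in a
    stereo rig: dz ~ z^2 * d_disparity / (f * b). Parameters are assumed."""
    return (z_m ** 2) * disparity_step / (focal_px * baseline_m)

for z in (5, 10, 20, 30):
    print(f"z = {z:2d} m -> quantization error ~ {depth_quantization_error(z):.2f} m")
```

The quadratic growth of the error with range is what makes the choice of baseline and focal length application-dependent, as the study emphasizes.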
Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades.
Orchard, Garrick; Jayawant, Ajinkya; Cohen, Gregory K; Thakor, Nitish
2015-01-01
Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labeling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches.
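A toy offline approximation of event generation from a frame sequence is sketched below, only to illustrate the kind of data such conversions produce; real neuromorphic sensors (and the actuated-saccade recording used in the paper) emit asynchronous events, so this is a schematic, not the paper's conversion method.

```python
import numpy as np

def frames_to_events(frames, times, threshold=0.1):
    """Emit (x, y, t, polarity) tuples when the log intensity at a pixel
    changes by more than a threshold between consecutive frames."""
    events = []
    prev = np.log1p(frames[0].astype(np.float32))
    for frame, t in zip(frames[1:], times[1:]):
        cur = np.log1p(frame.astype(np.float32))
        diff = cur - prev
        ys, xs = np.where(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
        prev = cur
    return events
```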
Sensor Needs for Control and Health Management of Intelligent Aircraft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Gang, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.
2004-01-01
NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.
NASA Astrophysics Data System (ADS)
Tramutola, A.; Paltro, D.; Cabalo Perucha, M. P.; Paar, G.; Steiner, J.; Barrio, A. M.
2015-09-01
Vision Based Navigation (VBNAV) has been identified as a valid technology to support space exploration because it can improve autonomy and safety of space missions. Several mission scenarios can benefit from the VBNAV: Rendezvous & Docking, Fly-Bys, Interplanetary cruise, Entry Descent and Landing (EDL) and Planetary Surface exploration. For some of them VBNAV can improve the accuracy in state estimation as additional relative navigation sensor or as absolute navigation sensor. For some others, like surface mobility and terrain exploration for path identification and planning, VBNAV is mandatory. This paper presents the general avionic architecture of a Vision Based System as defined in the frame of the ESA R&T study “Multi-purpose Vision-based Navigation System Engineering Model - part 1 (VisNav-EM-1)” with special focus on the surface mobility application.
Robotic vision. [process control applications]
NASA Technical Reports Server (NTRS)
Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.
1979-01-01
Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors employing radiation of radio waves, sound waves, and laser light, respectively, to light up unobservable features in the scene are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system performing edge detection and thresholding at 30 frames/sec television frame rates is described. The template matching and discrimination approach to recognize objects are noted. Applications of robotic vision in industry for tasks too monotonous or too dangerous for the workers are mentioned.
Avendaño, Carlos G; Palomares, Laura O
2018-04-20
We consider the propagation of electromagnetic waves throughout a nanocomposite structurally chiral medium consisting of metallic nanoballs randomly dispersed in a structurally chiral material whose dielectric properties can be represented by a resonant effective uniaxial tensor. It is found that an omnidirectional narrow pass band and two omnidirectional narrow band gaps are created in the blue optical spectrum for right and left circularly polarized light, as well as narrow reflection bands for right circularly polarized light that can be controlled by varying the light incidence angle and the filling fraction of metallic inclusions.
Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.
Kim, Hwi; Hahn, Joonku; Lee, Byoungho
2009-04-13
Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions of structural factors such as the viewing angle of the facet panel and the observation distance for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.
An evaluation of differences due to changing source directivity in room acoustic computer modeling
NASA Astrophysics Data System (ADS)
Vigeant, Michelle C.; Wang, Lily M.
2004-05-01
This project examines the effects of changing source directivity in room acoustic computer models on objective parameters and subjective perception. Acoustic parameters and auralizations calculated from omnidirectional versus directional sources were compared. Three realistic directional sources were used, measured in a limited number of octave bands from a piano, singing voice, and violin. A highly directional source that beams only within one-sixteenth of a sphere was also tested. Objectively, there were differences of 5% or more in reverberation time (RT) between the realistic directional and omnidirectional sources. Between the beaming directional and omnidirectional sources, differences in clarity were close to the just-noticeable-difference (jnd) criterion of 1 dB. Subjectively, participants had great difficulty distinguishing between the realistic and omnidirectional sources; very few could discern the differences in RTs. However, a larger percentage (32% vs 20%) could differentiate between the beaming and omnidirectional sources, as well as the respective differences in clarity. Further studies of the objective results from different beaming sources have been pursued. The direction of the beaming source in the room is changed, as well as the beamwidth. The objective results are analyzed to determine if differences fall within the jnd of sound-pressure level, RT, and clarity.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review.
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F
2016-03-05
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as a background information for their future works.
Design And Implementation Of Integrated Vision-Based Robotic Workcells
NASA Astrophysics Data System (ADS)
Chen, Michael J.
1985-01-01
Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.
Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review
Pérez, Luis; Rodríguez, Íñigo; Rodríguez, Nuria; Usamentiaga, Rubén; García, Daniel F.
2016-01-01
In the factory of the future, most of the operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complete the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in the industry and now for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as a background information for their future works. PMID:26959030
Code of Federal Regulations, 2013 CFR
2013-01-01
... capabilities in one or more directions than can omnidirectionals, directionals are generally more expensive and... recent history of decreasing sales, may cause a number of manufacturers, including one or two of the... technical approaches to reducing or eliminating unreasonable risks of injury associated with omnidirectional...
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-05
...), imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD and overlay it with... that display the sensor imagery, with or without other flight information, on a head-down display. To... infrared sensors can be much different from that detected by natural pilot vision. On a dark night, thermal...
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot
Susperregi, Loreto; Sierra, Basilio; Castrillón, Modesto; Lorenzo, Javier; Martínez-Otzeta, Jose María; Lazkano, Elena
2013-01-01
Detecting people is a key capability for robots that operate in populated environments. In this paper, we have adopted a hierarchical approach that combines classifiers created using supervised learning in order to identify whether a person is in the view-scope of the robot or not. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The set of sensors is set up combining the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, and a thermopile array sensor. Experimental results carried out with a mobile platform in a manufacturing shop floor and in a science museum have shown that the false positive rate achieved using any single cue is drastically reduced. Our algorithm outperforms other well-known approaches, such as C4 and histogram of oriented gradients (HOG). PMID:24172285
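One of the baselines named above, the HOG pedestrian detector, can be exercised with a few lines of OpenCV. The following sketch is illustrative only; the image path frame.png and the detector parameters are assumptions, not the authors' setup:

```python
# Minimal HOG-based people detection, one of the comparison baselines above.
# Assumes OpenCV (cv2) is installed; "frame.png" is a placeholder input image.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")
# Returns candidate person bounding boxes and their SVM confidence weights.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", frame)
```

In the paper's hierarchical scheme, a single-cue detector of this kind would be only one stage, with depth and thermal classifiers used to reject its false positives.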
Code of Federal Regulations, 2012 CFR
2012-01-01
... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Requirements. 1204.3 Section 1204.3... STANDARD FOR OMNIDIRECTIONAL CITIZENS BAND BASE STATION ANTENNAS The Standard § 1204.3 Requirements. All omnidirectional CB base station antennas are required to comply with the following requirements. (a) Field joints...
Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones
Chen, Jing; Cao, Ruochen; Wang, Yongtian
2015-01-01
Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality fields. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A kind of sensor-aware VLAD algorithm, which is self-adaptive to different scale scenes, is utilized to recognize complex scenes. Considering vision-based registration algorithms are too fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates the tracking jitters. PMID:26690439
Mid-sized omnidirectional robot with hydraulic drive and steering
NASA Astrophysics Data System (ADS)
Wood, Carl G.; Perry, Trent; Cook, Douglas; Maxfield, Russell; Davidson, Morgan E.
2003-09-01
Through funding from the US Army Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program, Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS) has developed the T-series of omni-directional robots based on the USU omni-directional vehicle (ODV) technology. The ODV provides independent computer control of steering and drive in a single wheel assembly. By putting multiple omni-directional (OD) wheels on a chassis, a vehicle is capable of uncoupled translational and rotational motion. Previous robots in the series, the T1, T2, T3, ODIS, ODIS-T, and ODIS-S, have all used OD wheels based on electric motors. The T4 weighs approximately 1400 lbs and features a configuration of four drive wheels. Each wheel assembly consists of a hydraulic drive motor and a hydraulic steering motor. A gasoline engine is used to power both the hydraulic and electrical systems. The paper presents an overview of the mechanical design of the vehicle as well as potential uses of this technology in fielded systems.
Development of an omni-directional shear horizontal mode magnetostrictive patch transducer
NASA Astrophysics Data System (ADS)
Liu, Zenghua; Hu, Yanan; Xie, Muwen; Fan, Junwei; He, Cunfu; Wu, Bin
2018-04-01
The fundamental shear horizontal wave, the SH0 mode, has great potential for large-scale, high-efficiency defect detection and on-line monitoring in plate-like structures because of its non-dispersive characteristics. Aiming at consistently exciting a single SH0 mode in plate-like structures, an omni-directional shear horizontal mode magnetostrictive patch transducer (OSHM-MPT) is developed on the basis of the magnetostrictive effect. It consists of four fan-shaped array elements with corresponding plane solenoid array (PSA) coils, four fan-shaped permanent magnets and a circular nickel patch. The experimental results verify that the developed transducer can effectively produce a single SH0 mode in an aluminum plate. The frequency response characteristics of the developed transducer are tested. The results demonstrate that the proposed OSHM-MPT has a center frequency of 300 kHz, related to the distance between adjacent arc-shaped steps of the PSA coils. Furthermore, the omni-directivity of the developed transducer is tested. The results demonstrate that the transducer has high omnidirectional consistency.
Sensor fusion to enable next generation low cost Night Vision systems
NASA Astrophysics Data System (ADS)
Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.
2010-04-01
The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially the implications of molding highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity on the overall detection performance. This paper also gives an overview of the first results, showing that a reduction in FIR sensor resolution can be compensated for using fusion techniques, as can a reduction in sensitivity.
Mobile camera-space manipulation
NASA Technical Reports Server (NTRS)
Seelinger, Michael J. (Inventor); Yoder, John-David S. (Inventor); Skaar, Steven B. (Inventor)
2001-01-01
The invention is a method of using computer vision to control systems consisting of a combination of holonomic and nonholonomic degrees of freedom such as a wheeled rover equipped with a robotic arm, a forklift, and earth-moving equipment such as a backhoe or a front-loader. Using vision sensors mounted on the mobile system and the manipulator, the system establishes a relationship between the internal joint configuration of the holonomic degrees of freedom of the manipulator and the appearance of features on the manipulator in the reference frames of the vision sensors. Then, the system, perhaps with the assistance of an operator, identifies the locations of the target object in the reference frames of the vision sensors. Using this target information, along with the relationship described above, the system determines a suitable trajectory for the nonholonomic degrees of freedom of the base to follow towards the target object. The system also determines a suitable pose or series of poses for the holonomic degrees of freedom of the manipulator. With additional visual samples, the system automatically updates the trajectory and final pose of the manipulator so as to allow for greater precision in the overall final position of the system.
Fixation light hue bias revisited: implications for using adaptive optics to study color vision.
Hofer, H J; Blaschke, J; Patolia, J; Koenig, D E
2012-03-01
Current vision science adaptive optics systems use near infrared wavefront sensor 'beacons' that appear as red spots in the visual field. Colored fixation targets are known to influence the perceived color of macroscopic visual stimuli (Jameson, D., & Hurvich, L. M. (1967). Fixation-light bias: An unwanted by-product of fixation control. Vision Research, 7, 805-809.), suggesting that the wavefront sensor beacon may also influence perceived color for stimuli displayed with adaptive optics. Despite its importance for proper interpretation of adaptive optics experiments on the fine scale interaction of the retinal mosaic and spatial and color vision, this potential bias has not yet been quantified or addressed. Here we measure the impact of the wavefront sensor beacon on color appearance for dim, monochromatic point sources in five subjects. The presence of the beacon altered color reports both when used as a fixation target as well as when displaced in the visual field with a chromatically neutral fixation target. This influence must be taken into account when interpreting previous experiments and new methods of adaptive correction should be used in future experiments using adaptive optics to study color. Copyright © 2012 Elsevier Ltd. All rights reserved.
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images in forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching. This paper is devoted solely to these two steps. At a first stage a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step the features are matched based on the application of the following four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
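A schematic sketch of how the four constraints named above can be applied in sequence to candidate trunk-feature pairs is given below. The feature representation, similarity measure and thresholds are illustrative assumptions, not the authors' actual choices:

```python
# Illustrative filtering of candidate stereo matches with the four constraints
# named in the abstract: epipolar, similarity, ordering and uniqueness.
# Each feature is (x, y, descriptor); all thresholds are assumed values.
import numpy as np

def match_features(left, right, eps_epipolar=2.0, min_similarity=0.8):
    candidates = []
    for i, (xl, yl, dl) in enumerate(left):
        for j, (xr, yr, dr) in enumerate(right):
            if abs(yl - yr) > eps_epipolar:        # epipolar: similar row (rectified pair)
                continue
            sim = np.dot(dl, dr) / (np.linalg.norm(dl) * np.linalg.norm(dr) + 1e-9)
            if sim < min_similarity:                # similarity of attribute vectors
                continue
            candidates.append((i, j, sim))
    # uniqueness: keep the best-scoring match per left and per right feature
    candidates.sort(key=lambda c: -c[2])
    used_l, used_r, matches = set(), set(), []
    for i, j, sim in candidates:
        if i not in used_l and j not in used_r:
            used_l.add(i)
            used_r.add(j)
            matches.append((i, j))
    # ordering: matched features must appear in the same left-to-right order
    matches.sort(key=lambda m: left[m[0]][0])
    ordered, last_xr = [], -np.inf
    for i, j in matches:
        if right[j][0] >= last_xr:
            ordered.append((i, j))
            last_xr = right[j][0]
    return ordered
```

For fish-eye omnidirectional images the epipolar constraint is in reality a curve rather than a horizontal line, so the row check above stands in for the full epipolar geometry.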
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jena, S., E-mail: shuvendujena9@gmail.com; Tokas, R. B.; Sarkar, P.
2015-06-24
A multilayer TiO₂/SiO₂ structure (11 layers) acting as a one-dimensional photonic crystal (1D PC) has been designed and then fabricated using an asymmetric bipolar pulsed DC magnetron sputtering technique to obtain an omnidirectional photonic band gap. The experimentally measured photonic band gap (PBG) in the visible region matches well with the theoretically calculated band structure (ω vs. k) diagram. The experimentally measured omnidirectional reflection band of 44 nm over the incident angle range of 0°-70° agrees closely with the theoretically calculated band.
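The theoretical reflection band of such a stack is commonly computed with the transfer (characteristic) matrix method. The sketch below scans reflectance over wavelength at 0° and 70° incidence for an 11-layer high/low-index stack; the indices, quarter-wave design wavelength and substrate index are assumed values for illustration, not the parameters of the fabricated coating (dispersion and absorption are ignored):

```python
# Transfer-matrix reflectance of an 11-layer TiO2/SiO2 quarter-wave stack,
# illustrating how an omnidirectional reflection band is predicted.
import numpy as np

n_H, n_L = 2.3, 1.46            # assumed TiO2 / SiO2 refractive indices
lam0 = 550e-9                   # assumed quarter-wave design wavelength
layers = [(n_H, lam0 / (4 * n_H)) if k % 2 == 0 else (n_L, lam0 / (4 * n_L))
          for k in range(11)]   # H L H ... H, 11 layers

def reflectance(lam, theta0=0.0, n0=1.0, n_sub=1.52, pol="TE"):
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        cos_t = np.sqrt(1 - (n0 * np.sin(theta0) / n) ** 2 + 0j)
        eta = n * cos_t if pol == "TE" else n / cos_t
        delta = 2 * np.pi * n * d * cos_t / lam
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                          [1j * eta * np.sin(delta), np.cos(delta)]])
    cos_s = np.sqrt(1 - (n0 * np.sin(theta0) / n_sub) ** 2 + 0j)
    eta_s = n_sub * cos_s if pol == "TE" else n_sub / cos_s
    eta_0 = n0 * np.cos(theta0) if pol == "TE" else n0 / np.cos(theta0)
    B, C = M @ np.array([1, eta_s])
    r = (eta_0 * B - C) / (eta_0 * B + C)
    return abs(r) ** 2

# Wavelengths where R stays high at both 0 and 70 degrees form the
# omnidirectional reflection band.
for lam in np.linspace(400e-9, 800e-9, 9):
    print(f"{lam * 1e9:5.0f} nm  R(0)={reflectance(lam):.2f}  "
          f"R(70, TE)={reflectance(lam, np.deg2rad(70)):.2f}")
```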
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
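As an illustration of the local-processing idea, a node might reduce each frame to a few numbers with a cheap change detector and radio only that summary. The feature set and threshold below are assumptions chosen for clarity, not the detectors studied in the paper:

```python
# A node-side visual cue: summarize inter-frame change instead of sending the image.
import numpy as np

def local_cue(prev_frame, frame, change_threshold=25):
    """Return a tiny feature vector describing what changed between two frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > change_threshold
    return {
        "changed_fraction": float(changed.mean()),
        "centroid": tuple(np.argwhere(changed).mean(axis=0)) if changed.any() else None,
        "mean_intensity": float(frame.mean()),
    }

# Transmitting this summary costs tens of bytes per frame, versus tens of
# kilobytes for the raw image, which is the energy saving argued for above.
```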
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
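A much-simplified stand-in for this kind of image-based occupancy sensing is sketched below using OpenCV background subtraction plus a foreground-area gate. Note that this stand-in is change-based, whereas IPOS itself classifies occupancy even without motion; the camera index and thresholds are assumptions:

```python
# Simplified image-based occupancy cue: background subtraction + area gate.
# Not the NREL implementation; thresholds and the video source are assumed.
import cv2

cap = cv2.VideoCapture(0)                        # assumed camera index
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = bg.apply(gray)
    foreground = cv2.countNonZero(mask) / mask.size
    occupied = foreground > 0.01                 # assumed area gate
    brightness = gray.mean()                     # crude illuminance proxy
    print(f"occupied={occupied}  foreground={foreground:.3f}  brightness={brightness:.0f}")
```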
Code of Federal Regulations, 2010 CFR
2010-01-01
... OMNIDIRECTIONAL CITIZENS BAND BASE STATION ANTENNAS Certification § 1204.11 General. Section 14(a) of the Consumer... products comply with the Safety Standard for Omnidirectional CB base Station Antennas (16 CFR part 1204... commerce to issue a certificate of compliance with the applicable standard and to base that certificate...
Improving Car Navigation with a Vision-Based System
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, K.; Lee, I.
2015-08-01
The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since the current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in blockage and weak areas of GPS signals. In this study, we propose a vision-oriented car navigation method based on sensor fusion with a GPS and in-vehicle sensors. We employed a single photo resection process to derive the position and attitude of the camera and thus those of the car. These image georeferencing results are combined with other sensory data under the sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m although GPS signals were not available at all during the entire test drive of 15 minutes. The proposed vision-based system can be effectively utilized for the low-cost but highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
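The fusion step described above can be pictured with a minimal constant-velocity Kalman filter that accepts position fixes from either GPS or image georeferencing. This is a linear sketch (the paper uses an extended Kalman filter with attitude states); the noise levels and time step are assumed values:

```python
# Minimal constant-velocity Kalman filter fusing position fixes from GPS or
# from vision-based georeferencing.  State: [x, y, vx, vy].
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)     # position measurement model
Q = np.diag([0.1, 0.1, 1.0, 1.0])                      # assumed process noise
x = np.zeros(4)
P = np.eye(4) * 100.0

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, sigma):
    """z: 2D position fix (GPS or vision); sigma: its assumed standard deviation."""
    global x, P
    R = np.eye(2) * sigma ** 2
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

predict()
update(np.array([12.3, -4.1]), sigma=5.0)   # e.g. a GPS fix; during an outage, a
                                            # vision fix would be used with a larger sigma
```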
Visual tracking strategies for intelligent vehicle highway systems
NASA Astrophysics Data System (ADS)
Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles
1995-01-01
The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.
On the use of orientation filters for 3D reconstruction in event-driven stereo vision
Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe
2014-01-01
The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
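The orientation cue can be illustrated with a small Gabor filter bank that labels the patch around an event with its dominant edge direction; two events would then be allowed to match only if their labels agree. Filter sizes and parameters below are illustrative assumptions:

```python
# Assign a dominant edge orientation to an image patch with a small Gabor bank.
import cv2
import numpy as np

ANGLES = np.deg2rad([0, 45, 90, 135])
BANK = [cv2.getGaborKernel((9, 9), sigma=2.0, theta=t, lambd=6.0, gamma=0.5, psi=0)
        for t in ANGLES]

def dominant_orientation(patch):
    """Index of the Gabor orientation with the strongest response on the patch."""
    responses = [np.abs(cv2.filter2D(patch.astype(np.float32), -1, k)).sum()
                 for k in BANK]
    return int(np.argmax(responses))

# Matching constraint: accept a candidate event pair only if
# dominant_orientation(patch_left) == dominant_orientation(patch_right).
```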
A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications.
Moeys, Diederik Paul; Corradi, Federico; Li, Chenghan; Bamford, Simeon A; Longinotti, Luca; Voigt, Fabian F; Berry, Stewart; Taverni, Gemma; Helmchen, Fritjof; Delbruck, Tobi
2018-02-01
Applications requiring detection of small visual contrast require high sensitivity. Event cameras can provide higher dynamic range (DR) and reduce data rate and latency, but most existing event cameras have limited sensitivity. This paper presents the results of a 180-nm Towerjazz CIS process vision sensor called SDAVIS192. It outputs temporal contrast dynamic vision sensor (DVS) events and conventional active pixel sensor frames. The SDAVIS192 improves on previous DAVIS sensors with higher sensitivity for temporal contrast. The temporal contrast thresholds can be set down to 1% for negative changes in logarithmic intensity (OFF events) and down to 3.5% for positive changes (ON events). The achievement is possible through the adoption of an in-pixel preamplification stage. This preamplifier reduces the effective intrascene DR of the sensor (70 dB for OFF and 50 dB for ON), but an automated operating region control allows up to at least 110-dB DR for OFF events. A second contribution of this paper is the development of characterization methodology for measuring DVS event detection thresholds by incorporating a measure of signal-to-noise ratio (SNR). At average SNR of 30 dB, the DVS temporal contrast threshold fixed pattern noise is measured to be 0.3%-0.8% temporal contrast. Results comparing monochrome and RGBW color filter array DVS events are presented. The higher sensitivity of SDAVIS192 makes this sensor potentially useful for calcium imaging, as shown in a recording from cultured neurons expressing the calcium-sensitive green fluorescent protein GCaMP6f.
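The thresholds quoted above refer to the standard temporal-contrast event model: a pixel fires an ON (OFF) event when its log intensity has risen (fallen) by more than the corresponding threshold since its last event. An idealized frame-based emulation of that model (a software sketch, not the sensor's analog pixel circuit) is:

```python
# Idealized DVS event generation from a stack of frames, with asymmetric
# ON/OFF thresholds mirroring the 3.5% / 1% figures in the abstract.
import numpy as np

def dvs_events(frames, on_thresh=np.log(1.035), off_thresh=np.log(1.01)):
    ref = np.log(frames[0].astype(np.float64) + 1e-3)   # per-pixel reference level
    events = []                                          # (t, y, x, polarity)
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame.astype(np.float64) + 1e-3)
        on = log_i - ref > on_thresh
        off = ref - log_i > off_thresh
        for y, x in np.argwhere(on):
            events.append((t, y, x, +1))
        for y, x in np.argwhere(off):
            events.append((t, y, x, -1))
        ref[on] = log_i[on]                              # reset reference after an event
        ref[off] = log_i[off]
    return events
```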
Helicopter synthetic vision based DVE processing for all phases of flight
NASA Astrophysics Data System (ADS)
O'Brien, Patrick; Baughman, David C.; Wallace, H. Bruce
2013-05-01
Helicopters experience nearly 10 times the accident rate of fixed wing platforms, due largely to the nature of their mission, frequently requiring operations in close proximity to terrain and obstacles. Degraded visual environments (DVE), including brownout or whiteout conditions generated by rotor downwash, result in loss of situational awareness during the most critical phase of flight, and contribute significantly to this accident rate. Considerable research into sensor and system solutions to address DVE has been conducted in recent years; however, the promise of a Synthetic Vision Avionics Backbone (SVAB) extends far beyond DVE, enabling improved situational awareness and mission effectiveness during all phases of flight and in all visibility conditions. The SVAB fuses sensor information with high resolution terrain databases and renders it in synthetic vision format for display to the crew. Honeywell was awarded the DARPA MFRF Technical Area 2 contract in 2011 to develop an SVAB. This work includes creation of a common sensor interface, development of SVAB hardware and software, and flight demonstration on a Black Hawk helicopter. A "sensor agnostic" SVAB allows platform and mission diversity with an efficient upgrade path, even while research continues into new and improved sensors for use in DVE conditions. Through careful integration of multiple sources of information such as sensors, terrain and obstacle databases, mission planning information, and aircraft state information, operations in all conditions and phases of flight can be enhanced. This paper describes the SVAB and its functionality resulting from the DARPA contract as well as Honeywell R&D investment.
High dynamic range vision sensor for automotive applications
NASA Astrophysics Data System (ADS)
Grenet, Eric; Gyger, Steve; Heim, Pascal; Heitger, Friedrich; Kaess, Francois; Nussbaum, Pascal; Ruedi, Pierre-Francois
2005-02-01
A 128 x 128 pixel, 120 dB vision sensor extracting at the pixel level the contrast magnitude and direction of local image features is used to implement a lane tracking system. The contrast representation (relative change of illumination) delivered by the sensor is independent of the illumination level. Together with the high dynamic range of the sensor, it ensures a very stable image feature representation even with high spatial and temporal inhomogeneities of the illumination. Image features are dispatched off chip according to their contrast magnitude, prioritizing features with high contrast. This drastically reduces the amount of data transmitted out of the chip and hence the processing power required for subsequent processing stages. To compensate for the low fill factor (9%) of the sensor, micro-lenses have been deposited which increase the sensitivity by a factor of 5, corresponding to an equivalent of 2000 ASA. An algorithm exploiting the contrast representation output by the vision sensor has been developed to estimate the position of a vehicle relative to the road markings. The algorithm first detects the road markings based on the contrast direction map. Then, it performs quadratic fits on selected kernels of 3 by 3 pixels to achieve sub-pixel accuracy on the estimation of the lane marking positions. The resulting precision on the estimation of the vehicle lateral position is 1 cm. The algorithm performs efficiently under a wide variety of environmental conditions, including night and rainy conditions.
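The sub-pixel refinement step can be illustrated in one dimension: fit a parabola through a peak sample and its two neighbours and take the vertex as the marking position. This is a generic sketch of quadratic-fit sub-pixel localization, not the production algorithm:

```python
# Sub-pixel peak localization by quadratic fit over three samples.
import numpy as np

def subpixel_peak(response, i):
    """response: 1D contrast profile; i: integer index of a local maximum."""
    y0, y1, y2 = response[i - 1], response[i], response[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)
    offset = 0.5 * (y0 - y2) / denom     # vertex of the fitted parabola, in [-0.5, 0.5]
    return i + offset

profile = np.array([0.1, 0.3, 0.9, 1.0, 0.7, 0.2])
i = int(np.argmax(profile))
print(subpixel_peak(profile, i))          # ~2.75 instead of the integer index 3
```

Applying the same fit along both axes of a 3 by 3 neighbourhood yields the two-dimensional sub-pixel estimate used for the lane markings.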
A QUANTITATIVE COMPARISON OF LUNAR ORBITAL NEUTRON DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eke, V. R.; Teodoro, L. F. A.; Lawrence, D. J.
2012-03-01
Data from the Lunar Exploration Neutron Detector (LEND) Collimated Sensors for Epithermal Neutrons (CSETN) are used in conjunction with a model based on results from the Lunar Prospector (LP) mission to quantify the extent of the background in the LEND CSETN. A simple likelihood analysis implies that at least 90% of the lunar component of the LEND CSETN flux results from high-energy epithermal (HEE) neutrons passing through the walls of the collimator. Thus, the effective FWHM of the LEND CSETN field of view is comparable to that of the omni-directional LP Neutron Spectrometer. The resulting map of HEE neutrons offers the opportunity to probe the hydrogen abundance at low latitudes and to provide constraints on the distribution of lunar water.
Behavioral Mapless Navigation Using Rings
NASA Technical Reports Server (NTRS)
Monroe, Randall P.; Miller, Samuel A.; Bradley, Arthur T.
2012-01-01
This paper presents work on the development and implementation of a novel approach to robotic navigation. In this system, map-building and localization for obstacle avoidance are discarded in favor of moment-by-moment behavioral processing of the sonar sensor data. To accomplish this, we developed a network of behaviors that communicate through the passing of rings, data structures that are similar in form to the sonar data itself and express the decisions of each behavior. Through the use of these rings, behaviors can moderate each other, conflicting impulses can be mediated, and designers can easily connect modules to create complex emergent navigational techniques. We discuss the development of a number of these modules and their successful use as a navigation system in the Trinity omnidirectional robot.
Science Instruments and Sensors Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Barney, Rich; Zuber, Maria
2005-01-01
The Science Instruments and Sensors roadmaps include capabilities associated with the collection, detection, conversion, and processing of scientific data required to answer compelling science questions driven by the Vision for Space Exploration and The New Age of Exploration (NASA's Direction for 2005 & Beyond). Viewgraphs on these instruments and sensors are presented.
Intraluminal laser speckle rheology using an omni-directional viewing catheter
Wang, Jing; Hosoda, Masaki; Tshikudi, Diane M.; Hajjarian, Zeinab; Nadkarni, Seemantini K.
2016-01-01
A number of disease conditions in luminal organs are associated with alterations in tissue mechanical properties. Here, we report a new omni-directional viewing Laser Speckle Rheology (LSR) catheter for mapping the mechanical properties of luminal organs without the need for rotational motion. The LSR catheter incorporates multiple illumination fibers, an optical fiber bundle and a multi-faceted mirror to permit omni-directional viewing of the luminal wall. By retracting the catheter using a motor-drive assembly, cylindrical maps of tissue mechanical properties are reconstructed. Evaluation conducted in a test phantom with circumferentially-varying mechanical properties demonstrates the capability of the LSR catheter for the accurate mechanical assessment of luminal organs. PMID:28101407
Recent advances in the development and transfer of machine vision technologies for space
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Pendleton, Thomas
1991-01-01
Recent work concerned with real-time machine vision is briefly reviewed. This work includes methodologies and techniques for optimal illumination, shape-from-shading of general (non-Lambertian) 3D surfaces, laser vision devices and technology, high level vision, sensor fusion, real-time computing, artificial neural network design and use, and motion estimation. Two new methods that are currently being developed for object recognition in clutter and for 3D attitude tracking based on line correspondence are discussed.
Vision servo of industrial robot: A review
NASA Astrophysics Data System (ADS)
Zhang, Yujin
2018-04-01
Robot technology has been applied to many areas of production and daily life. With the continuous development of robot applications, the requirements placed on robots are also increasing. In order to give robots better perception, vision sensors have been widely used in industrial robots. In this paper, application directions of industrial robots are reviewed. The development, classification and application of robot vision servo technology are discussed, and the development prospects of industrial robot vision servo technology are presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-29
... document refers to a system comprised of a head-up display, imaging sensor(s), and avionics interfaces that display the sensor imagery on the HUD, and which overlay that imagery with alpha-numeric and symbolic... the sensor imagery, with or without other flight information, on a head-down display. For clarity, the...
1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.
Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi
2015-04-01
Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
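The global-motion computation named above can be pictured with a gradient-based least-squares shift estimate. This is a generic stand-in that captures the idea of fitting a single 2D translation to the whole frame; it is not the exact image interpolation algorithm (I2A) implemented on the DSP:

```python
# Global 2D translation between two frames by least squares on image gradients.
import numpy as np

def global_motion(f0, f1):
    f0 = f0.astype(np.float64)
    f1 = f1.astype(np.float64)
    ix = np.gradient(f0, axis=1)          # spatial gradients of the reference frame
    iy = np.gradient(f0, axis=0)
    it = f1 - f0                          # temporal difference
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    (u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
    return u, v                           # estimated shift in pixels per frame

# e.g. u, v = global_motion(frame_k, frame_k_plus_1), evaluated at the 1 kHz sample rate.
```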
Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J
2005-01-01
We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.
Study of providing omnidirectional vibration isolation to entire space shuttle payload packages
NASA Technical Reports Server (NTRS)
Chang, C. S.; Robinson, G. D.; Weber, D. E.
1974-01-01
Techniques to provide omnidirectional vibration isolation for a space shuttle payload package were investigated via a reduced-scale model. Development, design, fabrication, assembly and test evaluation of a 0.125-scale isolation model are described. Final drawings for fabricated mechanical components are identified, and prints of all drawings are included.
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
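The piecewise-linear idea can be sketched as follows: for each 2D segment defined by matched feature points, a single linear (affine) map is estimated by least squares and applied only to that segment. The data layout and helper names are assumptions for illustration:

```python
# Per-segment linear (affine) registration: fit one map per feature-defined segment.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) matched feature points of one segment, N >= 3."""
    a = np.hstack([src_pts, np.ones((len(src_pts), 1))])   # rows [x, y, 1]
    m, *_ = np.linalg.lstsq(a, dst_pts, rcond=None)         # (3, 2) affine map
    return m

def warp_points(m, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ m

# For each segment: m = fit_affine(src, dst); stitched = warp_points(m, segment_pixels).
# Using more feature points (smaller segments) raises accuracy; using fewer lowers
# the computation time, which is the trade-off discussed above.
```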
Zhong, Sihua; Wang, Wenjie; Tan, Miao; Zhuang, Yufeng; Shen, Wenzhong
2017-11-01
Large-scale (156 mm × 156 mm) quasi-omnidirectional solar cells are successfully realized and are characterized by maintaining high cell performance over a broad range of incident angles (θ), by employing Si nanopyramids (SiNPs) as the surface texture. SiNPs are produced by the proposed metal-assisted alkaline etching method, an all-solution-processed approach that is both simple and cost-effective. Interestingly, compared to the conventional Si micropyramids (SiMPs)-textured solar cells, the SiNPs-textured solar cells possess lower carrier recombination and thus superior electrical performance, showing notable distinctions from other Si nanostructure-textured solar cells. Furthermore, SiNPs-textured solar cells show very little drop in quantum efficiency with increasing θ, demonstrating the quasi-omnidirectional characteristic. As an overall result, both the SiNPs-textured homojunction and heterojunction solar cells possess higher daily electric energy production, with a maximum relative enhancement approaching 2.5%, when compared to their SiMPs-textured counterparts. The quasi-omnidirectional solar cell opens a new opportunity for photovoltaics to produce more electric energy at low cost.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Assessing Impact of Dual Sensor Enhanced Flight Vision Systems on Departure Performance
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.
2016-01-01
Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS) may serve as game-changing technologies to meet the challenges of the Next Generation Air Transportation System and the envisioned Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety and operational tempos of current-day Visual Flight Rules operations irrespective of the weather and visibility conditions. One significant obstacle lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility and pilot workload of conducting departures and approaches on runways without centerline lighting in visibility as low as 300 feet runway visual range (RVR) by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance and workload was assessed. Using EFVS concepts during 300 RVR terminal operations on runways without centerline lighting appears feasible as all EFVS concepts had equivalent (or better) departure performance and landing rollout performance, without any workload penalty, than those flown with a conventional HUD to runways having centerline lighting. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
Current state of the art of vision based SLAM
NASA Astrophysics Data System (ADS)
Muhammad, Naveed; Fofi, David; Ainouz, Samia
2009-02-01
The ability of a robot to localise itself and simultaneously build a map of its environment (Simultaneous Localisation and Mapping or SLAM) is a fundamental characteristic required for autonomous operation of the robot. Vision Sensors are very attractive for application in SLAM because of their rich sensory output and cost effectiveness. Different issues are involved in the problem of vision based SLAM and many different approaches exist in order to solve these issues. This paper gives a classification of state-of-the-art vision based SLAM techniques in terms of (i) imaging systems used for performing SLAM which include single cameras, stereo pairs, multiple camera rigs and catadioptric sensors, (ii) features extracted from the environment in order to perform SLAM which include point features and line/edge features, (iii) initialisation of landmarks which can either be delayed or undelayed, (iv) SLAM techniques used which include Extended Kalman Filtering, Particle Filtering, biologically inspired techniques like RatSLAM, and other techniques like Local Bundle Adjustment, and (v) use of wheel odometry information. The paper also presents the implementation and analysis of stereo pair based EKF SLAM for synthetic data. Results prove the technique to work successfully in the presence of considerable amounts of sensor noise. We believe that state of the art presented in the paper can serve as a basis for future research in the area of vision based SLAM. It will permit further research in the area to be carried out in an efficient and application specific way.
Automating the Processing of Earth Observation Data
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wan-Lin; Nemani, Ramakrishna; Votava, Petr
2003-01-01
NASA's vision for Earth science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we are developing a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products.
Comparison of a piezoceramic transducer and an EMAT for the omnidirectional transduction of SH0
NASA Astrophysics Data System (ADS)
Gauthier, Baptiste; Thon, Aurelien; Belanger, Pierre
2018-04-01
The fundamental shear horizontal ultrasonic guided wave mode has unique properties for non-destructive testing as well as structural health monitoring applications. It is the only non-dispersive guided wave mode and it is not attenuated by fluid loading. Moreover, shear horizontal waves do not convert to other guided wave modes when interacting with a boundary or defect parallel to the direction of polarization. In many applications, omnidirectional transduction is preferred so as to maximize the inspection coverage. The omnidirectional transduction of the fundamental shear horizontal ultrasonic guided wave mode is, however, challenging because a torsional surface stress is required. This paper compares the performances of two concepts recently proposed in the literature: (1) a piezoceramic transducer and (2) an electromagnetic acoustic transducer (EMAT). The piezoceramic transducer uses six trapezoidal shear piezoelectric elements arranged on a discretized circle. The electromagnetic acoustic transducer concept consists of a pair of ring-type permanent magnets and a coil wrapped in the radial direction. In this paper, both transducers were designed to have a 150 kHz centre frequency. Experiments were performed on a thin aluminum plate with both transducers. A 3D laser Doppler vibrometer was used to verify the omnidirectional nature, the mode selectivity and the frequency response of the transducers. The EMAT has undeniable advantages in terms of omnidirectionality and mode selectivity. However, it has a larger footprint than the piezoceramic concept and is only suitable for the inspection of metallic structures.
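The non-dispersive property exploited by both concepts follows directly from the shear horizontal plate-mode dispersion relation. For a plate of thickness d with bulk shear velocity c_T, the phase velocity of the SH_n mode is (standard guided-wave result, quoted here for context):

```latex
c_{p,n}(f) = \frac{c_T}{\sqrt{1 - \left( \dfrac{n\, c_T}{2 f d} \right)^{2}}},
\qquad n = 0, 1, 2, \dots
```

For n = 0 this reduces to c_{p,0} = c_T at every frequency, so the SH0 mode travels without dispersion, which is why single-mode, omnidirectional SH0 excitation is so attractive for the inspections described above.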
Valente, Michael; Mispagel, Karen M; Tchorz, Juergen; Fabry, David
2006-06-01
Differences in performance between omnidirectional and directional microphones were evaluated between two loudspeaker conditions (single loudspeaker at 180 degrees; diffuse using eight loudspeakers set 45 degrees apart) and two types of noise (steady-state HINT noise; R-Space restaurant noise). Twenty-five participants were fit bilaterally with Phonak Perseo hearing aids using the manufacturer's recommended procedure. After wearing the hearing aids for one week, the parameters were fine-tuned based on subjective comments. Four weeks later, differences in performance between omnidirectional and directional microphones were assessed using HINT sentences presented at 0 degrees with the two types of background noise held constant at 65 dBA and under the two loudspeaker conditions. Results revealed significant differences in Reception Thresholds for Sentences (RTS in dB) where directional performance was significantly better than omnidirectional. Performance in the 180 degrees condition was significantly better than the diffuse condition, and performance was significantly better using the HINT noise in comparison to the R-Space restaurant noise. In addition, results revealed that within each loudspeaker array, performance was significantly better for the directional microphone. Looking across loudspeaker arrays, however, significant differences were not present in omnidirectional performance, but directional performance was significantly better in the 180 degrees condition when compared to the diffuse condition. These findings are discussed in terms of results reported in the past and counseling patients on the potential advantages of directional microphones as the listening situation and type of noise changes.
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
Assessing Dual Sensor Enhanced Flight Vision Systems to Enable Equivalent Visual Operations
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Etherington, Timothy J.; Severance, Kurt; Bailey, Randall E.; Williams, Steven P.; Harrison, Stephanie J.
2016-01-01
Flight deck-based vision system technologies, such as Synthetic Vision (SV) and Enhanced Flight Vision Systems (EFVS), may serve as a revolutionary crew/vehicle interface enabling technologies to meet the challenges of the Next Generation Air Transportation System Equivalent Visual Operations (EVO) concept - that is, the ability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. One significant challenge lies in the definition of required equipage on the aircraft and on the airport to enable the EVO concept objective. A motion-base simulator experiment was conducted to evaluate the operational feasibility, pilot workload and pilot acceptability of conducting straight-in instrument approaches with published vertical guidance to landing, touchdown, and rollout to a safe taxi speed in visibility as low as 300 ft runway visual range by use of onboard vision system technologies on a Head-Up Display (HUD) without need or reliance on natural vision. Twelve crews evaluated two methods of combining dual sensor (millimeter wave radar and forward looking infrared) EFVS imagery on pilot-flying and pilot-monitoring HUDs as they made approaches to runways with and without touchdown zone and centerline lights. In addition, the impact of adding SV to the dual sensor EFVS imagery on crew flight performance, workload, and situation awareness during extremely low visibility approach and landing operations was assessed. Results indicate that all EFVS concepts flown resulted in excellent approach path tracking and touchdown performance without any workload penalty. Adding SV imagery to EFVS concepts provided situation awareness improvements but no discernible improvements in flight path maintenance.
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second of computing power, which is a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Design of a Vision-Based Sensor for Autonomous Pig House Cleaning
NASA Astrophysics Data System (ADS)
Braithwaite, Ian; Blanke, Mogens; Zhang, Guo-Qiang; Carstensen, Jens Michael
2005-12-01
Current pig house cleaning procedures are hazardous to the health of farm workers, and yet necessary if the spread of disease between batches of animals is to be satisfactorily controlled. Autonomous cleaning using robot technology offers salient benefits. This paper addresses the feasibility of designing a vision-based system to locate dirty areas and subsequently direct a cleaning robot to remove dirt. Novel results include the characterisation of the spectral properties of real surfaces and dirt in a pig house and the design of illumination to obtain discrimination of clean from dirty areas with a low probability of misclassification. A Bayesian discriminator is shown to be efficient in this context and implementation of a prototype tool demonstrates the feasibility of designing a low-cost vision-based sensor for autonomous cleaning.
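The Bayesian discriminator named above can be sketched as a two-class Gaussian classifier over per-pixel spectral features: each pixel is assigned to the class (clean or dirty) with the larger class-conditional likelihood weighted by its prior. Feature choice, training data and the diagonal regularization are illustrative assumptions:

```python
# Two-class Gaussian Bayes discriminator for per-pixel clean/dirty classification.
import numpy as np

class GaussianBayes:
    def fit(self, features, labels):
        self.classes = np.unique(labels)
        self.params = {}
        for c in self.classes:
            fc = features[labels == c]
            cov = np.cov(fc.T) + 1e-6 * np.eye(features.shape[1])  # regularized covariance
            self.params[c] = (fc.mean(axis=0), cov, len(fc) / len(features))
        return self

    def log_posterior(self, x, c):
        mu, cov, prior = self.params[c]
        d = x - mu
        return (-0.5 * d @ np.linalg.solve(cov, d)
                - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior))

    def predict(self, features):
        return np.array([max(self.classes, key=lambda c: self.log_posterior(x, c))
                         for x in features])

# features: per-pixel reflectance measurements under the designed illumination,
# shape (N, D); labels: 0 = clean surface, 1 = dirty area.
```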
Dissolvable tattoo sensors: from science fiction to a viable technology
NASA Astrophysics Data System (ADS)
Cheng, Huanyu; Yi, Ning
2017-01-01
Early surrealistic painting and science fiction movies have envisioned dissolvable tattoo electronic devices. In this paper, we will review the recent advances that transform that vision into a viable technology, with extended capabilities even beyond the early vision. Specifically, we focus on the discussion of a stretchable design for tattoo sensors and degradable materials for dissolvable sensors, in the form of inorganic devices with a performance comparable to modern electronics. Integration of these two technologies as well as the future developments of bio-integrated devices is also discussed. Many of the appealing ideas behind developments of these devices are drawn from nature and especially biological systems. Thus, bio-inspiration is believed to continue playing a key role in future devices for bio-integration and beyond.
Three-dimensional particle tracking velocimetry using dynamic vision sensors
NASA Astrophysics Data System (ADS)
Borer, D.; Delbruck, T.; Rösgen, T.
2017-12-01
A fast-flow visualization method is presented based on tracking neutrally buoyant soap bubbles with a set of neuromorphic cameras. The "dynamic vision sensors" register only the changes in brightness with very low latency, capturing fast processes at a low data rate. The data consist of a stream of asynchronous events, each encoding the corresponding pixel position, the time instant of the event and the sign of the change in logarithmic intensity. The work uses three such synchronized cameras to perform 3D particle tracking in a medium sized wind tunnel. The data analysis relies on Kalman filters to associate the asynchronous events with individual tracers and to reconstruct the three-dimensional path and velocity based on calibrated sensor information.
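The association of asynchronous events with individual tracers via Kalman filtering can be illustrated with a small sketch. The event layout (x, y, t), the constant-velocity motion model, and the gating radius below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: associating DVS events with a tracer using a constant-velocity
# Kalman filter in pixel coordinates. Event layout, motion model and the
# association gate are illustrative assumptions.
import numpy as np

class TracerKF:
    def __init__(self, x, y, t):
        self.s = np.array([x, y, 0.0, 0.0])    # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0              # state covariance
        self.R = np.eye(2) * 1.0               # event position noise (px^2)
        self.q = 50.0                          # process noise intensity
        self.t = t

    def predict(self, t):
        dt = t - self.t
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        self.s = F @ self.s
        self.P = F @ self.P @ F.T + self.q * dt * np.eye(4)
        self.t = t

    def update(self, x, y):
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        z = np.array([x, y], dtype=float)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.s = self.s + K @ (z - H @ self.s)
        self.P = (np.eye(4) - K @ H) @ self.P

# Feed a stream of events; accept only events inside a gate around the prediction.
events = [(120, 80, 0.000), (121, 80, 0.001), (123, 81, 0.002), (300, 10, 0.002)]
track = TracerKF(*events[0])
for x, y, t in events[1:]:
    track.predict(t)
    if np.hypot(x - track.s[0], y - track.s[1]) < 5.0:   # association gate (px)
        track.update(x, y)
print("estimated position/velocity:", track.s)
```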
Real Time Target Tracking Using Dedicated Vision Hardware
NASA Astrophysics Data System (ADS)
Kambies, Keith; Walsh, Peter
1988-03-01
This paper describes a real-time vision target tracking system developed by Adaptive Automation, Inc. and delivered to NASA's Launch Equipment Test Facility, Kennedy Space Center, Florida. The target tracking system is part of the Robotic Application Development Laboratory (RADL), which was designed to provide NASA with a general purpose robotic research and development test bed for the integration of robot and sensor systems. One of the first RADL system applications is the closing of a position control loop around a six-axis articulated arm industrial robot using a camera and dedicated vision processor as the input sensor so that the robot can locate and track a moving target. The vision system is inside the loop closure of the robot tracking system; therefore, tight throughput and latency constraints are imposed on the vision system that can only be met with specialized hardware and a concurrent approach to the processing algorithms. State-of-the-art VME-based vision boards capable of processing images at frame rate were used with a real-time, multi-tasking operating system to achieve the performance required. This paper describes the high speed vision based tracking task, the system throughput requirements, the use of dedicated vision hardware architecture, and the implementation design details. Important to the overall philosophy of the complete system was the hierarchical and modular approach applied to all aspects of the system, hardware and software alike, so there is special emphasis placed on this topic in the paper.
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations were presented at a three day workshop focused on sensor modeling and simulation, and image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
Global Test Range: Toward Airborne Sensor Webs
NASA Technical Reports Server (NTRS)
Mace, Thomas H.; Freudinger, Larry; DelFrate, John H.
2008-01-01
This viewgraph presentation reviews the planned global sensor network that will monitor the Earth's climate and resources using airborne sensor systems. The vision is an intelligent, affordable Earth Observation System. Global Test Range is a lab developing trustworthy services for airborne instruments - a specialized Internet Service Provider. There is discussion of several current and planned missions.
Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions
Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo
2010-01-01
The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639
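The statistical validation gates mentioned above can be illustrated with a minimal Mahalanobis-distance gate; the innovation covariance and the 95% chi-square threshold below are generic textbook choices, not values taken from the paper.

```python
# Sketch: chi-square validation gate for accepting a pedestrian measurement.
# Covariance values and the 95% gate are generic illustrative choices.
import numpy as np
from scipy.stats import chi2

def in_gate(z, z_pred, S, prob=0.95):
    """Accept measurement z if its Mahalanobis distance to the prediction
    z_pred (with innovation covariance S) lies within the chi-square gate."""
    nu = z - z_pred
    d2 = nu @ np.linalg.inv(S) @ nu
    return d2 <= chi2.ppf(prob, df=len(z))

z_pred = np.array([12.3, 1.8])            # predicted pedestrian position (m)
S = np.diag([0.25, 0.10])                 # innovation covariance (m^2)
print(in_gate(np.array([12.6, 1.9]), z_pred, S))   # inside the gate -> True
print(in_gate(np.array([15.0, 3.0]), z_pred, S))   # outside the gate -> False
```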
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates the blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.
Autonomous Kinematic Calibration of the Robot Manipulator with a Linear Laser-Vision Sensor
NASA Astrophysics Data System (ADS)
Kang, Hee-Jun; Jeong, Jeong-Woo; Shin, Sung-Weon; Suh, Young-Soo; Ro, Young-Schick
This paper presents a new autonomous kinematic calibration technique using a laser-vision sensor called "Perceptron TriCam Contour". Because the sensor measures by capturing the image of a projected laser line on the surface of the object, we set up a long, straight line of very fine string inside the robot workspace and then let the sensor mounted on the robot measure the intersection point of the string and the projected laser line. The data collected by changing the robot configuration and measuring the intersection points are constrained to lie on a single straight line, so that the closed-loop calibration method can be applied. The resulting calibration method is simple and accurate and is also suitable for on-site calibration in an industrial environment. The method is implemented on a Hyundai VORG-35 robot to demonstrate its effectiveness.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and middle-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and the corresponding tool chain for this computational image sensor are also developed.
Proposal of Screening Method of Sleep Disordered Breathing Using Fiber Grating Vision Sensor
NASA Astrophysics Data System (ADS)
Aoki, Hirooki; Nakamura, Hidetoshi; Nakajima, Masato
Every conventional respiration monitoring technique requires at least one sensor to be attached to the body of the subject during measurement, thereby imposing a sense of restraint that results in aversion to measurements lasting over consecutive days. To solve this problem, we developed a respiration monitoring system for sleepers that uses a fiber-grating vision sensor, a type of active image sensor, to achieve non-contact respiration monitoring. In this paper, we verify the effectiveness of the system and propose a screening method for sleep-disordered breathing. It is shown that our system can measure respiration equivalently to a thermistor and an accelerograph. Furthermore, the respiratory condition of sleepers can be grasped at a glance with our screening method, which appears useful for supporting the screening of sleep-disordered breathing.
Máthé, Koppány; Buşoniu, Lucian
2015-01-01
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117
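A rough sketch of the region-search and shadow-based thresholding idea follows: a radar detection is projected into the image, and a local region is searched for dark "shadow" pixels. The projection matrix, region size, and use of Otsu's threshold are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch: project a radar detection into the image, search a local region,
# and threshold dark "shadow" pixels beneath a potential obstacle.
# The projection, ROI size and Otsu thresholding are illustrative assumptions.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera assumed at radar origin

def project_radar_point(X, P):
    """Project a 3-D radar detection (camera frame, metres) to pixel coordinates."""
    u, v, w = P @ np.append(X, 1.0)
    return int(round(u / w)), int(round(v / w))

def shadow_candidate(gray, u, v, half=40, dark_fraction=0.15):
    """Search the ROI around (u, v) for a dark 'shadow' region via Otsu thresholding."""
    h, w = gray.shape
    roi = gray[max(0, v - half):min(h, v + half), max(0, u - half):min(w, u + half)]
    if roi.size == 0:
        return False
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask.mean() / 255.0 > dark_fraction

gray = np.full((480, 640), 180, np.uint8)       # synthetic road image
gray[230:260, 300:360] = 40                     # dark patch standing in for a shadow
u, v = project_radar_point(np.array([0.0, 0.0, 20.0]), P)
print(u, v, shadow_candidate(gray, u, v))
```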
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tai, A; Currey, A; Li, X Allen
2016-06-15
Purpose: Radiation therapy (RT) of left sided breast cancers with deep-inspiratory breathhold (DIBH) can reduce the dose to heart. The purpose of this study is to develop and test a new laser-based tool to improve ease of RT delivery using DIBH. Methods: A laser sensor together with breathing monitor device (Anzai Inc., Japan) was used to record the surface breathing motion of phantom/volunteers. The device projects a laser beam to the chestwall and the reflected light creates a focal spot on a light detecting element. The position change of the focal spot correlates with the patient's breathing motion and is measured through the change of current in the light detecting element. The signal is amplified and displayed on a computer screen, which is used to trigger radiation gating. The laser sensor can be easily mounted to the simulation/treatment couch with a fixing plate and a magnet base, and has a sensitivity range of 10 to 40 cm from the patient. The correlation of breathing signals detected by the laser sensor and VisionRT is also investigated. Results: It is found that the measured breathing signal from the laser sensor is stable and reproducible and has no noticeable delay. It correlates well with the VisionRT surface imaging system. The DIBH reference level does not change with movement of the couch because the laser sensor and couch move together. Conclusion: The Anzai laser sensor provides a cost-effective way to improve beam gating with DIBH for treating left breast cancer. It can be used alone or together with VisionRT to determine the correct DIBH level during the radiation treatment of left breast cancer with DIBH.
Autonomous omnidirectional spacecraft antenna system
NASA Technical Reports Server (NTRS)
Taylor, T. H.
1983-01-01
The development of a low gain Electronically Switchable Spherical Array Antenna is discussed. This antenna provides roughly 7 dBic gain for receive/transmit operation between user satellites and the Tracking and Data Relay Satellite System. When used as a pair, the antennas provide spherical coverage. The antenna was tested in its primary operating modes: directed beam, retrodirective, and omnidirectional.
NASA Astrophysics Data System (ADS)
Zhang, Wenzeng; Chen, Nian; Wang, Bin; Cao, Yipeng
2005-01-01
The rocket engine is a core component of aerospace transportation and propulsion systems, and its research and development are very important in national defense, aviation, and aerospace. A novel vision sensor is developed that can be used for error detection in arc-length control and seam tracking in precise pulse TIG welding of the extending part of the rocket engine jet tube. The vision sensor has many advantages, such as high-quality imaging, compactness, and multiple functions. The optics design, mechanism design, and circuit design of the vision sensor are described in detail. Utilizing the mirror image of the tungsten electrode in the weld pool, a novel method is proposed to detect the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam from a single weld image. A calculation model for the method is derived from the geometric relation between the tungsten electrode, the weld pool, the mirror image of the tungsten electrode in the weld pool, and the joint seam. New methodologies are given to detect the arc length and seam-tracking error. Through analysis of the experimental results, a systematic error correction method based on a linear function is developed to improve the detection precision of the arc length and seam-tracking error. Experimental results show that the final precision of the system reaches 0.1 mm in detecting the arc length and the seam-tracking error of the tungsten electrode relative to the center line of the joint seam.
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet aviation's and machinery manufacturing's need for pose measurement with high precision, fast speed, and wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses, and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinate data of the target point, and the two measurements together restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariance, a polynomial equation is established and solved by least-squares fitting. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters for calibration. An array of feature points for calibration is built by placing a planar target at several different positions. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, the image resolution is 5185×5117 pixels, the displacement measurement error is less than 0.1 mm, and the rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed, and wide measurement range.
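A minimal sketch of the underlying idea follows: two orthogonal one-dimensional linear-CCD readings are combined into a two-dimensional image coordinate, and each axis is corrected with a polynomial distortion model fitted by least squares. The polynomial order and synthetic data are illustrative, not the paper's calibration.

```python
# Sketch: restore a 2-D image coordinate from two orthogonal 1-D linear-CCD
# readings and correct each axis with a polynomial distortion model fitted by
# least squares. Polynomial order and synthetic data are illustrative only.
import numpy as np

def restore_2d(u_horizontal, v_vertical):
    """The horizontal CCD gives the x coordinate, the vertical CCD the y."""
    return np.array([u_horizontal, v_vertical])

def fit_distortion(measured, ideal, order=3):
    """Fit a 1-D polynomial mapping distorted coordinates to ideal ones."""
    return np.polyfit(measured, ideal, order)

# Synthetic calibration data for one axis: a mild cubic distortion.
ideal = np.linspace(-1.0, 1.0, 21)
measured = ideal + 0.05 * ideal**3
coeffs = fit_distortion(measured, ideal)

# Correct new readings on each axis and assemble the 2-D coordinate.
u_corr = np.polyval(coeffs, 0.52)
v_corr = np.polyval(coeffs, -0.31)      # same idea for the orthogonal axis
print(restore_2d(u_corr, v_corr))
```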
Yang, Wei; You, Kaiming; Li, Wei; Kim, Young-il
2017-01-01
This paper presents a vehicle autonomous localization method in local area of coal mine tunnel based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on UPC-A code. The global coordinates of the upper left inner corner point of the feature frame of each barcode tag deployed in the tunnel are uniquely represented by the barcode. Two on-board vision sensors are used to recognize each pair of barcode tags on both sides of the tunnel walls. The distance between the upper left inner corner point of the feature frame of each barcode tag and the vehicle center point can be determined by using a visual distance projection model. The on-board ultrasonic sensors are used to measure the distance from the vehicle center point to the left side of the tunnel walls. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel’s global coordinate system can be calculated. Experiments on a straight corridor and an underground tunnel have shown that the proposed vehicle autonomous localization method is not only able to quickly recognize the barcode tags affixed to the tunnel walls, but also has relatively small average localization errors in the vehicle center point’s plane and vertical coordinates to meet autonomous unmanned vehicle positioning requirements in local area of coal mine tunnel. PMID:28141829
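The localization geometry described above can be sketched roughly as follows: the barcode encodes the tag corner's global coordinates, vision gives the slant distance from that corner to the vehicle center, and the ultrasonic sensor gives the lateral distance to the left wall. The assumptions of a straight tunnel along the global x axis, a tag flush with the left wall, and a known tag height are mine for illustration, not the paper's exact model.

```python
# Sketch of the localization geometry: wall-mounted tag coordinates (from the
# barcode), slant distance from vision, and lateral distance from ultrasound.
# Straight tunnel along x, tag flush with the left wall, known tag height:
# all illustrative assumptions.
import math

def vehicle_center(tag_xyz, slant_dist, lateral_dist, tag_height):
    """Recover (x, y, z) of the vehicle centre in the tunnel frame."""
    x_t, y_t, z_t = tag_xyz
    # Horizontal range from tag corner to vehicle centre (remove height component).
    horiz = math.sqrt(max(slant_dist**2 - tag_height**2, 0.0))
    dy = lateral_dist                               # offset from the left wall
    dx = math.sqrt(max(horiz**2 - dy**2, 0.0))      # advance along the tunnel axis
    return (x_t + dx, y_t + dy, z_t - tag_height)   # sign of dx depends on geometry

tag = (125.40, 0.0, 2.10)      # hypothetical decoded tag coordinates (m)
print(vehicle_center(tag, slant_dist=3.2, lateral_dist=1.5, tag_height=1.3))
```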
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to collision avoidance on a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and a control algorithm was designed to manipulate the QUAV's response based on the calculated depth. Attitude and position controllers were designed using the non-linear model with the help of an OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithm based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance of the UAV was better with obstacles of dull surfaces than with shiny surfaces. The minimum collision avoidance distance achievable was 0.4 m. The approach is suitable for short-range collision avoidance.
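The triangulation step can be illustrated with a short sketch of rectified-stereo depth recovery from the disparity of the tracked laser spot, followed by a simple stop-distance check. The focal length, baseline, and pixel values are illustrative assumptions; only the 0.4 m clearance echoes the scale reported above.

```python
# Sketch: depth from stereo disparity of a tracked laser spot, and a simple
# stop decision for short-range collision avoidance. Focal length, baseline
# and pixel coordinates are assumed values for illustration.
FOCAL_PX = 900.0        # focal length in pixels (assumed)
BASELINE_M = 0.12       # stereo baseline in metres (assumed)

def depth_from_disparity(u_left, u_right):
    """Standard rectified-stereo triangulation: Z = f * B / disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity; spot not matched correctly")
    return FOCAL_PX * BASELINE_M / disparity

def should_stop(depth_m, min_clearance_m=0.4):
    return depth_m < min_clearance_m

z = depth_from_disparity(u_left=412.0, u_right=268.0)
print(f"obstacle at {z:.2f} m, stop: {should_stop(z)}")
```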
A Vision-Based Motion Sensor for Undergraduate Laboratories.
ERIC Educational Resources Information Center
Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees
2002-01-01
Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
3-D rigid body tracking using vision and depth sensors.
Gedik, O. Serdar; Alatan, A. Aydın
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, and accurate pose estimates are needed to increase reliability and decrease jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages, while trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm based on the fusion of vision and depth sensors via an extended Kalman filter is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases 2-D as well as 3-D tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed both objectively, via error metrics, and subjectively, on the rendered scenes.
Chen, Qing; Xu, Pengfei; Liu, Wenzhong
2016-01-01
Computer vision, as a fast, low-cost, noncontact, and online monitoring technology, has been an important tool to inspect product quality, particularly on a large-scale assembly production line. However, the current industrial vision system is far from satisfactory in the intelligent perception of complex grain images, comprising a large number of local homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem based on the statistical modeling of spatial structures of grain images. We first present a physical explanation indicating that the spatial structures of complex grain images are subject to a representative Weibull distribution according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on a sparse multikernel–least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to automatically classify the production quality of rice. The experiments on the real application case, compared with the commonly used methods, illustrate the validity of our method. PMID:26986726
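The Weibull-distributed spatial structure claim can be illustrated with a short sketch that fits a Weibull model to the magnitudes of Gaussian-derivative filter responses. The single filter scale, the synthetic image, and the use of scipy's weibull_min are illustrative choices; the paper's multiscale, omnidirectional filter bank is richer.

```python
# Sketch: fit a Weibull distribution to the magnitudes of Gaussian-derivative
# filter responses of a grain image. Filter scale and synthetic image are
# illustrative stand-ins, not the paper's data or filter bank.
import numpy as np
from scipy import ndimage
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
image = rng.normal(0.5, 0.1, size=(256, 256))        # stand-in grain image

# Gaussian derivative responses along x and y at one scale.
gx = ndimage.gaussian_filter(image, sigma=2.0, order=(0, 1))
gy = ndimage.gaussian_filter(image, sigma=2.0, order=(1, 0))
magnitude = np.hypot(gx, gy).ravel()

# Fit a two-parameter Weibull (location fixed at zero) to the magnitudes.
shape, loc, scale = weibull_min.fit(magnitude, floc=0.0)
print(f"Weibull shape={shape:.3f}, scale={scale:.5f}")
```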
Phan-Quang, Gia Chuong; Lee, Hiang Kwee; Teng, Hao Wen; Koh, Charlynn Sher Lin; Yim, Barnabas Qinwei; Tan, Eddie Khay Ming; Tok, Wee Lee; Phang, In Yee; Ling, Xing Yi
2018-05-14
Molecular-level airborne sensing is critical for early prevention of disasters, diseases, and terrorism. Currently, most 2D surface-enhanced Raman spectroscopy (SERS) substrates used for air sensing have only one functional surface and exhibit poor SERS-active depth. "Aerosolized plasmonic colloidosomes" (APCs) are introduced as airborne plasmonic hotspots for direct in-air SERS measurements. APCs function as a macroscale 3D and omnidirectional plasmonic cloud that receives laser irradiation and emits signals in all directions. Importantly, it brings about an effective plasmonic hotspot in a length scale of approximately 2.3 cm, which affords 100-fold higher tolerance to laser misalignment along the z-axis compared with 2D SERS substrates. APCs exhibit an extraordinary omnidirectional property and demonstrate consistent SERS performance that is independent of the laser and analyte introductory pathway. Furthermore, the first in-air SERS detection is demonstrated in stand-off conditions at a distance of 200 cm, highlighting the applicability of 3D omnidirectional plasmonic clouds for remote airborne sensing in threatening or inaccessible areas.
Wide-angle vision for road views
NASA Astrophysics Data System (ADS)
Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.
2013-03-01
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
Information Weighted Consensus for Distributed Estimation in Vision Networks
ERIC Educational Resources Information Center
Kamal, Ahmed Tashrif
2013-01-01
Due to their high fault-tolerance, ease of installation and scalability to large networks, distributed algorithms have recently gained immense popularity in the sensor networks community, especially in computer vision. Multi-target tracking in a camera network is one of the fundamental problems in this domain. Distributed estimation algorithms…
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons, and the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, a novel combined range and vision sensor, and our recent results in controlling the robot in the real-time detection of objects using their color and in the processing of the robot's range and vision sensor data for navigation.
Real-time object tracking based on scale-invariant features employing bio-inspired hardware.
Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya
2016-09-01
We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video.
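For readers unfamiliar with the feature-tracking step, a software analogue can be sketched with OpenCV's SIFT implementation (available in OpenCV 4.4 and later) and a brute-force matcher with Lowe's ratio test. This only illustrates the algorithmic idea, not the paper's FPGA/resistive-network pipeline; the ratio threshold is a conventional value.

```python
# Sketch: software analogue of SIFT feature tracking between two frames,
# using OpenCV's SIFT and a brute-force matcher with Lowe's ratio test.
# Illustrative only; assumes OpenCV >= 4.4 and two grayscale frames.
import cv2

def track_sift(prev_gray, curr_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    # Return matched point pairs (previous frame -> current frame).
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]

# Usage (assuming two consecutive grayscale frames loaded elsewhere):
# pairs = track_sift(frame0, frame1)
```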
Conceptual Design Standards for eXternal Visibility System (XVS) Sensor and Display Resolution
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Wilz, Susan J.; Arthur, Jarvis J, III
2012-01-01
NASA is investigating eXternal Visibility Systems (XVS) concepts which are a combination of sensor and display technologies designed to achieve an equivalent level of safety and performance to that provided by forward-facing windows in today's subsonic aircraft. This report provides the background for conceptual XVS design standards for display and sensor resolution. XVS resolution requirements were derived from the basis of equivalent performance. Three measures were investigated: a) human vision performance; b) see-and-avoid performance and safety; and c) see-to-follow performance. From these three factors, a minimum but perhaps not sufficient resolution requirement of 60 pixels per degree was shown for human vision equivalence. However, see-and-avoid and see-to-follow performance requirements are nearly double. This report also reviewed historical XVS testing.
A BHR Composite Network-Based Visualization Method for Deformation Risk Level of Underground Space
Zheng, Wei; Zhang, Xiaoya; Lu, Qi
2015-01-01
This study proposes a visualization processing method for the deformation risk level of underground space. The proposed method is based on a BP-Hopfield-RGB (BHR) composite network. Complex environmental factors are integrated in the BP neural network. Dynamic monitoring data are then automatically classified in the Hopfield network. The deformation risk level is combined with the RGB color space model and is displayed visually in real time, after which experiments are conducted with the use of an ultrasonic omnidirectional sensor device for structural deformation monitoring. The proposed method is also compared with some typical methods using a benchmark dataset. Results show that the BHR composite network visualizes the deformation monitoring process in real time and can dynamically indicate dangerous zones. PMID:26011618
NASA Astrophysics Data System (ADS)
Baek, Jong-In; Kim, Ki-Han; Kim, Jae Chang; Yoon, Tae-Hoon
2010-01-01
This paper proposes a method of omni-directional viewing-angle switching by controlling the beam diverging angle (BDA) in a liquid crystal (LC) panel. The LCs aligned randomly by in-cell polymer structures diffuse the collimated backlight for the bright state of the wide viewing-angle mode. We align the LCs homogeneously by applying an in-plane field for the narrow viewing-angle mode. By doing this the scattering is significantly reduced so that the small BDA is maintained as it passes through the LC layer. The dark state can be obtained by aligning the LCs homeotropically with a vertical electric field. We demonstrated experimentally the omni-directional switching of the viewing-angle, without an additional panel or backlighting system.
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high resolution visible-near-infrared low light sensor and a moderate resolution uncooled thermal sensor provides an efficient way for multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.), and it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen constant progress in terms of readout noise, dark current, resolution, and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications; readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high sensitivity mode is also an important limiting factor, which leads to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of the image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range, and power consumption. In this paper, we present a SWAP intensified Wide Dynamic Range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell mode photodiode logarithmic pixel design which covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS format I2 tube with <500 mW power consumption.
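A short worked sketch may help explain why a solar-cell-mode (logarithmic) pixel covers such a wide dynamic range: the open-circuit photodiode voltage grows with the logarithm of photocurrent, so seven decades of light map onto a modest voltage swing. The diode parameters below are generic textbook values, not those of the sensor described above.

```python
# Sketch: logarithmic (solar-cell mode) pixel response and the voltage swing
# needed to cover a 140 dB dynamic range. Diode parameters are textbook
# assumptions (ideality n = 1, room temperature), not the actual sensor's.
import math

KT_Q = 0.02585          # thermal voltage at 300 K (V)
I_DARK = 1e-15          # diode saturation current (A), assumed

def pixel_voltage(photocurrent_a, n=1.0):
    """Open-circuit photodiode voltage: V = n * kT/q * ln(1 + Iph/I0)."""
    return n * KT_Q * math.log1p(photocurrent_a / I_DARK)

i_min, i_max = 1e-14, 1e-7          # 7 decades of photocurrent = 140 dB
swing = pixel_voltage(i_max) - pixel_voltage(i_min)
print(f"dynamic range: {20 * math.log10(i_max / i_min):.0f} dB")
print(f"output voltage swing: {swing * 1000:.0f} mV")
```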
Multiple-modality program for standoff detection of roadside hazards
NASA Astrophysics Data System (ADS)
Williams, Kathryn; Middleton, Seth; Close, Ryan; Luke, Robert H.; Suri, Rajiv
2016-05-01
The U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) is executing a program to assess the performance of a variety of sensor modalities for standoff detection of roadside explosive hazards. The program objective is to identify an optimal sensor or combination of fused sensors to incorporate with autonomous detection algorithms into a system of systems for use in future route clearance operations. This paper provides an overview of the program, including a description of the sensors under consideration, sensor test events, and ongoing data analysis.
Experimental study of transport of a dimer on a vertically oscillating plate
Wang, Jiao; Liu, Caishan; Ma, Daolin
2014-01-01
It has recently been shown that a dimer, composed of two identical spheres rigidly connected by a rod, under harmonic vertical vibration can exhibit a self-ordered transport behaviour. In this case, the mass centre of the dimer will perform a circular orbit in the horizontal plane, or a straight line if confined between parallel walls. In order to validate the numerical discoveries, we experimentally investigate the temporal evolution of the dimer's motion in both two- and three-dimensional situations. A stereoscopic vision method with a pair of high-speed cameras is adopted to perform omnidirectional measurements. All the cases studied in our experiments are also simulated using an existing numerical model. The combined investigations detail the dimer's dynamics and clearly show that its transport behaviours originate from a series of combinations of different contact states. This series is critical to our understanding of the transport properties in the dimer's motion and related self-ordered phenomena in granular systems. PMID:25383029
Khan, M Nisa
2015-07-20
Light-emitting diode (LED) technologies are undergoing very fast developments to enable household lamp products with improved energy efficiency and lighting properties at lower cost. Although many LED replacement lamps are claimed to provide similar or better lighting quality at lower electrical wattage compared with general-purpose incumbent lamps, certain lighting characteristics important to human vision are neglected in this comparison, which include glare-free illumination and omnidirectional or sufficiently broad light distribution with adequate homogeneity. In this paper, we comprehensively investigate the thermal and lighting performance and trade-offs for several commercial LED replacement lamps for the most popular Edison incandescent bulb. We present simulations and analyses for thermal and optical performance trade-offs for various LED lamps at the chip and module granularity levels. In addition, we present a novel, glare-free, and production-friendly LED lamp design optimized to produce very desirable light distribution properties as demonstrated by our simulation results, some of which are verified by experiments.
2000-06-01
As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information ... dominance requirements, Commanders and analysts will have an ever increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.
2003-01-01
A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.
Sensor Characteristics Reference Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cree, Johnathan V.; Dansu, A.; Fuhr, P.
The Buildings Technologies Office (BTO), within the U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy (EERE), is initiating a new program in Sensor and Controls. The vision of this program is: • Buildings operating automatically and continuously at peak energy efficiency over their lifetimes and interoperating effectively with the electric power grid. • Buildings that are self-configuring, self-commissioning, self-learning, self-diagnosing, self-healing, and self-transacting to enable continuous peak performance. • Lower overall building operating costs and higher asset valuation. The overarching goal is to capture 30% energy savings by enhanced management of energy consuming assets and systemsmore » through development of cost-effective sensors and controls. One step in achieving this vision is the publication of this Sensor Characteristics Reference Guide. The purpose of the guide is to inform building owners and operators of the current status, capabilities, and limitations of sensor technologies. It is hoped that this guide will aid in the design and procurement process and result in successful implementation of building sensor and control systems. DOE will also use this guide to identify research priorities, develop future specifications for potential market adoption, and provide market clarity through unbiased information« less
Alatise, Mary B; Hancke, Gerhard P
2017-09-21
Using a single sensor to determine the pose estimation of a device cannot give accurate results. This paper presents a fusion of an inertial sensor of six degrees of freedom (6-DoF), which comprises the 3-axis of an accelerometer and the 3-axis of a gyroscope, and a vision sensor to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm using speeded-up robust features (SURF) and the random sample consensus (RANSAC) algorithm were integrated and used to recognize a sample object in several images taken. Unlike conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data which contains outliers. With SURF and RANSAC, improved accuracy is certain; this is because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified by the ground truth data and root mean square errors (RMSEs).
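The fusion step can be sketched compactly as an extended Kalman filter that propagates the pose from inertial dead-reckoning and corrects it with vision position fixes. The planar state layout, unicycle motion model, noise levels, and linear measurement model below are illustrative assumptions rather than the authors' exact filter design.

```python
# Sketch: fusing IMU/odometry dead-reckoning with vision position fixes in an
# EKF. State is planar [x, y, heading]; noise levels and the linear
# measurement model are illustrative assumptions, not the paper's design.
import numpy as np

x = np.zeros(3)                         # [x, y, heading]
P = np.eye(3) * 0.1
Q = np.diag([0.02, 0.02, 0.01])         # process noise
R = np.diag([0.05, 0.05])               # vision measurement noise
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])

def predict(x, P, v, omega, dt):
    """Propagate the pose with speed v and yaw rate omega (unicycle model)."""
    theta = x[2]
    x_new = x + np.array([v * np.cos(theta) * dt, v * np.sin(theta) * dt, omega * dt])
    F = np.array([[1, 0, -v * np.sin(theta) * dt],
                  [0, 1,  v * np.cos(theta) * dt],
                  [0, 0, 1]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the pose with a vision position fix z = [x, y]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P

x, P = predict(x, P, v=0.5, omega=0.1, dt=0.1)
x, P = update(x, P, z=np.array([0.06, 0.01]))
print("pose estimate:", x)
```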
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
Stereo vision with distance and gradient recognition
NASA Astrophysics Data System (ADS)
Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu
2007-12-01
Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors that use infrared rays and ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space would give a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot is confronted with an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study develops an algorithm that recognizes the distance and gradient of the environment through a stereo matching process.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on ASIC (Application Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
NASA Astrophysics Data System (ADS)
Prachachet, R.; Samransuksamer, B.; Horprathum, M.; Eiamchai, P.; Limwichean, S.; Chananonnawathorn, C.; Lertvanithphol, T.; Muthitamongkol, P.; Boonruang, S.; Buranasiri, P.
2018-02-01
Omnidirectional anti-reflection nanostructured films, one of the promising alternatives for solar cell applications, have attracted enormous scientific and industrial research interest owing to their broadband response, effectiveness over a wide range of incident angles, and lithography-free, high-throughput fabrication. Recently, nanostructured SiO2 films have been the most comprehensively studied for anti-reflection with omnidirectional and broadband characteristics. In this work, three-dimensional silicon dioxide (SiO2) nanostructured thin films with different morphologies, including vertically aligned, slanted, and spiral columns as well as dense thin films, were fabricated by electron-beam evaporation with glancing angle deposition (GLAD) on glass slide and silicon wafer substrates. The morphology of the prepared samples was characterized by field-emission scanning electron microscope (FE-SEM) and high-resolution transmission electron microscope (HRTEM). The transmission, omnidirectional, and birefringence properties of the nanostructured SiO2 films were investigated by UV-Vis-NIR spectrophotometer and variable angle spectroscopic ellipsometer (VASE). The spectrophotometer measurement was performed at normal incident angle over a full spectral range of 200 - 2000 nm. The angle dependent transmission measurements were investigated by rotating the specimen, with the incidence angle defined relative to the surface normal of the prepared samples. This study demonstrates that the obtained SiO2 nanostructured film coated on a glass slide substrate exhibits a transmission as high as 93% at normal incidence. In addition, transmission in the visible wavelengths over a wide range of incident angles (-80° to 80°) was increased in comparison with the SiO2 thin film and the glass slide substrate, owing to the gradual transition in the refractive-index profile from air to the nanostructured layer, which improves the anti-reflection characteristics. The results clearly show the enhanced omnidirectional and broadband characteristics of the three-dimensional SiO2 nanostructured film coating.
NASA Astrophysics Data System (ADS)
Rao, Jionghui; Yao, Wenming; Wen, Linqiang
2015-10-01
Underwater wireless optical communication is a communication technology that uses a laser as the information carrier and transmits data through water. It offers broad bandwidth, high transmission rates, good security, and strong anti-interference performance, making it promising for both civil and military communications, and it is well suited to high-speed, short-range links between underwater mobile vehicles. This paper presents a design approach for an omni-directional light source for underwater wireless optical communication. The TRACEPRO simulation tool was used to design a combined solid composed of a lens, a conical reflector, and a parabolic reflector. A modulated DPSS green laser in the transmitter module outputs a beam with a small divergence angle; after expansion by the combined refraction-reflection solid, the beam covers a solid angle of 2π, realizing an omni-directional light source over a hemisphere. Tests in air and underwater show that the scheme works well. Based on analysis of these tests, and in order to further improve the uniformity of the light distribution, the reflector surface parameters of the combined refraction-reflection solid were optimized and the device was retested in air and water. The results show that the optimized omni-directional light source achieves a uniform light distribution over the 2π underwater divergence angle. The omni-directional light source designed in this paper is compact and produces a uniform light distribution, making it suitable for links among UUVs, AUVs, Swimmer Delivery Vehicles (SDVs), and other underwater vehicle fleets, and it enables point-to-multipoint communication.
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies intended to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen at many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators may simply be drawn to particular network nodes while a more dangerous threat goes unnoticed at the same time at other nodes. Sensor networks would be more effective if equipped with a system similar to human vision in its ability to understand visual information. Human vision relies on a coarse but wide peripheral system that tracks motion and regions of interest, a narrow but precise foveal system that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object context and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems is achieved in the network-symbolic system via interaction between the Visual and Object Buffers and the top-level knowledge system.
Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor
Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong
2011-01-01
In this paper, we propose a method for simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered here is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization is a promising tool for identifying the model parameters of an HMLVS, while the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a very stable solution, and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
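The abstract contrasts a nonlinear least-squares optimizer with particle swarm optimization (PSO) on a multimodal identification landscape. As a rough illustration of why PSO is less sensitive to the initial guess, the following minimal Python sketch implements a basic global-best PSO; the `residual` function (a multimodal toy cost standing in for the sensor calibration model), the bounds, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def residual(p):
    # Hypothetical multimodal cost standing in for the sum of squared
    # range-measurement errors of a laser-vision sensor model.
    return np.sum(p**2) + 3.0 * np.sum(1.0 - np.cos(2.0 * np.pi * p))

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                                    # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()                 # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, pbest_cost.min()

best, best_cost = pso(residual, (np.full(6, -5.0), np.full(6, 5.0)))
print(best, best_cost)  # converges near the global minimum at the origin
```

Because many particles explore the bounded search space in parallel, the swarm is far less likely than a single gradient-style descent to stall in one of the many local minima of such a cost.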
NASA Astrophysics Data System (ADS)
Crawford, Bobby Grant
In an effort to field smaller and cheaper Uninhabited Aerial Vehicles (UAVs), the Army has expressed an interest in an ability of the vehicle to autonomously detect and avoid obstacles. Current systems are not suitable for small aircraft. NASA Langley Research Center has developed a vision sensing system that uses small semiconductor cameras. The feasibility of using this sensor for the purpose of autonomous obstacle avoidance by a UAV is the focus of the research presented in this document. The vision sensor characteristics are modeled and incorporated into guidance and control algorithms designed to generate flight commands based on obstacle information received from the sensor. The system is evaluated by simulating the response to these flight commands using a six degree-of-freedom, non-linear simulation of a small, fixed wing UAV. The simulation is written using the MATLAB application and runs on a PC. Simulations were conducted to test the longitudinal and lateral capabilities of the flight control for a range of airspeeds, camera characteristics, and wind speeds. Results indicate that the control system is suitable for obstacle avoiding flight control using the simulated vision system. In addition, a method for designing and evaluating the performance of such a system has been developed that allows the user to easily change component characteristics and evaluate new systems through simulation.
Airborne sensors for detecting large marine debris at sea.
Veenstra, Timothy S; Churnside, James H
2012-01-01
The human eye is an excellent, general-purpose airborne sensor for detecting marine debris larger than 10 cm on or near the surface of the water. Coupled with the human brain, it can adjust for light conditions and sea-surface roughness, track persistence, differentiate color and texture, detect change in movement, and combine all of the available information to detect and identify marine debris. Matching this performance with computers and sensors is difficult at best. However, there are distinct advantages over the human eye and brain that sensors and computers can offer such as the ability to use finer spectral resolution, to work outside the spectral range of human vision, to control the illumination, to process the information in ways unavailable to the human vision system, to provide a more objective and reproducible result, to operate from unmanned aircraft, and to provide a permanent record that can be used for later analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches, and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may realistically be expected from the next generation of intelligent machines.
Binocular adaptive optics visual simulator.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2009-09-01
A binocular adaptive optics visual simulator is presented. The instrument allows measuring and manipulating the ocular aberrations of the two eyes simultaneously while the subject performs visual testing under binocular vision. An important feature of the apparatus is the use of a single correcting device and wavefront sensor. Aberrations are controlled by means of a liquid-crystal-on-silicon spatial light modulator, onto which the two pupils of the subject are projected. Aberrations from the two eyes are measured with a single Hartmann-Shack sensor. As an example of the potential of the apparatus for studying the impact of the eye's aberrations on binocular vision, contrast sensitivity results after the addition of spherical aberration are presented for one subject. Different binocular combinations of spherical aberration were explored. The results suggest complex binocular interactions in the presence of monochromatic aberrations. The technique and the instrument might contribute to a better understanding of binocular vision and to the search for optimized ophthalmic corrections.
Technology for robotic surface inspection in space
NASA Technical Reports Server (NTRS)
Volpe, Richard; Balaram, J.
1994-01-01
This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-03-01
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also achieves surprisingly high performance.
JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.
Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun
2017-03-01
Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.
Omni-directional Particle Detector (ODPD) on Tiangong-2 Spacecraft
NASA Astrophysics Data System (ADS)
Guohong, S.; Zhang, S.; Yang, X.; Wang, C.
2017-12-01
The Tiangong-2 spacecraft is the second space laboratory independently developed by China after Tiangong-1; it was launched on 15 September 2016. It is also the first true space laboratory in China and will be used to further validate space rendezvous and docking technology and to carry out a series of space tests. The spacecraft's orbit is 350 km in altitude with a 42° inclination. The omni-directional particle detector (ODPD) on the Tiangong-2 spacecraft is a new instrument developed by China. Its goal is to measure the anisotropy and energy spectra of space particles in a manned space flight orbit. The ODPD measures the energy spectra and pitch angle distributions of high-energy electrons and protons. It consists of one electron spectrum telescope, one proton spectrum telescope, and sixteen directional flux telescopes. The ODPD is designed to measure the proton spectrum from 2.5 MeV to 150 MeV, the electron spectrum from 0.2 MeV to 1.5 MeV, and the flux of electrons with energy >200 keV and protons with energy >1.5 MeV over 2π of space; the ODPD also has a small sensor to measure the LET spectrum from 1-100 MeV/cm2sr. Its primary advantage is that it can give the particle pitch angle distributions at any time, because the sixteen flux telescopes are arranged from 0 to 180 degrees. This is the first paper dealing with ODPD data, so we first describe the instrument, its theory of operation, and its calibration, and then present the preliminary detection results.
Kim, In-Ho; Jeon, Haemin; Baek, Seung-Chan; Hong, Won-Hwa; Jung, Hyung-Jo
2018-06-08
Bridge inspection using unmanned aerial vehicles (UAV) with high-performance vision sensors has received considerable attention due to its safety and reliability. As bridges become obsolete, the number of bridges that need to be inspected increases, along with their maintenance costs. Therefore, a bridge inspection method based on a UAV with vision sensors is proposed as one of the promising strategies for maintaining bridges. In this paper, a crack identification method using a commercial UAV with a high-resolution vision sensor is investigated on an aging concrete bridge. First, a point cloud-based background model is generated in a preliminary flight. Then, cracks on the structural surface are detected with a deep learning algorithm, and their thickness and length are calculated. In the deep learning method, regions with convolutional neural networks (R-CNN)-based transfer learning is applied. As a result, a new network is generated from the pre-trained network for the 384 collected crack images of 256 × 256 pixel resolution. A field test was conducted to verify the proposed approach, and the experimental results proved that UAV-based bridge inspection is effective at identifying and quantifying cracks on structures.
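The key step described here is retraining a pre-trained network on a small set of crack images. The sketch below shows the generic transfer-learning pattern for the patch classification side (not the full R-CNN detection pipeline, and not the authors' exact network): swap the classifier head of an ImageNet-pretrained backbone for a two-class crack/non-crack output and fine-tune only that head. The backbone choice, hyperparameters, and the hypothetical `crack_loader` are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; only the new 2-class head (crack / non-crack) is trained.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_epoch(crack_loader):
    # crack_loader: hypothetical DataLoader yielding (image, label) batches of
    # 256x256 crack / non-crack patches resized and normalized for the backbone.
    model.train()
    for images, labels in crack_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the new head is what makes a few hundred labeled patches sufficient, since the pre-trained features are reused rather than relearned.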
An Energy Efficient Power Control Protocol for Ad Hoc Networks Using Directional Antennas
NASA Astrophysics Data System (ADS)
Quiroz-Perez, Carlos; Gulliver, T. Aaron
A wireless ad hoc network is a collection of mobile nodes that can communicate with each other. Typically, nodes employ omnidirectional antennas. The use of directional antennas can increase spatial reuse, reduce the number of hops to a destination, reduce interference, and increase the transmission range in a specific direction. This is because omnidirectional antennas radiate equally in all directions, limiting the transmission range.
Airborne antenna polarization study for the microwave landing system
NASA Technical Reports Server (NTRS)
Gilreath, M. C.
1976-01-01
The feasibility of meeting the microwave landing system (MLS) airborne antenna pattern coverage requirements with a single omnidirectional antenna is investigated for a large commercial aircraft. Omnidirectional antennas having vertical and horizontal polarizations were evaluated at several different station locations on a one-eleventh scale model Boeing 737 aircraft. The results obtained during this experimental program are presented, including principal-plane antenna patterns and complete volumetric coverage plots.
The design and fabrication of microstrip omnidirectional array antennas for aerospace applications
NASA Technical Reports Server (NTRS)
Campbell, T. G.; Appleton, M. W.; Lusby, T. K.
1976-01-01
A microstrip antenna design concept was developed that will provide quasi-omnidirectional radiation pattern characteristics about cylindrical and conical aerospace structures. L-band and S-band antenna arrays were designed, fabricated, and, in some cases, flight tested for rocket, satellite, and aircraft drone applications. Each type of array design is discussed along with a thermal cover design that was required for the sounding rocket applications.
NASA Astrophysics Data System (ADS)
Shin, Keun-Young; Kim, Minkyu; Lee, James S.; Jang, Jyongsik
2015-09-01
Highly omnidirectional and frequency-controllable carbon/polyaniline (C/PANI)-based two-dimensional (2D) and three-dimensional (3D) monopole antennas were fabricated using screen-printing and a one-step, dimensionally confined hydrothermal strategy, respectively. Solvated C/PANI was synthesized by low-temperature interfacial polymerization, during which strong π-π interactions between graphene and the quinoid rings of PANI resulted in an expanded PANI conformation with enhanced crystallinity and improved mechanical and electrical properties. Compared to antennas composed of pristine carbon or PANI-based 2D monopole structures, 2D monopole antennas composed of this enhanced hybrid material were highly efficient and well suited to high-frequency, omnidirectional electromagnetic waves. The mean frequency of C/PANI fiber-based 3D monopole antennas could be controlled by simply cutting and stretching the antenna. These antennas attained high peak gain (3.60 dBi), high directivity (3.91 dBi), and radiation efficiency (92.12%) relative to the 2D monopole antennas. These improvements were attributed to the high packing density and aspect ratios of the C/PANI fibers and the removal of the flexible substrate. This approach offers a valuable and promising tool for producing highly omnidirectional and frequency-controllable, carbon-based monopole antennas for use in wireless networking communications on industrial, scientific, and medical (ISM) bands.
Development of an Omnidirectional-Capable Electromagnetic Shock Wave Generator for Lipolysis
Chang, Ming Hau; Lin, San Yih
2017-01-01
Traditional methods for adipose tissue removal have progressed from invasive methods such as liposuction to more modern methods of noninvasive lipolysis. This research entails the development and evaluation of an omnidirectional-capable flat-coil electromagnetic shock wave generator (EMSWG) for lipolysis. The developed EMSWG has the advantage of omnidirectional-capable operation. This capability increases the eventual clinical usability by adding three designed supports to the aluminum disk of the EMSWG to allow omnidirectional operation. The focal pressures of the developed EMSWG for different operating voltages were measured, and its corresponding energy intensities were calculated. The developed EMSWG was mounted in a downward orientation for lipolysis and evaluated as proof of concept. In vitro tests on porcine fatty tissues have been carried out. It is found that at a 6 kV operating voltage with 1500 shock wave exposures, a 2 cm thick subcutaneous hypodermis of porcine fatty tissue can be ruptured, resulting in a damaged area of 1.39 mm2. At a 6.5 kV operating voltage with 2000 shock wave exposures, the damaged area is increased to about 5.20 mm2, which can be enlarged by changing the focal point location, resulting in significant lipolysis for use in clinical applications. PMID:29065664
NASA Astrophysics Data System (ADS)
Agrawal, Navik; Davis, Christopher C.
2008-08-01
Omnidirectional free space optical communication receivers can employ multiple non-imaging collectors, such as compound parabolic concentrators (CPCs), in an array-like fashion to increase the amount of light that can be collected. CPCs can effectively channel light collected over a large aperture to a small-area photodiode. However, the length such devices require for a given aperture can increase the overall size of the transceiver unit, which may limit the practicality of such systems, especially when small size is desired. New non-imaging collector designs with smaller sizes, larger fields of view (FOV), and transmission curves comparable to CPCs offer alternative transceiver designs. This paper examines how transceiver performance is affected by the use of different non-imaging collector shapes that are designed for a wide FOV with reduced efficiency, compared with shapes such as the CPC that are designed for a small FOV with optimal efficiency. Theoretical results provide evidence that array-like transceiver designs using various non-imaging collector shapes with less efficient transmission curves but a larger FOV will be an effective means of designing omnidirectional optical transceiver units. The results also incorporate the effects of Fresnel loss at the collector exit aperture-photodiode interface, which is an important consideration for indoor omnidirectional FSO systems.
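The size-versus-FOV trade-off discussed here follows from the standard ideal-CPC relations found in nonimaging-optics texts (e.g., Welford and Winston): for an acceptance half-angle θ and exit-aperture radius a', the entrance radius is a = a'/sin θ, the length is (a + a')/tan θ, and the 3D concentration is 1/sin²θ. The small sketch below evaluates these textbook relations for illustrative values only, not the paper's designs.

```python
import math

def cpc_geometry(exit_radius_mm, half_angle_deg):
    """Ideal 3D CPC: entrance radius, length, and concentration for a
    given exit aperture (photodiode radius) and acceptance half-angle."""
    theta = math.radians(half_angle_deg)
    entrance = exit_radius_mm / math.sin(theta)              # a = a' / sin(theta)
    length = (entrance + exit_radius_mm) / math.tan(theta)   # L = (a + a') / tan(theta)
    concentration = (entrance / exit_radius_mm) ** 2         # C = 1 / sin^2(theta)
    return entrance, length, concentration

# A narrow-FOV CPC is long; widening the FOV shrinks the device but lowers gain.
for fov in (10.0, 30.0, 60.0):
    a, L, C = cpc_geometry(exit_radius_mm=1.5, half_angle_deg=fov)
    print(f"half-angle {fov:4.1f} deg: entrance {a:5.2f} mm, length {L:6.2f} mm, C {C:5.1f}")
```

Running the loop shows why wide-FOV collectors with lower optical gain can still be attractive: at a 10° half-angle the concentrator is roughly an order of magnitude longer than at 60°, which dominates the transceiver's overall size.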
Human movement activity classification approaches that use wearable sensors and mobile devices
NASA Astrophysics Data System (ADS)
Kaghyan, Sahak; Sarukhanyan, Hakob; Akopian, David
2013-03-01
Cell phones and other mobile devices become part of human culture and change activity and lifestyle patterns. Mobile phone technology continuously evolves and incorporates more and more sensors for enabling advanced applications. Latest generations of smart phones incorporate GPS and WLAN location finding modules, vision cameras, microphones, accelerometers, temperature sensors etc. The availability of these sensors in mass-market communication devices creates exciting new opportunities for data mining applications. Particularly healthcare applications exploiting build-in sensors are very promising. This paper reviews different approaches of human activity recognition.
Biological basis for space-variant sensor design I: parameters of monkey and human spatial vision
NASA Astrophysics Data System (ADS)
Rojer, Alan S.; Schwartz, Eric L.
1991-02-01
Biological sensor design has long provided inspiration for sensor design in machine vision. However, relatively little attention has been paid to the actual design parameters provided by biological systems, as opposed to the general nature of biological vision architectures. In the present paper we provide a review of current knowledge of primate spatial vision design parameters and present recent experimental and modeling work from our lab, which demonstrates that a numerical conformal mapping, a refinement of our previous complex logarithmic model, provides the best current summary of this feature of the primate visual system. In this paper we review recent work from our laboratory that has characterized some of the spatial architectures of the primate visual system. In particular, we review experimental and modeling studies which indicate that: (1) the global spatial architecture of primate visual cortex is well summarized by a numerical conformal mapping whose simplest analytic approximation is the complex logarithm function; (2) the columnar sub-structure of primate visual cortex can be well summarized by a model based on band-pass filtered white noise. We also refer to ongoing work in our lab which demonstrates that the joint columnar/map structure of primate visual cortex can be modeled and summarized in terms of a new algorithm, the 'proto-column' algorithm. This work provides a reference point for current engineering approaches to novel architectures for ...
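Since the central claim is that the retino-cortical map is well approximated by a conformal mapping whose simplest analytic form is the complex logarithm, a tiny numerical sketch may help: retinal positions z (complex numbers in degrees of visual angle) are mapped through one commonly used analytic form, w = log(z + a). The foveal parameter value below is illustrative only, not a measured constant from this paper.

```python
import numpy as np

def retino_cortical_map(z, a=0.7):
    """Simple analytic approximation of the primate retino-cortical map:
    w = log(z + a).
    z: complex retinal position(s), degrees of visual angle (one hemifield);
    a: foveal smoothing parameter (illustrative value).
    Returns complex 'cortical' coordinates in arbitrary units."""
    return np.log(z + a)

# Doublings of eccentricity map to roughly equal cortical distances away from
# the fovea (logarithmic magnification), while the fovea is greatly expanded.
ecc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
w = retino_cortical_map(ecc + 0j)
print(np.diff(w.real))  # per-octave spacing approaches a constant (log 2)
```

The same mapping, applied to a full grid of retinal positions, is the basis of log-polar "space-variant" sensor layouts in machine vision.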
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have made higher-quality cameras available. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient to use. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor. The results show that noise removal is improved compared with standard bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method reduces the depth-image processing time and improves the quality of the resulting point cloud.
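As background for the modification proposed here, the following is a minimal sketch of the conventional bilateral filter applied to a depth image (plain NumPy, brute force over a small window). The window radius and the two Gaussian scales are illustrative, and this is the baseline filter the paper improves upon, not the local variant itself.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=3, sigma_s=2.0, sigma_r=30.0):
    """Classic bilateral filter: each output pixel is a weighted mean of its
    neighbours, weighted by spatial distance and by depth similarity."""
    h, w = depth.shape
    pad = np.pad(depth.astype(np.float64), radius, mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))   # fixed spatial kernel
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(window - pad[i + radius, j + radius])**2
                             / (2.0 * sigma_r**2))            # depth-similarity kernel
            weights = spatial * range_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

noisy = 1000.0 + 5.0 * np.random.randn(64, 64)   # synthetic flat depth map (mm)
smoothed = bilateral_filter_depth(noisy)
```

The double per-pixel loop over a full window is exactly the cost the abstract calls time-consuming, which motivates restricting or localizing the computation.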
Internal high-reflectivity omni-directional reflectors
NASA Astrophysics Data System (ADS)
Xi, J.-Q.; Ojha, Manas; Plawsky, J. L.; Gill, W. N.; Kim, Jong Kyu; Schubert, E. F.
2005-07-01
An internal high-reflectivity omni-directional reflector (ODR) for the visible spectrum is realized by the combination of total internal reflection using a low-refractive-index (low-n) material and reflection from a one-dimensional photonic crystal (1D PC). The low-n layer limits the range of angles in the 1D PC to values below the Brewster angle, thereby enabling high reflectivity and omni-directionality. This ODR is demonstrated using GaP as ambient, nanoporous SiO2 with a very low refractive index (n=1.10), and a four-pair TiO2/SiO2 multilayer stack. The results indicate a two orders of magnitude lower angle-integrated transverse-electric-transverse-magnetic polarization averaged mirror loss of the ODR compared with conventional distributed Bragg reflectors and metal reflectors. This indicates the high potential of the internal ODRs for optoelectronic semiconductor devices, e.g., light-emitting diodes.
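The mechanism described, total internal reflection at the low-n layer keeping rays inside the 1D photonic crystal below the Brewster angle, can be illustrated with a few Snell's-law numbers. The sketch below uses assumed round-figure refractive indices (GaP ≈ 3.3, nanoporous SiO2 = 1.10, dense SiO2 ≈ 1.46, TiO2 ≈ 2.5 in the visible); these are illustrative values, not the paper's measured data.

```python
import math

n_gap, n_low, n_sio2, n_tio2 = 3.3, 1.10, 1.46, 2.5  # assumed indices

# Rays in GaP steeper than the critical angle are totally internally reflected
# at the low-n layer and never reach the TiO2/SiO2 stack.
theta_c = math.degrees(math.asin(n_low / n_gap))

# Transmitted rays carry a transverse index no larger than n_low, so inside the
# SiO2 layers their propagation angle is bounded by asin(n_low / n_sio2).
theta_max_sio2 = math.degrees(math.asin(n_low / n_sio2))

# Brewster angle of the SiO2 -> TiO2 interface, where TM reflectivity collapses.
theta_brewster = math.degrees(math.atan(n_tio2 / n_sio2))

print(f"critical angle at GaP/low-n interface : {theta_c:5.1f} deg")
print(f"max propagation angle inside SiO2     : {theta_max_sio2:5.1f} deg")
print(f"Brewster angle of SiO2/TiO2 interface : {theta_brewster:5.1f} deg")
# Because theta_max_sio2 < theta_brewster, the multilayer never sees
# Brewster-angle incidence, which preserves high TM reflectivity at all angles.
```

With these numbers the maximum internal angle in SiO2 (about 49°) stays well below the SiO2/TiO2 Brewster angle (about 60°), which is the geometric reason the combined structure remains highly reflective omnidirectionally.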
Hierarchical Graphene Foam for Efficient Omnidirectional Solar-Thermal Energy Conversion.
Ren, Huaying; Tang, Miao; Guan, Baolu; Wang, Kexin; Yang, Jiawei; Wang, Feifan; Wang, Mingzhan; Shan, Jingyuan; Chen, Zhaolong; Wei, Di; Peng, Hailin; Liu, Zhongfan
2017-10-01
Efficient solar-thermal energy conversion is essential for the harvesting and transformation of abundant solar energy, leading to the exploration and design of efficient solar-thermal materials. Carbon-based materials, especially graphene, have the advantages of broadband absorption and excellent photothermal properties, and hold promise for solar-thermal energy conversion. However, to date, graphene-based solar-thermal materials with superior omnidirectional light harvesting performances remain elusive. Herein, hierarchical graphene foam (h-G foam) with continuous porosity grown via plasma-enhanced chemical vapor deposition is reported, showing dramatic enhancement of broadband and omnidirectional absorption of sunlight, which thereby can enable a considerable elevation of temperature. Used as a heating material, the external solar-thermal energy conversion efficiency of the h-G foam impressively reaches up to ≈93.4%, and the solar-vapor conversion efficiency exceeds 90% for seawater desalination with high endurance. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye includes specific diffractive-optical elements (DOEs) in aperture space and in image space, and appears to execute these three jobs at, or not far behind, the loci of the images of objects.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Towards an Autonomous Vision-Based Unmanned Aerial System against Wildlife Poachers
Olivares-Mendez, Miguel A.; Fu, Changhong; Ludivig, Philippe; Bissyandé, Tegawendé F.; Kannan, Somasundar; Zurad, Maciej; Annaiyan, Arun; Voos, Holger; Campoy, Pascual
2015-01-01
Poaching is an illegal activity that remains out of control in many countries. Based on the 2014 report of the United Nations and Interpol, the illegal trade of global wildlife and natural resources amounts to nearly $213 billion every year, which is even helping to fund armed conflicts. Poaching activities around the world are further pushing many animal species to the brink of extinction. Unfortunately, the traditional methods to fight against poachers are not enough, hence the new demands for more efficient approaches. In this context, the use of new technologies in sensors and algorithms, as well as aerial platforms, is crucial to face the sharp increase in poaching activities over the last few years. Our work is focused on the use of vision sensors on UAVs for the detection and tracking of animals and poachers, as well as the use of such sensors to control quadrotors during autonomous vehicle following and autonomous landing. PMID:26703597
Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor.
Lenero-Bardallo, Juan Antonio; Bryn, D H; Hafliger, Philipp
2014-06-01
This article investigates the potential of the first ever prototype of a vision sensor that combines tricolor stacked photodiodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photodiodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. The dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a Time-to-First-Spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo colors serves to approximate an RGB color representation. Furthermore, the sensor outputs can be processed to represent the radiation in the near-infrared (NIR) band without employing external filters, and to color-encode the direction of motion due to an asymmetry in the update rates of the different diode layers.
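The "heuristic linear combination of the chip's inherent pseudo colors" amounts to multiplying the three stacked-diode responses by a fixed 3 × 3 matrix. A toy sketch follows; the matrix values are entirely illustrative (real coefficients would come from calibrating the chip's spectral responses against known colors), and the per-pixel event rates are assumed to have already been normalized to intensities.

```python
import numpy as np

# Hypothetical mixing matrix mapping the three stacked-photodiode responses
# (top, middle, bottom junction) to approximate R, G, B. Illustrative only:
# real coefficients would be fit against a colour chart during calibration.
M = np.array([
    [-0.3,  0.1,  1.4],   # R
    [ 0.2,  1.3, -0.5],   # G
    [ 1.5, -0.4, -0.1],   # B
])

def pseudo_to_rgb(stacked):
    """stacked: (..., 3) array of normalized responses of the three diode layers.
    Returns (..., 3) RGB estimates clipped to [0, 1]."""
    rgb = stacked @ M.T
    return np.clip(rgb, 0.0, 1.0)

frame = np.random.rand(22, 22, 3)     # toy 22x22 'pseudo colour' frame
rgb = pseudo_to_rgb(frame)
```

The same kind of fixed linear recombination, with different weights, is what allows the NIR content to be estimated from the layer responses without external filters.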
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
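The second registration method, a spatial transformation fit by regression to user-selected control points, can be sketched as an ordinary least-squares fit of an affine transform. The example below (NumPy, with made-up control points) is a generic illustration of that step, not the EVS implementation itself.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src control points onto dst.
    src_pts, dst_pts: (N, 2) arrays of matching pixel coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T."""
    n = src_pts.shape[0]
    X = np.hstack([src_pts, np.ones((n, 1))])          # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)    # (3, 2) coefficients
    return A.T                                         # (2, 3)

def warp_points(pts, A):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ A.T

# Made-up control points: coordinates of features in the LWIR image and where
# the same features fall in the visible-band reference image.
src = np.array([[10.0, 12.0], [240.0, 15.0], [20.0, 180.0], [230.0, 190.0]])
dst = np.array([[33.0, 40.0], [492.0, 47.0], [52.0, 376.0], [473.0, 395.0]])
A = fit_affine(src, dst)
print(np.abs(warp_points(src, A) - dst).max())  # residual of the fit, in pixels
```

An affine model captures the scale, rotation, and offset differences between sensors with differing FOVs and resolutions; higher-order polynomial terms could be added to the design matrix in the same least-squares framework if needed.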
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
High-accuracy microassembly by intelligent vision systems and smart sensor integration
NASA Astrophysics Data System (ADS)
Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael
2003-10-01
Innovative production processes and strategies, from batch production to high-volume scale, play a decisive role in producing microsystems economically. In particular, assembly processes are crucial operations during the production of microsystems. For large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes lies a wide field of small and medium-sized batch production for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators like grippers, dispensers, or other process tools can easily be attached thanks to a special tool changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators such as high-accuracy robots or linear motors. A fiber-optic sensor is integrated in the dispensing module to measure the clearance between the dispensing needle and the substrate without contact. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes like soldering.
2009-03-01
infrared, thermal, or night vision applications. Understanding the true capabilities and limitations of the ALAN camera and its applicability to a ... an option to more expensive infrared, thermal, or night vision applications. Ultimately, it will be clear whether the configuration of the Kestrel ...
NASA Astrophysics Data System (ADS)
McKinley, John B.; Pierson, Roger; Ertem, M. C.; Krone, Norris J., Jr.; Cramer, James A.
2008-04-01
Flight tests were conducted at Greenbrier Valley Airport (KLWB) and Easton Municipal Airport / Newnam Field (KESN) in a Cessna 402B aircraft using a head-up display (HUD) and a Norris Electro Optical Systems Corporation (NEOC) developmental ultraviolet (UV) sensor. These flights were sponsored by NEOC under a Federal Aviation Administration program, and the ultraviolet concepts, technology, system mechanization, and hardware for landing during low-visibility conditions have been patented by NEOC. Imagery from the UV sensor, HUD guidance cues, and out-the-window videos were separately recorded at the engineering workstation for each approach. Inertial flight path data were also recorded. Various configurations of portable UV emitters were positioned along the runway edge and threshold. The UV imagery of the runway outline was displayed on the HUD along with guidance generated from the mission computer. Enhanced Flight Vision System (EFVS) approaches with the UV sensor were conducted from the initial approach fix to the ILS decision height in both VMC and IMC. Although the availability of low-visibility conditions during the flight test period was limited, results from previous fog-range testing indicated that UV EFVS has the performance capability to penetrate CAT II runway visual range obscuration. Furthermore, independent analysis has shown that existing runway lights emit sufficient UV radiation without the need for augmentation other than lens replacement with UV-transmissive quartz lenses. Consequently, UV sensors should qualify as conforming to FAA requirements for EFVS approaches. Combined with a Synthetic Vision System (SVS), UV EFVS would function both as a precision landing aid and as an integrity monitor for the GPS and SVS database.
UT Austin Villa 2011: 3D Simulation Team Report
2011-01-01
inverted pendulum model omnidirectional walk engine based on one that was originally designed for the real Nao robot [7]. The omnidirectional walk is ... using a double linear inverted pendulum, where the center of mass is swinging over the stance foot. In addition, as in Graf et al.'s work [7], we use ... between the inverted pendulums formed by the respective stance feet. Notation: maxStep∗i. Description: maximum step sizes allowed for x, y, and θ.
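For readers unfamiliar with the model named in this excerpt, the linear inverted pendulum reduces the walking robot to a point mass at a constant height swinging over the stance foot, and its center-of-mass motion has a simple closed form. The sketch below is a single-pendulum illustration under assumed values (the CoM height and initial state are illustrative, not the team's parameters, and the team's engine couples two such pendulums).

```python
import numpy as np

def lipm_com(x0, v0, z_com=0.26, g=9.81, t=np.linspace(0.0, 0.3, 7)):
    """Closed-form CoM trajectory of the linear inverted pendulum about the
    stance foot (origin): x(t) = x0*cosh(t/Tc) + Tc*v0*sinh(t/Tc)."""
    Tc = np.sqrt(z_com / g)                       # pendulum time constant
    x = x0 * np.cosh(t / Tc) + Tc * v0 * np.sinh(t / Tc)
    v = (x0 / Tc) * np.sinh(t / Tc) + v0 * np.cosh(t / Tc)
    return x, v

# CoM starting 2 cm behind the stance foot, moving forward at 0.25 m/s.
x, v = lipm_com(x0=-0.02, v0=0.25)
print(x)  # the CoM accelerates away from the pivot as it passes over the foot
```

Choosing step sizes (the maxStep limits in the excerpt's table fragment) amounts to bounding how far the next stance foot may be placed so that the pendulum state at handover stays within what the following step can absorb.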
VHF Omnidirectional Radio Range (VOR) Electromagnetic Spectrum Measurements.
1978-10-18
MAINTENANCE AND INSPECTION OF VOR, DVOR FACILITIES. VHF OMNI-DIRECTIONAL RADIO RANGE (VOR) ELECTROMAGNETIC SPECTRUM ... (figure: cardioid-shaped field pattern developed by the rotating sideband pattern, shown at the North 0° position, with reference and variable components) ... to their respective antenna pairs (which are 180° out of phase with each other). This combination creates a two-lobe field pattern rotating at 30 rps.
NASA Astrophysics Data System (ADS)
Awasthi, Suneet Kumar; Panda, Ranjita; Chauhan, Prashant Kumar; Shiveshwari, Laxmi
2018-05-01
Using the transfer matrix method, theoretical investigations have been carried out in the microwave region to study the reflection properties of multichannel tunable omnidirectional photonic bandgaps (OPBGs) based on the magneto-optic Faraday effect. The proposed one-dimensional ternary plasma photonic crystal consists of alternate layers of quartz, magnetized cold plasma (MCP), and air. In the absence of an external magnetic field, the proposed structure possesses two OPBGs induced by Bragg scattering, which are strongly dependent on the incident angle, the polarization of the incident light, and the lattice constant, unlike the single-negative gap and the zero-n̄ gap. Next, the reflection properties of the OPBGs are made tunable by the application of an external magnetic field under right-hand and left-hand polarization configurations. The results of this manuscript may be utilized for the development of a new kind of tunable omnidirectional band-stop filter with the ability to completely stop one or several bands (called channels) of microwave frequencies in the presence of an external static magnetic field under left-hand and right-hand polarization configurations, respectively. Moreover, the outcomes of this study open a promising way to design tunable magneto-optical devices, omnidirectional total reflectors, and planar waveguides and high-Q microcavities that exploit the evanescent fields in the MCP layer to confine the propagation of light.
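The transfer matrix method named here can be illustrated in its simplest form: normal incidence through a lossless periodic stack, omitting the magnetized-plasma dispersion and the oblique-incidence polarization handling of the paper. The layer indices, thicknesses, and frequency grid below are illustrative assumptions for a quartz/air stack.

```python
import numpy as np

def reflectance(n_layers, d_layers, freq_hz, n_in=1.0, n_out=1.0):
    """Normal-incidence reflectance of a lossless multilayer computed from the
    characteristic (transfer) matrix of each layer."""
    c = 3.0e8
    R = np.empty_like(freq_hz)
    for k, f in enumerate(freq_hz):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2.0 * np.pi * f * n * d / c            # phase thickness
            Ml = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                           [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ Ml
        B, C = M @ np.array([1.0, n_out])
        r = (n_in * B - C) / (n_in * B + C)
        R[k] = np.abs(r) ** 2
    return R

# Ten quartz/air periods (illustrative indices and thicknesses, in metres).
n_stack = [1.96, 1.0] * 10
d_stack = [4.0e-3, 8.0e-3] * 10
f = np.linspace(5e9, 40e9, 400)                  # microwave frequency grid
R = reflectance(n_stack, d_stack, f)
print(f[np.argmax(R)], R.max())                  # a Bragg stop band appears
```

In the full problem each MCP layer gets a frequency- and field-dependent effective index for the two circular polarizations, which is what shifts the stop bands when the static magnetic field is switched on.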
A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS)
Rigi, Amin
2018-01-01
In this paper, a novel approach to detect incipient slip based on the contact area between a transparent silicone medium and different objects using a neuromorphic event-based vision sensor (DAVIS) is proposed. Event-based algorithms are developed to detect incipient slip, slip, stress distribution, and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials, and weights to compare the precision and response time of the proposed approach. The proposed approach is validated by using a high-speed conventional camera (1000 FPS). The results indicate that the sensor can detect incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, therefore adding high noise levels that affected the results significantly. However, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show a high potential for the sensor to be used in manipulation applications, especially in dynamic environments. PMID:29364190
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
Sensor Webs as Virtual Data Systems for Earth Science
NASA Astrophysics Data System (ADS)
Moe, K. L.; Sherwood, R.
2008-05-01
The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.
Foreword to the theme issue on geospatial computer vision
NASA Astrophysics Data System (ADS)
Wegner, Jan Dirk; Tuia, Devis; Yang, Michael; Mallet, Clement
2018-06-01
Geospatial Computer Vision has become one of the most prevalent emerging fields of investigation in Earth Observation in the last few years. In this theme issue, we aim at showcasing a number of works at the interface between remote sensing, photogrammetry, image processing, computer vision and machine learning. In light of recent sensor developments - both from the ground as from above - an unprecedented (and ever growing) quantity of geospatial data is available for tackling challenging and urgent tasks such as environmental monitoring (deforestation, carbon sequestration, climate change mitigation), disaster management, autonomous driving or the monitoring of conflicts. The new bottleneck for serving these applications is the extraction of relevant information from such large amounts of multimodal data. This includes sources, stemming from multiple sensors, that exhibit distinct physical nature of heterogeneous quality, spatial, spectral and temporal resolutions. They are as diverse as multi-/hyperspectral satellite sensors, color cameras on drones, laser scanning devices, existing open land-cover geodatabases and social media. Such core data processing is mandatory so as to generate semantic land-cover maps, accurate detection and trajectories of objects of interest, as well as by-products of superior added-value: georeferenced data, images with enhanced geometric and radiometric qualities, or Digital Surface and Elevation Models.
Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk
2018-01-10
This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments like forests, rural areas, and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped into a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head-mounted vision system are presented.
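The dynamic-programming step can be sketched independently of the network: given a per-pixel trail-probability map, find the connected top-to-bottom path of maximum accumulated probability, allowing a bounded sideways move per row. The NumPy sketch below is a generic formulation of that idea (the shift bound and the random stand-in map are illustrative, and the cost is not necessarily the authors' exact one).

```python
import numpy as np

def optimal_trail(prob, max_shift=2):
    """prob: (H, W) trail-probability map from the segmentation network.
    Returns the column index of the best path for every row, maximizing the
    summed probability of a path that moves at most max_shift columns per row."""
    h, w = prob.shape
    score = prob.astype(np.float64).copy()
    back = np.zeros((h, w), dtype=np.int64)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(0, j - max_shift), min(w, j + max_shift + 1)
            k = lo + int(np.argmax(score[i - 1, lo:hi]))  # best predecessor column
            back[i, j] = k
            score[i, j] += score[i - 1, k]
    path = np.empty(h, dtype=np.int64)
    path[-1] = int(np.argmax(score[-1]))
    for i in range(h - 1, 0, -1):                         # backtrack
        path[i - 1] = back[i, path[i]]
    return path

trail_map = np.random.rand(32, 48)                        # stand-in for DNN output
print(optimal_trail(trail_map))
```

Because the path is forced to be spatially connected, isolated misclassified patches contribute little to the accumulated score, which is how the DP stage cleans up the sub-optimal segmentation map.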
NASA Astrophysics Data System (ADS)
Lauinger, N.
2007-09-01
A better understanding of the color constancy mechanism in human color vision [7] can be reached through analyses of photometric data of all illuminants and patches (Mondrians or other visible objects) involved in visual experiments. In Part I [3] and in [4, 5 and 6] the integration in the human eye of the geometrical-optical imaging hardware and the diffractive-optical hardware has been described and illustrated (Fig.1). This combined hardware represents the main topic of the NAMIROS research project (nano- and micro- 3D gratings for optical sensors) [8] promoted and coordinated by Corrsys 3D Sensors AG. The hardware relevant to (photopic) human color vision can be described as a diffractive or interference-optical correlator that transforms incident light into diffractive-optical RGB data and relates local RGB data to global RGB data in the near-field behind the 'inverted' human retina. The relative differences between local and global RGB interference-optical contrasts become available to the photoreceptors (cones and rods) only after this optical pre-processing.
Remote environmental sensor array system
NASA Astrophysics Data System (ADS)
Hall, Geoffrey G.
This thesis examines the creation of an environmental monitoring system for inhospitable environments. It has been named The Remote Environmental Sensor Array System, or RESA System for short. This thesis covers the development of RESA from its inception, to the design and modeling of the hardware and software required to make it functional. Finally, the actual manufacture and laboratory testing of the finished RESA product is discussed and documented. The RESA System is designed as a cost-effective way to bring sensors and video systems to the underwater environment. It contains a water quality probe with sensors such as dissolved oxygen, pH, temperature, specific conductivity, oxidation-reduction potential and chlorophyll a. In addition, an omni-directional hydrophone is included to detect underwater acoustic signals. It has a colour high-definition camera and a low-light black-and-white camera system, which in turn are coupled to a laser scaling system. Both high-intensity discharge and halogen lighting systems are included to illuminate the video images. The video and laser scaling systems are manoeuvred using pan and tilt units controlled from an underwater computer box. Finally, a sediment profile imager is included to enable profile images of sediment layers to be acquired. A control and manipulation system to control the instruments and move the data across networks is integrated into the underwater system, while a power distribution node provides the correct voltages to power the instruments. Laboratory testing was completed to ensure that the different instruments associated with the RESA performed as designed. This included physical testing of the motorized instruments, calibration of the instruments, benchmark performance testing and system failure exercises.
NASA Astrophysics Data System (ADS)
Prachachet, R.; Samransuksamer, B.; Horprathum, M.; Eiamchai, P.; Limwichean, S.; Chananonnawathorn, C.; Lertvanithphol, T.; Muthitamongkol, P.; Boonruang, S.; Buranasiri, P.
2018-03-01
Omnidirectional anti-reflection nanostructure coating films have attracted enormous attention for the development of optical coatings, lenses, light-emitting diodes, displays and photovoltaics. However, fabricating omnidirectional anti-reflection nanostructure films on glass substrates over large areas remains a challenging topic. In the past two decades, the glancing angle deposition technique for growing well-controlled two- and three-dimensional morphologies has gained significant attention because it is simple, fast, cost-effective and suited to high-volume production. In the present work, an omnidirectional anti-reflection nanostructure coating, namely silicon dioxide (SiO2) nanorods, has been investigated as an optimized high-transparency layer at all light incidence angles. The SiO2 nanorod films of an optimally low refractive index were fabricated by electron beam evaporation with the glancing angle deposition technique. The morphology of the prepared samples was characterized by field-emission scanning electron microscopy (FE-SEM) and high-resolution transmission electron microscopy (HRTEM). The optical transmission and omnidirectional properties of the SiO2 nanorod films were investigated with a UV-Vis-NIR spectrophotometer. The measurements were performed at normal incidence over the full spectral range of 200 - 2000 nm. Angle-dependent transmission was investigated by rotating the specimen, with the incidence angle defined relative to the surface normal of the prepared samples. The morphological characterization showed that when the glancing angle deposition technique was applied, vertically aligned SiO2 nanorods with a partially isolated columnar structure could be formed due to the enhanced shadowing and limited adatom diffusion effects. The average transmission of the vertically aligned SiO2 nanorods was higher than that of the glass substrate reference sample over the visible wavelength range at all incidence angles, owing to the gradual transition in the refractive index profile from air to the nanostructure layer that improves the anti-reflection characteristics.
A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles
NASA Technical Reports Server (NTRS)
Delgado, Frank; Abernathy, Mike
2004-01-01
A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.
NASA Technical Reports Server (NTRS)
Ifju, Peter
2002-01-01
Micro Air Vehicles (MAVs) will be developed for tracking individuals, locating terrorist threats, and delivering remote sensors, for surveillance and chemical/biological agent detection. The tasks are: (1) Develop robust MAV platform capable of carrying sensor payload. (2) Develop fully autonomous capabilities for delivery of sensors to remote and distant locations. The current capabilities and accomplishments are: (1) Operational electric (inaudible) 6-inch MAVs with novel flexible wing, providing superior aerodynamic efficiency and control. (2) Vision-based flight stability and control (from on-board cameras).
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.
2004-01-01
The Sensor Web concept emerged as the number of Earth science satellites began to increase in recent years. The idea, part of a vision for the future of Earth science, was that the sensor systems would be linked in an active way to provide improved forecast capability. This means that a nearly autonomous system would need to be developed to allow the satellites to re-target and deploy assets for particular phenomena or provide on-board processing for real-time data. This talk will describe several elements of the sensor web.
Acoustic Coherent Backscatter Enhancement from Aggregations of Point Scatterers
2015-09-30
and far-field acoustic multiple scattering from two- and now three-dimensional aggregations of omnidirectional point scatterers to determine the... an aggregation of omnidirectional point scatterers [1]. If ψ(r) is the harmonic acoustic pressure field at frequency ω at the point r and ψ0(r) is... scattered field and is given by the sum in (1), N is the number of scatterers, gn is the scattering coefficient of the nth scatterer, and ψn(rn) is the field
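The fragment above truncates the governing relation referred to as equation (1). A hedged reconstruction is given below, assuming the passage describes the standard Foldy self-consistent system for N omnidirectional point scatterers; the exact spherical-wave normalization used in the original report may differ.

```latex
\[
\psi(\mathbf{r}) = \psi_0(\mathbf{r})
  + \sum_{n=1}^{N} g_n\,\psi_n(\mathbf{r}_n)\,
    \frac{e^{ik|\mathbf{r}-\mathbf{r}_n|}}{|\mathbf{r}-\mathbf{r}_n|},
\qquad
\psi_n(\mathbf{r}_n) = \psi_0(\mathbf{r}_n)
  + \sum_{m\neq n} g_m\,\psi_m(\mathbf{r}_m)\,
    \frac{e^{ik|\mathbf{r}_n-\mathbf{r}_m|}}{|\mathbf{r}_n-\mathbf{r}_m|}.
\]
```

In this reading, the second relation is the linear system solved for the exciting fields ψn(rn) before the total field is evaluated from the first.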
Omni-directional L-band antenna for mobile communications
NASA Technical Reports Server (NTRS)
Kim, C. S.; Moldovan, N.; Kijesky, J.
1988-01-01
The principle and design of an L-band omni-directional mobile communication antenna are discussed. The antenna is a circular waveguide aperture with hybrid circuits attached for higher-order mode excitation. It produces two polarized, symmetric split beams in elevation. The circular waveguide is fed by eight probes with a 90 degree phase shift between their inputs. Radiation pattern characteristics are controlled by adjusting the aperture diameter and mode excitation. This antenna satisfies gain requirements as well as withstanding the harsh environment.
Yu, Dongliang; Yin, Min; Lu, Linfeng; Zhang, Hanzhong; Chen, Xiaoyuan; Zhu, Xufei; Che, Jianfei; Li, Dongdong
2015-11-01
High-performance thin-film hydrogenated amorphous silicon solar cells are achieved by combining macroscale 3D tubular substrates and nanoscaled 3D cone-like antireflective films. The tubular geometry delivers a series of advantages for large-scale deployment of photovoltaics, such as omnidirectional performance, easier encapsulation, decreased wind resistance, and easy integration with a second device inside the glass tube.
NASA Astrophysics Data System (ADS)
Huan, Qiang; Miao, Hongchen; Li, Faxin
2018-02-01
Structural health monitoring (SHM) is of great importance for engineering structures as it may detect early degradation and thus avoid loss of life and financial loss. Guided wave based inspection is very useful in SHM due to its capability for long distance and wide range monitoring. The fundamental shear horizontal (SH0) wave based method should be the most promising since SH0 is the unique non-dispersive wave mode in plate-like structures. In this work, a sparse array SHM system based on omnidirectional SH wave piezoelectric transducers (OSH-PTs) was proposed, and a multi-data fusion method was used for defect inspection in a 2 mm thick aluminum plate. Firstly, the performances of three types of OSH-PTs were comprehensively compared, and the thickness-poled d15-mode OSH-PT used in this work was shown to be clearly superior to the other two. Then, the signal processing method and imaging algorithm for this SHM system were presented. Finally, experiments were carried out to examine the performance of the proposed SHM system in defect localization and imaging. Results indicated that this SHM system can locate a through hole as small as 0.12λ (4 mm) in diameter (where λ is the wavelength corresponding to the central operation frequency) at frequencies from 90 to 150 kHz. It can also locate multiple defects accurately based on the baseline subtraction method. This SHM system can cover larger areas with sparse sensors because the adopted SH0 wave is single-mode, non-dispersive, and propagates long distances with small attenuation at low frequencies. Considering its good performance, simple data processing and sparse array, this SH0 wave-based SHM system is expected to greatly promote the application of guided wave inspection.
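The abstract does not detail the imaging algorithm; the sketch below shows a generic delay-and-sum (elliptical back-projection) imager over baseline-subtracted signals, which is one standard way sparse-array guided-wave data are fused. It is illustrative only; the variable names, the dictionary-based data layout, and the use of a single SH0 group velocity c are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

def delay_and_sum_image(residuals, tx_pos, rx_pos, grid_x, grid_y, fs, c):
    """Back-project baseline-subtracted signals onto an image grid.

    residuals: dict mapping (tx_index, rx_index) to a 1-D array holding the
    current-minus-baseline signal for that transducer pair. tx_pos / rx_pos:
    arrays of (x, y) transducer coordinates in metres. fs: sampling rate in
    Hz. c: assumed SH0 group velocity in m/s.
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros_like(X, dtype=float)
    for (i, j), sig in residuals.items():
        env = np.abs(hilbert(sig))                      # signal envelope
        d = (np.hypot(X - tx_pos[i][0], Y - tx_pos[i][1]) +
             np.hypot(X - rx_pos[j][0], Y - rx_pos[j][1]))
        idx = np.clip(np.round(d / c * fs).astype(int), 0, sig.size - 1)
        image += env[idx]                               # coherent pixel sum
    return image / max(len(residuals), 1)
```

Each transmitter-receiver pair contributes the envelope sample whose time of flight matches the transmitter-pixel-receiver path, so a real scatterer shows up where the contributions from many pairs add up.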
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
Images Revealing More Than a Thousand Words
NASA Technical Reports Server (NTRS)
2003-01-01
A unique sensor developed by ProVision Technologies, a NASA Commercial Space Center housed by the Institute for Technology Development, produces hyperspectral images with cutting-edge applications in food safety, skin health, forensics, and anti-terrorism activities. While hyperspectral imaging technology continues to make advances with ProVision Technologies, it has also been transferred to the commercial sector through a spinoff company, Photon Industries, Inc.
Influence of control parameters on the joint tracking performance of a coaxial weld vision system
NASA Technical Reports Server (NTRS)
Gangl, K. J.; Weeks, J. L.
1985-01-01
The first phase of a series of evaluations of a vision-based welding control sensor for the Space Shuttle Main Engine Robotic Welding System is described. The robotic welding system is presently under development at the Marshall Space Flight Center. This evaluation determines the standard control response parameters necessary for proper trajectory of the welding torch along the joint.
2013-05-01
and Sensors Directorate. • Study participants and physicians select treatment: PRK or LASIK. WFG vs. WFO treatment modality is randomized. The...to undergo wavefront-guided (WFG) photorefractive keratectomy (PRK), WFG laser in situ keratomileusis (LASIK), wavefront-optimized (WFO) PRK or WFO...TERMS Military, Refractive Surgery, PRK, LASIK, Night Vision, Wavefront Optimized, Wavefront Guided, Visual Performance, Quality of Vision, Outcomes
Noncontacting Optical Measurement And Inspection Systems
NASA Astrophysics Data System (ADS)
Asher, Jeffrey A.; Jackson, Robert L.
1986-10-01
Product inspection continues to play a growing role in the improvement of quality and reduction of scrap. Recent emphasis on precision measurements and in-process inspection has been a driving force for the development of noncontacting sensors. Noncontacting sensors can provide long-term, unattended use due to the lack of sensor wear. Further, in applications where sensor contact can damage or geometrically change the part to be measured or inspected, noncontacting sensors are the only technical approach available. MTI is involved in the development and sale of noncontacting sensors and custom inspection systems. This paper will review the recent advances in noncontacting sensor development. Machine vision and fiber optic sensor systems are finding a wide variety of industrial inspection applications. This paper will provide detailed examples of several state-of-the-art applications for these noncontacting sensors.
Color rendering based on a plasmon fullerene cavity.
Tsai, Fu-Cheng; Weng, Cheng-Hsi; Chen, Yu Lim; Shih, Wen-Pin; Chang, Pei-Zen
2018-04-16
Fullerene in the plasmon fullerene cavity is utilized to propagate plasmon energy in order to break the confinement of the plasmonic coupling effect, which relies on the influential near-field optical region. It acts as a plasmonic inductor for coupling gold nano-islands to the gold film; the separation distances of the upper and lower layers are longer than in conventional plasmonic cavities. This coupling effect causes the discrete and continuum states to cooperate in a cavity and produces asymmetric line shapes in the spectra, yielding a hybridized resonance. The effect brings about a bright and saturated display film with abundant visible colors. In addition, the reflection spectrum is nearly omnidirectional, shifting by only 5% even when the incident angle changes beyond ± 60°. These advantages allow plasmon fullerene cavities to be applied to reflectors, color filters, visible chromatic sensors, and large-area displays.
Thin randomly aligned hierarchical carbon nanotube arrays as ultrablack metamaterials
NASA Astrophysics Data System (ADS)
De Nicola, Francesco; Hines, Peter; De Crescenzi, Maurizio; Motta, Nunzio
2017-07-01
Ultrablack metamaterials are artificial materials able to harvest all the incident light regardless of wavelength, angle, or polarization. Here, we show the ultrablack properties of randomly aligned hierarchical carbon nanotube arrays with thicknesses below 200 nm. The thin coatings are realized by solution processing and dry-transfer deposition on different substrates. The hierarchical surface morphology of the coatings is biomimetic and provides a large effective area that improves the film optical absorption. Also, such a morphology is responsible for the moth-eye effect, which leads to the omnidirectional and polarization-independent suppression of optical reflection. The films exhibit an emissivity up to 99.36% typical of an ideal black body, resulting in the thinnest ultrablack metamaterial ever reported. Such a material may be exploited for thermal, optical, and optoelectronic devices such as heat sinks, optical shields, solar cells, light and thermal sensors, and light-emitting diodes.
HIEN-LO: An experiment for charge determination of cosmic rays of interplanetary and solar origin
NASA Technical Reports Server (NTRS)
Klecker, B.; Hovestadt, D.; Mason, G. M.; Blake, J. B.; Nicholas, J.
1988-01-01
The experiment is designed to measure the heavy ion environment at low altitude (HIEN-LO) in the energy range 0.3 to 100 MeV/nucleon. In order to cover this wide energy range a complement of three sensors is used. A large area ion drift chamber and a time-of-flight telescope are used to determine the mass and energy of the incoming cosmic rays. A third omnidirectional counter serves as a proton monitor. The analysis of mass, energy and incoming direction in combination with the directional geomagnetic cut-off allows the determination of the ionic charge of the cosmic rays. The ionic charge in this energy range is of particular interest because it provides clues to the origin of these particles and to the plasma conditions at the acceleration site. The experiment is expected to be flown in 1988/1989.
Smart Distributed Sensor Fields: Algorithms for Tactical Sensors
2013-12-23
ranging from detecting, identifying, localizing/tracking interesting events, discarding irrelevant data, to providing actionable intelligence currently requires significant human supervision. Human...view of the overall system. The main idea is to reduce the problem to the relevant data, and then reason intelligently over that data. This process
NASA Astrophysics Data System (ADS)
Ning, Renxia; Bao, Jie; Jiao, Zheng; Xu, Yuan
2015-11-01
Tunable absorption in a graphene metamaterial with a nanodisk structure at near-infrared frequencies was investigated using the finite difference time domain method. The absorption of the nanodisk structure, which consists of Au-MgF2-graphene-Au-polyimide (from bottom to top), can be tuned by the chemical potential of graphene at a given nanodisk diameter. The permittivity of graphene is discussed for different chemical potentials to obtain tunable absorption. It is shown that increasing the chemical potential of graphene leads to a blue shift of the absorption peaks and a decrease in their values. Moreover, dual-band and triple-band absorption can be achieved at the resonance frequencies for normal incidence. Compared with varying the diameter of the nanodisks, the multilayer structure acts as a multi-band absorber, and an omnidirectional absorption at 195.25 THz is insensitive to TE/TM polarization. This omnidirectional, polarization-insensitive absorption may find application in optical communications, for example as an optical absorber, in near-infrared stealth, and in filters.
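The abstract only states that the graphene permittivity is varied with chemical potential. A common way to model this at near-infrared frequencies (assumed here, not stated in the abstract) is the intraband, Drude-like term of the Kubo conductivity, with the sheet conductivity folded into an effective layer permittivity; τ is the carrier relaxation time, μc the chemical potential, T the temperature, and tg the effective graphene thickness, all of which are assumed parameters, and the interband term is omitted.

```latex
\[
\sigma_{\mathrm{intra}}(\omega)=
\frac{i e^{2} k_{B} T}{\pi \hbar^{2}\left(\omega+i/\tau\right)}
\left[\frac{\mu_{c}}{k_{B}T}
      +2\ln\!\left(e^{-\mu_{c}/(k_{B}T)}+1\right)\right],
\qquad
\varepsilon_{g}(\omega)=1+\frac{i\,\sigma(\omega)}{\varepsilon_{0}\,\omega\,t_{g}} .
\]
```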
The use of multisensor data for robotic applications
NASA Technical Reports Server (NTRS)
Abidi, M. A.; Gonzalez, R. C.
1990-01-01
The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system were performed. Specifically, vision-driven techniques were implemented to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system was also equipped with a force/torque sensor that continuously monitored the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the two experiments were completed successfully and autonomously.
Theory research of seam recognition and welding torch pose control based on machine vision
NASA Astrophysics Data System (ADS)
Long, Qiang; Zhai, Peng; Liu, Miao; He, Kai; Wang, Chunyang
2017-03-01
At present, the requirements for welding automation are becoming higher, so a method for extracting welding information with a vision sensor is proposed in this paper, and a simulation in MATLAB has been conducted. Besides, in order to improve the quality of robotic automatic welding, an information retrieval method for welding torch pose control by visual sensor is attempted. Considering the demands of welding technology and engineering practice, the relative coordinate systems and variables are strictly defined, the mathematical model of the welding pose is established, and its feasibility is verified by MATLAB simulation. These works lay a foundation for the development of a welding off-line programming system with high precision and quality.
Saliency in VR: How Do People Explore Virtual Environments?
Sitzmann, Vincent; Serrano, Ana; Pavel, Amy; Agrawala, Maneesh; Gutierrez, Diego; Masia, Belen; Wetzstein, Gordon
2018-04-01
Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-12
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems
NASA Astrophysics Data System (ADS)
Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo
2017-01-01
Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
Estimation of visual maps with a robot network equipped with vision sensors.
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional positions of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the positions of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment.
Engineering workstation: Sensor modeling
NASA Technical Reports Server (NTRS)
Pavel, M; Sweet, B.
1993-01-01
The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.
NASA Astrophysics Data System (ADS)
Yoo, Byungseok; Pines, Darryll J.
2018-05-01
This paper investigates the use of uniaxial comb-shaped Fe-Ga alloy (Galfenol) patches in the development of a Magnetostrictive Phased Array Sensor (MPAS) for the Guided Wave (GW) damage inspection technique. The MPAS consists of six highly-textured Galfenol patches with a <100> preferred orientation and a Hexagonal Magnetic Circuit Device (HMCD). The Galfenol patches individually aligned to distinct azimuthal directions were permanently attached to a thin aluminum plate specimen. The detachable HMCD encloses a biasing magnet and six sensing coils with unique directional sensing preferences, equivalent to the specific orientation of the discrete Galfenol patches. The preliminary experimental tests validated that the GW sensing performance and directional sensitivity of the Galfenol-based sensor were significantly improved by the magnetic shape anisotropy effect on the fabrication of uniaxial comb fingers to a Galfenol disc patch. We employed a series of uniaxial comb-shaped Galfenol patches to form an MPAS with a hexagonal sensor configuration, uniformly arranged within a diameter of 1". The Galfenol MPAS was utilized to identify structural damage simulated by loosening joint bolts used to fasten the plate specimen to a frame structure. We compared the damage detection results of the MPAS with those of a PZT Phased Array Sensor (PPAS) collocated to the back surface of the plate. The directional filtering characteristic of the Galfenol MPAS led to acquiring less complicated GW signals than the PPAS using omnidirectional PZT discs. However, due to the detection limit of the standard hexagonal patterned array, the two array sensors apparently identified only the loosened bolts located along one of the preferred orientations of the array configuration. The use of the fixed number of the Galfenol patches for the MPAS construction constrained the capability of sensing point multiplication of the HMCD by altering its rotational orientation, resulting in such damage detection limitation of the MPAS.
Contribution to the theory of photopic vision: Retinal phenomena
NASA Technical Reports Server (NTRS)
Calvet, H.
1979-01-01
Principles of thermodynamics are applied to the study of the ultramicroscopic anatomy of the inner eye. Concepts introduced and discussed include: the retina as a three-dimensional sensor, light signals as coherent beams in relation to the dimensions of retinal pigments, pigment effects topographed by the conjugated antennas effect, visualizing lights, the autotropic function of hemoglobin and some cytochromes, and reversible structural arrangements during photopic adaptation. A paleoecological diagram is presented which traces the evolution of scotopic vision (primitive system) to photopic vision (secondary system) through the emergence of structures sensitive to the intensity, temperature, and wavelengths of the visible range.
Sensing and Virtual Worlds - A Survey of Research Opportunities
NASA Technical Reports Server (NTRS)
Moore, Dana
2012-01-01
Virtual Worlds (VWs) have been used effectively in live and constructive military training. An area that remains fertile ground for exploration and a new vision involves integrating various traditional and now non-traditional sensors into virtual worlds. In this paper, we will assert that the benefits of this integration are several. First, we maintain that virtual worlds offer improved sensor deployment planning through improved visualization and stimulation of the model, using geo-specific terrain and structure. Secondly, we assert that VWs enhance the mission rehearsal process, and that using a mix of live avatars, non-player characters, and live sensor feeds (e.g. real time meteorology) can help visualization of the area of operations. Finally, tactical operations are improved via better collaboration and integration of real world sensing capabilities, and in most situations, 3D VWs improve the state of the art over current "dots on a map" 2D geospatial visualization. However, several capability gaps preclude a fuller realization of this vision. In this paper, we identify many of these gaps and suggest research directions.
Real-time image processing of TOF range images using a reconfigurable processor system
NASA Astrophysics Data System (ADS)
Hussmann, S.; Knoll, F.; Edeler, T.
2011-07-01
During the last years, Time-of-Flight sensors have had a significant impact on research fields in machine vision. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors providing accurate distance measurements and of camera-based systems recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
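For reference, the per-pixel computation that the 4-phase-shift algorithm performs (before any hardware-specific arctangent approximation such as the one proposed in the paper) can be sketched as follows. The sample ordering convention and the name f_mod for the modulation frequency are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_range(a0, a1, a2, a3, f_mod):
    """Per-pixel range from four phase-stepped correlation samples.

    a0..a3: images sampled at 0, 90, 180 and 270 degrees of the modulation
    signal. The phase offset of the returned light is recovered with the
    arctangent (the time-critical step), then converted to distance; the
    result wraps at the unambiguous range c / (2 * f_mod).
    """
    phase = np.arctan2(a3 - a1, a0 - a2)      # radians in [-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)        # radians in [0, 2*pi)
    return C * phase / (4.0 * np.pi * f_mod)  # metres
```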
Implementation of a robotic flexible assembly system
NASA Technical Reports Server (NTRS)
Benton, Ronald C.
1987-01-01
As part of the Intelligent Task Automation program, a team developed enabling technologies for programmable, sensory controlled manipulation in unstructured environments. These technologies include 2-D/3-D vision sensing and understanding, force sensing and high speed force control, 2.5-D vision alignment and control, and multiple processor architectures. The subsequent design of a flexible, programmable, sensor controlled robotic assembly system for small electromechanical devices is described using these technologies and ongoing implementation and integration efforts. Using vision, the system picks parts dumped randomly in a tray. Using vision and force control, it performs high speed part mating, in-process monitoring/verification of expected results and autonomous recovery from some errors. It is programmed off line with semiautomatic action planning.
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
2011-02-07
Sensor UGVs (SUGV) or Disruptor UGVs, depending on their payload. The SUGVs included vision, GPS/IMU, and LIDAR systems for identifying and tracking...employed by all the MAGICian research groups. Objects of interest were tracked using standard LIDAR and Computer Vision template-based feature...tracking approaches. Mapping was solved through Multi-Agent particle-filter based Simultaneous Localization and Mapping (SLAM). Our system contains
Adaptive ophthalmologic system
Olivier, Scot S.; Thompson, Charles A.; Bauman, Brian J.; Jones, Steve M.; Gavel, Don T.; Awwal, Abdul A.; Eisenbies, Stephen K.; Haney, Steven J.
2007-03-27
A system for improving vision that can diagnose monochromatic aberrations within a subject's eyes, apply the wavefront correction, and then enable the patient to view the results of the correction. The system utilizes a laser for producing a beam of light; a corrector; a wavefront sensor; a testing unit; an optic device for directing the beam of light to the corrector, to the retina, from the retina to the wavefront sensor, and to the testing unit; and a computer operatively connected to the wavefront sensor and the corrector.
Dual-mode lensless imaging device for digital enzyme linked immunosorbent assay
NASA Astrophysics Data System (ADS)
Sasagawa, Kiyotaka; Kim, Soo Heyon; Miyazawa, Kazuya; Takehara, Hironari; Noda, Toshihiko; Tokuda, Takashi; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-03-01
Digital enzyme-linked immunosorbent assay (ELISA) is an ultra-sensitive technology for detecting biomarkers, viruses, and other targets. As in the conventional ELISA technique, a target molecule is bound to an antibody carrying an enzyme through an antigen-antibody reaction. In this technology, a femtoliter droplet chamber array is used as the reaction chambers. Due to the small volume, the concentration of fluorescent product generated by a single enzyme can be sufficient for detection by fluorescence microscopy. In this work, we demonstrate a miniaturized lensless imaging device for digital ELISA using a custom image sensor. The pixel array of the sensor is coated with a 20 μm-thick yellow filter to eliminate excitation light at 470 nm and covered by a fiber optic plate (FOP) to protect the sensor without resolution degradation. The droplet chamber array, formed on a 50 μm-thick glass plate, is placed directly on the FOP. In digital ELISA, microbeads coated with antibody are loaded into the droplet chamber array, and the ratio of fluorescent to non-fluorescent chambers containing microbeads is observed. In fluorescence imaging, the spatial resolution is degraded by spreading through the glass plate because the fluorescence is emitted omnidirectionally. This degradation is compensated by image processing, and a resolution of ~35 μm was achieved. In bright-field imaging, the projected images of the beads under collimated illumination are observed. By varying the incident angle and compositing the images, the microbeads were successfully imaged.
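The abstract notes only that the blur introduced by the glass plate is "compensated by image processing". A minimal sketch of one generic way to do so, Wiener deconvolution with an assumed Gaussian point-spread function, is given below; the actual PSF model and algorithm used by the authors are not specified in the abstract.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Isotropic Gaussian PSF wrapped so its peak sits at index [0, 0]."""
    y = np.fft.fftfreq(shape[0]) * shape[0]
    x = np.fft.fftfreq(shape[1]) * shape[1]
    Y, X = np.meshgrid(y, x, indexing="ij")
    k = np.exp(-(X ** 2 + Y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def wiener_deblur(image, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a known or assumed PSF.

    image: blurred lensless fluorescence frame. psf: point-spread function of
    the same shape, centered at [0, 0] and summing to 1. nsr: assumed
    noise-to-signal power ratio acting as regularization.
    """
    G = np.fft.fft2(image)
    H = np.fft.fft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```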
Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles
Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger
2016-01-01
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption. PMID:26978365
Real-time Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-01-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
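For context, a minimal single-scale Retinex in the log domain looks like the sketch below. The actual LaRC multiscale Retinex with color restoration uses several surround scales and additional gain/offset stages not shown here, so this is an assumption-laden illustration rather than the flight implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1e-6):
    """Generic single-scale Retinex enhancement of a grayscale image.

    The output is the difference between the log of the image and the log of
    a Gaussian-blurred surround, which flattens slow illumination changes and
    boosts local contrast; the result is rescaled to [0, 1] for display.
    """
    img = image.astype(np.float64) + eps
    surround = gaussian_filter(img, sigma=sigma)
    r = np.log(img) - np.log(surround)
    return (r - r.min()) / (r.max() - r.min() + eps)
```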
Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles.
Olivares-Mendez, Miguel Angel; Sanchez-Lopez, Jose Luis; Jimenez, Felipe; Campoy, Pascual; Sajadi-Alamdari, Seyed Amin; Voos, Holger
2016-03-11
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.
Real-time enhanced vision system
NASA Astrophysics Data System (ADS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.
2005-05-01
Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.
Real-time and accurate rail wear measurement method and experimental analysis.
Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun
2014-08-01
When a train is running on uneven or curved rails, it generates violent vibrations on the rails. As a result, the light plane of a single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves three main steps. First, a multi-line structured light vision sensor (which has at least two linear laser projectors) projects stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained based on the mathematical model of the structured light vision sensor. Then, the obtained rail profile is transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF) by taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in the RCF to determine the rail wear. Both real data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.
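The MCF-to-RCF step amounts to a rigid alignment that uses the rail waist as the datum. The sketch below illustrates one way such a transform could be estimated and applied in 2D once corresponding waist points are available, using a standard Procrustes/SVD solution; the paper's own estimation procedure is not detailed in the abstract, and the function and argument names are illustrative.

```python
import numpy as np

def align_profile_to_rcf(profile_mcf, waist_meas, waist_ref):
    """Map a measured rail profile into the standard rail coordinate frame.

    waist_meas / waist_ref: (N, 2) arrays of corresponding points on the
    measured and standard rail-waist curves, used as the datum. A 2D rigid
    transform (rotation R, translation t) is estimated by the SVD-based
    Procrustes solution and then applied to the whole profile, removing the
    tilt of the light plane caused by vehicle vibration.
    """
    mu_m, mu_r = waist_meas.mean(axis=0), waist_ref.mean(axis=0)
    H = (waist_meas - mu_m).T @ (waist_ref - mu_r)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_r - R @ mu_m
    return profile_mcf @ R.T + t
```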
A sensor and video based ontology for activity recognition in smart environments.
Mitchell, D; Morrow, Philip J; Nugent, Chris D
2014-01-01
Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.
Omni-directional selective shielding material based on amorphous glass coated microwires.
Ababei, G; Chiriac, H; David, V; Dafinescu, V; Nica, I
2012-01-01
The shielding effectiveness of an omni-directional selective shielding material based on CoFe glass-coated amorphous wires in the 0.8 GHz-3 GHz microwave frequency range is investigated. The measurements were done in a controlled medium using a TEM cell and in free space using horn antennas, respectively. Experimental results indicate that the composite shielding material can be developed with the desired shielding effectiveness and selective absorption over the microwave frequency range by controlling the number of layers and the length of the microwires.
The Trapped Radiation Handbook. Change 5,
1977-01-21
Omnidirectional flux confidence codes for AE 5 (1975 projected) 4-15; 4-2 Omnidirectional flux confidence codes for AE 6 4-15; 5-1 Excitation ionization...exhibits a slow "secular" variation that is characteristically a fraction of a percent change in intensity per year. This phenomenon is... B_R = 3MRz/(R² + z²)^(5/2), B_z = M(R² - 2z²)/(R² + z²)^(5/2). The magnetic moment (M) of the Earth's field is approximately M ≈ 8.07 × 10^25 gauss cm
Xia, Xinxing; Zheng, Zhenrong; Liu, Xu; Li, Haifeng; Yan, Caijie
2010-09-10
We utilized a high-frame-rate projector, a rotating mirror, and a cylindrical selective-diffusing screen to present a novel three-dimensional (3D) omnidirectional-view display system without the need for any special viewing aids. The display principle and image size are analyzed, and the common display zone is proposed. The viewing zone for one observation place is also studied. The experimental results verify this method, and a vivid color 3D scene with occlusion and smooth parallax is also demonstrated with the system.
3-D Imaging Systems for Agricultural Applications—A Review
Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.
2016-01-01
Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560
Bio-inspired approach for intelligent unattended ground sensors
NASA Astrophysics Data System (ADS)
Hueber, Nicolas; Raymond, Pierre; Hennequin, Christophe; Pichler, Alexander; Perrot, Maxime; Voisin, Philippe; Moeglin, Jean-Pierre
2015-05-01
Improving the surveillance capacity over wide zones requires a set of smart battery-powered Unattended Ground Sensors capable of issuing an alarm to a decision-making center. Only high-level information has to be sent when a relevant suspicious situation occurs. In this paper we propose an innovative bio-inspired approach that mimics the human bi-modal vision mechanism and the parallel processing ability of the human brain. The designed prototype exploits two levels of analysis: a low-level panoramic motion analysis, the peripheral vision, and a high-level event-focused analysis, the foveal vision. By tracking moving objects and fusing multiple criteria (size, speed, trajectory, etc.), the peripheral vision module acts as a fast relevant event detector. The foveal vision module focuses on the detected events to extract more detailed features (texture, color, shape, etc.) in order to improve the recognition efficiency. The implemented recognition core is able to acquire human knowledge and to classify in real-time a huge amount of heterogeneous data thanks to its natively parallel hardware structure. This UGS prototype validates our system approach under laboratory tests. The peripheral analysis module demonstrates a low false alarm rate whereas the foveal vision correctly focuses on the detected events. A parallel FPGA implementation of the recognition core succeeds in fulfilling the embedded application requirements. These results are paving the way for future reconfigurable virtual field agents. By locally processing the data and sending only high-level information, their energy requirements and electromagnetic signature are optimized. Moreover, the embedded Artificial Intelligence core enables these bio-inspired systems to recognize and learn new significant events. By duplicating human expertise in potentially hazardous places, our miniature visual event detector will allow early warning and contribute to better human decision making.
GeoCENS: a geospatial cyberinfrastructure for the world-wide sensor web.
Liang, Steve H L; Huang, Chih-Yuan
2013-10-02
The world-wide sensor web has become a very useful technique for monitoring the physical world at spatial and temporal scales that were previously impossible. Yet we believe that the full potential of the sensor web has thus far not been revealed. In order to harvest the world-wide sensor web's full potential, a geospatial cyberinfrastructure is needed to store, process, and deliver the large amounts of sensor data collected worldwide. In this paper, we first define the issue of the sensor web long tail, followed by our view of the world-wide sensor web architecture. Then, we introduce the Geospatial Cyberinfrastructure for Environmental Sensing (GeoCENS) architecture and explain each of its components. Finally, through demonstrations of three real-world sensor web applications powered by GeoCENS, we argue that the GeoCENS architecture can successfully address the sensor web long tail issue and consequently realize the world-wide sensor web vision. PMID:24152921
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition and tracking. In many applications, complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving the corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is therefore not surprising that, when solving computer vision tasks, we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for applied tasks such as the processing and analysis of visual information, and for specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes reinforcement learning and its use for solving computer vision problems.
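For readers unfamiliar with the learning mechanism the abstract refers to, a minimal tabular Q-learning sketch is given below. The environment interface (`reset`, `step`, `actions`) is hypothetical; in a vision setting the discrete states could be the output of a perception module and the actions robot commands.

```python
# Minimal tabular Q-learning sketch (illustrative, not from the paper).
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)                       # Q[(state, action)] values
    for _ in range(episodes):
        s = env.reset()                          # hypothetical env interface
        done = False
        while not done:
            if random.random() < eps:            # epsilon-greedy exploration
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```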
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image-grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Design and testing of a dual-band enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.
Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations
NASA Astrophysics Data System (ADS)
Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.
2016-04-01
This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for Indian regional transport aircraft to enhance all-weather operational capabilities with improvements in safety and pilot Situation Awareness (SA). A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle at an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with test-vehicle inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on the CSIR-NAL research aircraft HANSA in a Degraded Visual Environment (DVE).
Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base
NASA Astrophysics Data System (ADS)
Guo, Xiaoting; Sun, Changku; Wang, Peng; Huang, Lu
2018-01-01
When a MEMS (Micro-Electro-Mechanical Systems) gyroscope is used to determine the relative attitude between objects on a moving base and the base reference system, the gyroscope also senses the motion of the base itself, which must be removed. Our strategy is to add an auxiliary gyroscope attached to the reference system: the master gyroscope senses the total motion, and the auxiliary gyroscope senses the motion of the moving base. Using a generalized difference method, the relative attitude in a non-inertial frame can be determined from the dual gyroscopes. With the vision sensor suppressing the accumulative drift of the MEMS gyroscope, a vision and dual MEMS gyroscope integrated system is formed. Coordinate system definitions and spatial transforms are applied to fuse inertial and visual data from different coordinate systems, and a nonlinear filter, the Cubature Kalman filter, is used to fuse the slow visual data with the fast inertial data. A practical experimental setup was built and used to validate the feasibility and effectiveness of the proposed attitude determination system in the non-inertial frame on the moving base.
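A toy one-axis illustration of the "difference then correct drift" idea follows. The master gyro rate minus the base gyro rate is integrated to a relative angle, and occasional vision measurements pull the estimate back toward truth. The paper fuses full 3-D attitude with a Cubature Kalman filter; this complementary-filter-style sketch, with assumed sample rate and gain, is only a stand-in for that approach.

```python
# One-axis dual-gyroscope relative-attitude sketch with vision drift correction.
def relative_attitude(master_rates, base_rates, vision_angles, dt=0.01, k=0.02):
    """master_rates/base_rates: rad/s samples; vision_angles: {sample_index: rad}."""
    angle = 0.0
    estimates = []
    for i, (wm, wb) in enumerate(zip(master_rates, base_rates)):
        angle += (wm - wb) * dt                 # integrate relative angular rate
        z = vision_angles.get(i)                # slow vision update, may be absent
        if z is not None:
            angle += k * (z - angle)            # correct accumulated gyro drift
        estimates.append(angle)
    return estimates

# Example call with hypothetical data sampled at 100 Hz, vision fix every 50 samples:
# est = relative_attitude(wm_list, wb_list, {50: 0.12, 100: 0.25}, dt=0.01)
```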
Vision requirements for Space Station applications
NASA Technical Reports Server (NTRS)
Crouse, K. R.
1985-01-01
Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnosis of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television and IR sensors, advanced pattern recognition programs feeding on data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.
Elphic, Richard C.; Feldman, William C.; Funsten, Herbert O.; Prettyman, Thomas H.
2010-01-01
Orbital neutron spectroscopy has become a standard technique for measuring planetary surface compositions from orbit. While this technique has led to important discoveries, such as the deposits of hydrogen at the Moon and Mars, a limitation is its poor spatial resolution. For omni-directional neutron sensors, spatial resolutions are 1-1.5 times the spacecraft's altitude above the planetary surface (or 40-600 km for typical orbital altitudes). Neutron sensors with enhanced spatial resolution have been proposed, and one with a collimated field of view is scheduled to fly on a mission to measure lunar polar hydrogen. No quantitative studies or analyses have been published that evaluate in detail the detection and sensitivity limits of spatially resolved neutron measurements. Here, we describe two complementary techniques for evaluating the hydrogen sensitivity of spatially resolved neutron sensors: an analytic, closed-form expression that has been validated with Lunar Prospector neutron data, and a three-dimensional modeling technique. The analytic technique, called the Spatially resolved Neutron Analytic Sensitivity Approximation (SNASA), provides a straightforward method to evaluate spatially resolved neutron data from existing instruments as well as to plan for future mission scenarios. We conclude that the existing detector, the Lunar Exploration Neutron Detector (LEND), scheduled to launch on the Lunar Reconnaissance Orbiter, will have hydrogen sensitivities that are over an order of magnitude poorer than previously estimated. We further conclude that a sensor with a geometric factor of ~100 cm² sr (compared to the LEND geometric factor of ~10.9 cm² sr) could make substantially improved measurements of the lunar polar hydrogen spatial distribution. Key Words: Planetary instrumentation; Planetary science; Moon; Spacecraft experiments; Hydrogen. Astrobiology 10, 183-200. PMID:20298147
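The numbers quoted above can be turned into a back-of-the-envelope check, assuming (as is standard for counting detectors) that footprint scales with altitude and that statistical precision improves roughly as the square root of the geometric factor. The 50 km altitude is an assumed example, not a value from the paper.

```python
# Rough arithmetic implied by the abstract; the altitude and sqrt scaling are assumptions.
altitude_km = 50.0                       # assumed mapping altitude
footprint_km = tuple(f * altitude_km for f in (1.0, 1.5))
print("omnidirectional footprint:", footprint_km, "km")   # 50-75 km

g_lend, g_proposed = 10.9, 100.0         # geometric factors in cm^2 sr (from abstract)
count_ratio = g_proposed / g_lend        # ~9x more counts per unit time
precision_gain = count_ratio ** 0.5      # ~3x better counting statistics
print(f"count-rate ratio ~{count_ratio:.1f}x, precision gain ~{precision_gain:.1f}x")
```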
Evaluation of novel technologies for the miniaturization of flash imaging lidar
NASA Astrophysics Data System (ADS)
Mitev, V.; Pollini, A.; Haesler, J.; Perenzoni, D.; Stoppa, D.; Kolleck, Christian; Chapuy, M.; Kervendal, E.; Pereira do Carmo, João.
2017-11-01
Planetary exploration constitutes one of the main components of European space activities. Missions to Mars, the Moon, and asteroids are foreseen, where it is assumed that human missions will be preceded by robotic exploration flights. 3D vision is recognised as a key enabling technology for the relative proximity navigation of spacecraft, and imaging LiDAR is one of the best candidates for such a 3D vision sensor.
Smart sensing surveillance system
NASA Astrophysics Data System (ADS)
Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen
2010-04-01
An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design to raise awareness levels, improve the performance of the system and adapt to a changing scenario and environment. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide a 24/7 and all weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from near omni-directions. These distributed sensor nodes can cooperatively work to send immediate security information when new objects appear. When the new objects are detected, the S4 will smartly select the available node with a Pan- Tilt- Zoom- (PTZ) Electro-Optics EO/IR camera to track the objects and capture associated imagery. The S4 provides applicable advanced on-board digital image processing capabilities to detect and track the specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/ Low Probability of Detect) wireless mesh network using Ultra-wide band (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and capability to relay network information, communicate and pass situational awareness and messages. The Service Oriented Architecture of S4 enables remote applications to interact with the S4 network and use the specific presentation methods. In addition, the S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards to efficiently discover, access, use, and control heterogeneous sensors and their metadata. These S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. The S4 system is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e. one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done till date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
NASA Astrophysics Data System (ADS)
Hoefflinger, Bernd
Silicon charge-coupled-device (CCD) imagers have been, and remain, a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies because of both their smaller size and their higher speed.
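The "eye-like" logarithmic response mentioned above can be sketched numerically: if the pixel output grows with the logarithm of photocurrent, equal output steps correspond to equal illumination ratios, which is why contrast sensitivity stays constant across the range. The gain constant and dark-level reference below are illustrative, not values for any specific sensor.

```python
# Logarithmic pixel response sketch; v_gain and i_dark are assumed values.
import math

def log_pixel_output(photocurrent_a, i_dark=1e-14, v_gain=0.06):
    """Output voltage proportional to log of photocurrent (amps)."""
    return v_gain * math.log10(photocurrent_a / i_dark)

low, high = 1e-13, 1e-7                                   # six decades of illumination
print(log_pixel_output(high) - log_pixel_output(low))     # ~0.36 V total output swing
print(f"dynamic range ~ {high / low:.0e} : 1")            # ~1,000,000 : 1
```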
Rueckauer, Bodo; Delbruck, Tobi
2016-01-01
In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
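The gyro-based ground truth mentioned in this abstract relies on the standard pinhole-camera result that, for a purely rotating camera, the image motion field depends only on the body rates and the pixel position, not on scene depth. The sketch below writes that flow field in normalized image coordinates; sign conventions depend on the chosen axis definitions and the intrinsics are assumed, so this illustrates the idea rather than the DAVIS/jAER implementation.

```python
# Ground-truth optical flow for a purely rotating camera (illustrative sketch).
def rotational_flow(px, py, wx, wy, wz, fx, fy, cx, cy):
    """Flow (pixels/s) at pixel (px, py) for body rates (wx, wy, wz) in rad/s."""
    x = (px - cx) / fx                 # normalized image coordinates
    y = (py - cy) / fy
    u = wx * x * y - wy * (1.0 + x * x) + wz * y
    v = wx * (1.0 + y * y) - wy * x * y - wz * x
    return u * fx, v * fy              # back to pixel units

# Example for a 240 x 180 DAVIS-like sensor with assumed intrinsics:
print(rotational_flow(200, 60, 0.0, 0.5, 0.0, fx=200, fy=200, cx=120, cy=90))
```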
Development of Sic Gas Sensor Systems
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Neudeck, P. G.; Okojie, R. S.; Beheim, G. M.; Thomas, V.; Chen, L.; Lukco, D.; Liu, C. C.; Ward, B.; Makel, D.
2002-01-01
Silicon carbide (SiC) based gas sensors have significant potential to address the gas sensing needs of aerospace applications such as emission monitoring, fuel leak detection, and fire detection. However, in order to reach that potential, a range of technical challenges must be overcome. These challenges go beyond the development of the basic sensor itself and include the need for viable enabling technologies to make a complete gas sensor system: electrical contacts, packaging, and transfer of information from the sensor to the outside world. This paper reviews the status at NASA Glenn Research Center of SiC Schottky diode gas sensor development as well as that of enabling technologies supporting SiC gas sensor system implementation. A vision of a complete high temperature microfabricated SiC gas sensor system is proposed. In the long-term, it is believed that improvements in the SiC semiconductor material itself could have a dramatic effect on the performance of SiC gas sensor systems.
Robust and efficient method for matching features in omnidirectional images
NASA Astrophysics Data System (ADS)
Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan
2018-04-01
Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct a distortion-invariant descriptor. TPBRIEF enables keypoint detection and feature description directly on the original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in the grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate multimodality data acquisition, and to address fundamental research problems of representation and invariant description of 3D data, human motion modeling, and ... The support for the acquisition of such research instrumentation has significantly facilitated our current and future research and education ...
A Multiple Sensor Machine Vision System Technology for the Hardwood
Richard W. Conners; D.Earl Kline; Philip A. Araman
1995-01-01
For the last few years the authors have been extolling the virtues of a multiple sensor approach to hardwood defect detection. Since 1989 the authors have actively been trying to develop such a system. This paper details some of the successes and failures that have been experienced to date. It also discusses what remains to be done and gives time lines for the...
Always-on low-power optical system for skin-based touchless machine control.
Lecca, Michela; Gottardi, Massimo; Farella, Elisabetta; Milosevic, Bojan
2016-06-01
Embedded vision systems are smart, energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding observed world. Thanks to these capabilities, embedded vision systems attract more and more interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect human skin under various illumination conditions. We employ the presented sensor as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip, reducing the power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If it appears to be in the desired proximity range, the system detects the interaction and switches the connected appliances on or off. The experimental validation of the proposed system on a prototype shows that processing both distance and color remarkably improves the performance of the two separate components. This makes the system a promising tool for energy-efficient, touchless control of machines.
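A minimal sketch of the rg-chromaticity skin test combined with a proximity gate is shown below. The skin bounds, distance range, and example values are illustrative assumptions, not the trained classifier or calibrated thresholds described in the paper.

```python
# rg-chromaticity skin test with proximity gating (illustrative thresholds).
def to_rg(r, g, b):
    s = r + g + b
    return (0.0, 0.0) if s == 0 else (r / s, g / s)

def is_skin(r, g, b, r_range=(0.36, 0.55), g_range=(0.26, 0.37)):
    rn, gn = to_rg(r, g, b)
    return r_range[0] <= rn <= r_range[1] and g_range[0] <= gn <= g_range[1]

def touchless_switch(rgb_sample, distance_mm, max_range_mm=150):
    """Trigger only when a skin-colored object is within the proximity range."""
    return is_skin(*rgb_sample) and distance_mm <= max_range_mm

print(touchless_switch((180, 120, 95), 80))   # likely True for a hand held close by
```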
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming of the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
Detection of Special Operations Forces Using Night Vision Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, C.M.
2001-10-22
Night vision devices, such as image intensifiers and infrared imagers, are readily available to a host of nations, organizations, and individuals through international commerce. Once the trademark of special operations units, these devices are widely advertised to "turn night into day". In truth, they cannot accomplish this formidable task, but they do offer impressive enhancement of vision in limited light scenarios through electronically generated images. Image intensifiers and infrared imagers are both electronic devices for enhancing vision in the dark. However, each is based upon a totally different physical phenomenon. Image intensifiers amplify the available light energy whereas infrared imagers detect the thermal energy radiated from all objects. Because of this, each device operates from energy which is present in a different portion of the electromagnetic spectrum. This leads to differences in the ability of each device to detect and/or identify objects. This report is a compilation of the available information on both state-of-the-art image intensifiers and infrared imagers. Image intensifiers developed in the United States, as well as some foreign-made image intensifiers, are discussed. Image intensifiers are categorized according to their spectral response and sensitivity using the nomenclature of GEN I, GEN II, and GEN III. As the first generation of image intensifiers, GEN I, were large and of limited performance, this report will deal with only GEN II and GEN III equipment. Infrared imagers are generally categorized according to their spectral response, sensor materials, and related sensor operating temperature using the nomenclature Medium Wavelength Infrared (MWIR) Cooled and Long Wavelength Infrared (LWIR) Uncooled. MWIR Cooled refers to infrared imagers which operate in the 3 to 5 μm wavelength electromagnetic spectral region and require either mechanical or thermoelectric coolers to keep the sensors operating at 77 K. LWIR Uncooled refers to infrared imagers which operate in the 8 to 12 μm wavelength electromagnetic spectral region and do not require cooling below room temperature. Both commercial and military infrared sensors of these two types are discussed.
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
A new type of artificial structure to achieve broadband omnidirectional acoustic absorption
NASA Astrophysics Data System (ADS)
Zheng, Li-Yang; Wu, Ying; Zhang, Xiao-Liu; Ni, Xu; Chen, Ze-Guo; Lu, Ming-Hui; Chen, Yan-Feng
2013-10-01
We present a design for a two-dimensional omnidirectional acoustic absorber that can achieve 98.6% absorption of acoustic waves in water, forming an effective acoustic black hole. This artificial black hole consists of an absorptive core coated with layers of periodically distributed polymer cylinders embedded in water. Effective medium theory describes the response of the coating layers to the acoustic waves. The polymer parameters can be adjusted, allowing practical fabrication of the absorber. Since the proposed structure does not rely on resonances, it is applicable to broad bandwidths. The design might be extended to a variety of applications.
Microphone directionality, pre-emphasis filter, and wind noise in cochlear implants.
Chung, King; McKibben, Nicholas
2011-10-01
Wind noise can be a nuisance or a debilitating masker for cochlear implant users in outdoor environments. Previous studies indicated that wind noise at the microphone/hearing aid output had high levels of low-frequency energy and the amount of noise generated is related to the microphone directionality. Currently, cochlear implants only offer either directional microphones or omnidirectional microphones for users at-large. As all cochlear implants utilize pre-emphasis filters to reduce low-frequency energy before the signal is encoded, effective wind noise reduction algorithms for hearing aids might not be applicable for cochlear implants. The purposes of this study were to investigate the effect of microphone directionality on speech recognition and perceived sound quality of cochlear implant users in wind noise and to derive effective wind noise reduction strategies for cochlear implants. A repeated-measure design was used to examine the effects of spectral and temporal masking created by wind noise recorded through directional and omnidirectional microphones and the effects of pre-emphasis filters on cochlear implant performance. A digital hearing aid was programmed to have linear amplification and relatively flat in-situ frequency responses for the directional and omnidirectional modes. The hearing aid output was then recorded from 0 to 360° at flow velocities of 4.5 and 13.5 m/sec in a quiet wind tunnel. Sixteen postlingually deafened adult cochlear implant listeners who reported to be able to communicate on the phone with friends and family without text messages participated in the study. Cochlear implant users listened to speech in wind noise recorded at locations that the directional and omnidirectional microphones yielded the lowest noise levels. Cochlear implant listeners repeated the sentences and rated the sound quality of the testing materials. Spectral and temporal characteristics of flow noise, as well as speech and/or noise characteristics before and after the pre-emphasis filter, were analyzed. Correlation coefficients between speech recognition scores and crest factors of wind noise before and after pre-emphasis filtering were also calculated. Listeners obtained higher scores using the omnidirectional than the directional microphone mode at 13.5 m/sec, but they obtained similar speech recognition scores for the two microphone modes at 4.5 m/sec. Higher correlation coefficients were obtained between speech recognition scores and crest factors of wind noise after pre-emphasis filtering rather than before filtering. Cochlear implant users would benefit from both directional and omnidirectional microphones to reduce far-field background noise and near-field wind noise. Automatic microphone switching algorithms can be more effective if the incoming signal were analyzed after pre-emphasis filters for microphone switching decisions. American Academy of Audiology.
Welding technology transfer task/laser based weld joint tracking system for compressor girth welds
NASA Technical Reports Server (NTRS)
Looney, Alan
1991-01-01
Sensors to control and monitor welding operations are currently being developed at Marshall Space Flight Center. The laser based weld bead profiler/torch rotation sensor was modified to provide a weld joint tracking system for compressor girth welds. The tracking system features a precision laser based vision sensor, automated two-axis machine motion, and an industrial PC controller. The system benefits are elimination of weld repairs caused by joint tracking errors which reduces manufacturing costs and increases production output, simplification of tooling, and free costly manufacturing floor space.
Using the Optical Mouse Sensor as a Two-Euro Counterfeit Coin Detector
Tresanchez, Marcel; Pallejà, Tomàs; Teixidó, Mercè; Palacín, Jordi
2009-01-01
In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short distance image acquisition capabilities of the optical mouse sensor where partial images of the coin under analysis are compared with some partial reference coin images for matching. Results show that, using only the vision sense, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user. PMID:22399987
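The matching step underlying this detector can be illustrated with a small normalized cross-correlation sketch: a patch grabbed by the mouse sensor is compared against stored reference patches of a genuine coin. The patch size, acceptance threshold, and random stand-in data are assumptions for illustration, not the paper's tuned matcher.

```python
# Partial-image matching via normalized cross-correlation (illustrative sketch).
import numpy as np

def ncc(patch, ref):
    """Normalized cross-correlation of two equally sized grayscale patches."""
    a = patch - patch.mean()
    b = ref - ref.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def looks_genuine(patch, reference_patches, threshold=0.7):
    return max(ncc(patch, ref) for ref in reference_patches) >= threshold

# Example with random 18x18 arrays standing in for mouse-sensor frames:
rng = np.random.default_rng(0)
refs = [rng.random((18, 18)) for _ in range(3)]
print(looks_genuine(refs[0] + 0.05 * rng.random((18, 18)), refs))   # True
```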
Commercial Sensory Survey Radiation Testing Progress Report
NASA Technical Reports Server (NTRS)
Becker, Heidi N.; Dolphic, Michael D.; Thorbourn, Dennis O.; Alexander, James W.; Salomon, Phil M.
2008-01-01
The NASA Electronic Parts and Packaging (NEPP) Program Sensor Technology Commercial Sensor Survey task is geared toward benefiting future NASA space missions with low-cost, short-duty-cycle, visible imaging needs. Such applications could include imaging for educational outreach purposes or short surveys of spacecraft, planetary, or lunar surfaces. Under the task, inexpensive commercial grade CMOS sensors were surveyed in fiscal year 2007 (FY07) and three sensors were selected for total ionizing dose (TID) and displacement damage dose (DDD) tolerance testing. The selected sensors had to meet selection criteria chosen to support small, low-mass cameras that produce good resolution color images. These criteria are discussed in detail in [1]. This document discusses the progress of radiation testing on the Micron and OmniVision sensors selected in FY07 for radiation tolerance testing.
Qualifications of drivers - vision and diabetes
DOT National Transportation Integrated Search
2011-01-01
San Francisco UPA projects focus on reducing traffic congestion related to parking in downtown San Francisco. Intelligent transportation systems (ITS) technologies underlie many of the San Francisco UPA projects, including parking and roadway sensors...
Automated Data Processing as an AI Planning Problem
NASA Technical Reports Server (NTRS)
Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr
2003-01-01
NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and the approach we have adopted.
Spaceborne GPS: Current Status and Future Visions
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn
1998-01-01
The Global Positioning System (GPS), developed by the Department of Defense is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life cycle cost, in power, and in mass can be realized by exploiting GPS technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor--it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles into one package. As a result, a reduced spacecraft sensor complement can be employed and significant reductions in space vehicle operations cost can be realized through enhanced on-board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-2000 GPS flight experiments, and the spaceborne GPS team's vision for the future.
Landmark-aided localization for air vehicles using learned object detectors
NASA Astrophysics Data System (ADS)
DeAngelo, Mark Patrick
This research presents two methods to localize an aircraft without GPS using fixed landmarks observed from an optical sensor. Onboard absolute localization is useful for vehicle navigation free from an external network. The objective is to achieve practical navigation performance using available autopilot hardware and a downward pointing camera. The first method uses computer vision cascade object detectors, which are trained to detect predetermined, distinct landmarks prior to a flight. The first method also concurrently explores aircraft localization using roads between landmark updates. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement updates when landmarks are detected. The sensor measurements and landmark coordinates extracted from the aircraft's camera images are combined into an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities. The second method uses computer vision object detectors to detect abundant generic landmarks, categorized as buildings, fields, trees, and road intersections, from aerial perspectives. Various landmark attributes and spatial relationships to other landmarks are used to help associate observed landmarks with reference landmarks. The computer vision algorithms automatically extract reference landmarks from maps, which are processed offline before a flight. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement corrections by processing aerial photos with similar generic landmark detection techniques. The method also combines sensor measurements and landmark coordinates into an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
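A minimal version of the monocular visual-odometry idea described here is to track features between consecutive downward-looking frames and take a robust summary (the median) of their image displacement as the odometry increment. The sketch below uses OpenCV feature tracking; conversion from pixels to ground distance (which needs height above ground and camera intrinsics) is assumed and not shown.

```python
# Monocular frame-to-frame displacement via sparse optical flow (illustrative sketch).
import cv2
import numpy as np

def frame_displacement(prev_gray, gray):
    """Median pixel shift between two consecutive grayscale frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    return np.median(flow, axis=0)       # robust to a few bad tracks

# Accumulating these increments (scaled by altitude / focal length) gives a
# dead-reckoned position estimate to fuse with inertial sensors.
```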
Vision-based semi-autonomous outdoor robot system to reduce soldier workload
NASA Astrophysics Data System (ADS)
Richardson, Al; Rodgers, Michael H.
2001-09-01
Sensors and computational capability have not reached the point of enabling small robots to navigate autonomously in unconstrained outdoor environments at tactically useful speeds. This problem is greatly reduced, however, if a soldier can lead the robot through terrain that he knows it can traverse. An application of this concept is a small pack-mule robot that follows a foot soldier over outdoor terrain. The soldier would be responsible for avoiding situations beyond the robot's limitations when they are encountered. Having learned the route, the robot could autonomously retrace the path carrying supplies and munitions. This would greatly reduce the soldier's workload under normal conditions. This paper presents a description of a developmental robot sensor system using low-cost commercial 3D vision and inertial sensors to address this application. The robot moves at fast walking speed and requires only short-range perception to accomplish its task. 3D-feature information is recorded on a composite route map that the robot uses to negotiate its local environment and retrace the path taught by the soldier leader.
DARPA super resolution vision system (SRVS) robust turbulence data collection and analysis
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Thompson, Roger; Tofsted, David; D'Arcy, Sean
2014-05-01
Atmospheric turbulence degrades the range performance of military imaging systems, specifically those intended for long-range, ground-to-ground target identification. The recent Defense Advanced Research Projects Agency (DARPA) Super Resolution Vision System (SRVS) program developed novel post-processing system components to mitigate turbulence effects on visible and infrared sensor systems. As part of the program, the US Army RDECOM CERDEC NVESD and the US Army Research Laboratory Computational & Information Sciences Directorate (CISD) collaborated on a field collection and atmospheric characterization of a two-handed weapon identification dataset through a diurnal cycle for a variety of ranges and sensor systems. The robust dataset is useful in developing new models and simulations of turbulence, as well as for providing a standard baseline for comparison of sensor systems in the presence of turbulence degradation and mitigation. In this paper, we describe the field collection and atmospheric characterization and present the robust dataset to the defense, sensing, and security community. In addition, we present an expanded model validation of turbulence degradation using the field-collected video sequences.
Wireless sensor systems for sense/decide/act/communicate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Cushner, Adam; Baker, James A.
2003-12-01
After 9/11, the United States (U.S.) was suddenly pushed into challenging situations it could no longer ignore as a simple spectator. The War on Terrorism (WoT) was suddenly ignited and no one knows when this war will end. While the government is exploring many existing and potential technologies, the area of wireless sensor networks (WSN) has emerged as a foundation for establishing future national security. Unlike other technologies, WSN could provide the virtual presence capabilities needed for precision awareness and response in military, intelligence, and homeland security applications. The Advance Concept Group (ACG) vision of a Sense/Decide/Act/Communicate (SDAC) sensor system is an instantiation of the WSN concept that takes a 'systems of systems' view. Each sensing node will exhibit the ability to: Sense the environment around it, Decide as a collective what the situation of its environment is, Act in an intelligent and coordinated manner in response to this situational determination, and Communicate its actions amongst the other nodes and to a human command. This LDRD report provides a review of the research and development done to bring the SDAC vision closer to reality.
Information theory analysis of sensor-array imaging systems for computer vision
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.
1983-01-01
Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density by up to about 30 percent at high SNRs.
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step for developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectories, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities such as forward falls, backward falls, and sideways falls from normal activities. PMID:22368486
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
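A common way to realize the multiresolution fusion described above is a Laplacian-pyramid merge: each image is decomposed into detail levels, the stronger detail is kept at every level, and the result is rebuilt. The sketch below follows that textbook recipe with OpenCV; the "max absolute detail" rule, pyramid depth, and averaging of the coarsest level are conventional choices offered as an assumption-laden illustration, not NASA's specific fusion method.

```python
# Laplacian-pyramid fusion of two registered grayscale images (illustrative sketch).
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    g = [img.astype(np.float32)]
    for _ in range(levels):
        g.append(cv2.pyrDown(g[-1]))
    lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1]) for i in range(levels)]
    return lap + [g[-1]]                      # detail levels + coarsest residual

def fuse(img_a, img_b, levels=4):
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))     # average the coarsest level
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage (assumed inputs): fused = fuse(mmw_image, rendered_terrain_image)
```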
Vision-Based Sensor for Early Detection of Periodical Defects in Web Materials
Bulnes, Francisco G.; Usamentiaga, Rubén; García, Daniel F.; Molleda, Julio
2012-01-01
During the production of web materials such as plastic, textiles or metal, where there are rolls involved in the production process, periodically generated defects may occur. If one of these rolls has some kind of flaw, it can generate a defect on the material surface each time it completes a full turn. This can cause the generation of a large number of surface defects, greatly degrading the product quality. For this reason, it is necessary to have a system that can detect these situations as soon as possible. This paper presents a vision-based sensor for the early detection of this kind of defects. It can be adapted to be used in the inspection of any web material, even when the input data are very noisy. To assess its performance, the sensor system was used to detect periodical defects in hot steel strips. A total of 36 strips produced in ArcelorMittal Avilés factory were used for this purpose, 18 to determine the optimal configuration of the proposed sensor using a full-factorial experimental design and the other 18 to verify the validity of the results. Next, they were compared with those provided by a commercial system used worldwide, showing a clear improvement. PMID:23112629
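The defining signature of a roll-induced defect is that flaw positions along the strip repeat at the roll circumference. One simple way to flag this, sketched below, is to reduce defect positions modulo the circumference and check whether they cluster at a common phase. The tolerance, vote threshold, and example positions are illustrative assumptions, not the tuned values from the paper's factorial experiments.

```python
# Flag roll-periodic defects by clustering positions modulo the roll circumference.
def periodic_defect_score(defect_positions_mm, roll_circumference_mm, tol_mm=10.0):
    phases = sorted(p % roll_circumference_mm for p in defect_positions_mm)
    best = 0
    for ref in phases:
        votes = sum(1 for q in phases
                    if min(abs(q - ref), roll_circumference_mm - abs(q - ref)) <= tol_mm)
        best = max(best, votes)
    return best / len(phases) if phases else 0.0   # fraction sharing one phase

positions = [410, 1670, 2930, 4190, 5455, 7000]     # mm along the strip (made-up data)
print(periodic_defect_score(positions, roll_circumference_mm=1260))   # ~0.83 -> suspect roll
```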
Single-Photon Detectors for Time-of-Flight Range Imaging
NASA Astrophysics Data System (ADS)
Stoppa, David; Simoni, Andrea
We live in a three-dimensional (3D) world and, thanks to the stereoscopic vision provided by our two eyes in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance measuring techniques and detection systems available, this chapter will treat only the emerging niche of solid-state, scannerless systems based on the TOF principle and using detectors with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques will be described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurement will be analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective on near-future developments of SPAD-TOF sensors is given.
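The basic relation behind direct time-of-flight ranging is distance = c·t/2, where t is the measured round-trip time of the light pulse. The short sketch below works through the numbers; the example timing values are illustrative of SPAD-based direct TOF and are not taken from a specific sensor in the chapter.

```python
# Direct time-of-flight range equation: d = c * t / 2 (t is the round-trip time).
C = 299_792_458.0                       # speed of light, m/s

def tof_distance(round_trip_s):
    return C * round_trip_s / 2.0

print(tof_distance(6.67e-9))            # ~1.0 m for a ~6.67 ns round trip
print(tof_distance(100e-12) * 1000)     # a 100 ps timing error -> ~15 mm range error
```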
Machine vision guided sensor positioning system for leaf temperature assessment
NASA Technical Reports Server (NTRS)
Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)
2001-01-01
A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
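The maximum enclosed circle mentioned above can be found, for a segmented leaf mask, with a distance transform: the largest distance to the background gives the radius, and its location gives the center. The sketch below assumes a binary leaf mask is already available from segmentation; the synthetic ellipse only stands in for a real leaf blob.

```python
# Largest inscribed circle in a binary leaf mask via distance transform (sketch).
import cv2
import numpy as np

def max_enclosed_circle(leaf_mask):
    """leaf_mask: uint8 binary image, 255 inside the leaf region, 0 elsewhere."""
    dist = cv2.distanceTransform(leaf_mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)   # max distance value and its pixel
    return center, radius                        # (x, y) center and radius in pixels

mask = np.zeros((240, 320), np.uint8)
cv2.ellipse(mask, (160, 120), (90, 50), 15, 0, 360, 255, -1)  # stand-in leaf blob
print(max_enclosed_circle(mask))
```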
Forest Connectivity Regions of Canada Using Circuit Theory and Image Analysis
Pelletier, David; Lapointe, Marc-Élie; Wulder, Michael A.; White, Joanne C.; Cardille, Jeffrey A.
2017-01-01
Ecological processes are increasingly well understood over smaller areas, yet information regarding interconnections and the hierarchical nature of ecosystems remains less studied and understood. Information on connectivity over large areas with high resolution source information provides for both local detail and regional context. The emerging capacity to apply circuit theory to create maps of omnidirectional connectivity provides an opportunity for improved and quantitative depictions of forest connectivity, supporting the formation and testing of hypotheses about the density of animal movement, ecosystem structure, and related links to natural and anthropogenic forces. In this research, our goal was to delineate regions where connectivity regimes are similar across the boreal region of Canada using new quantitative analyses for characterizing connectivity over large areas (e.g., millions of hectares). Utilizing the Earth Observation for Sustainable Development of forests (EOSD) circa 2000 Landsat-derived land-cover map, we created and analyzed a national-scale map of omnidirectional forest connectivity at 25m resolution over 10000 tiles of 625 km2 each, spanning the forested regions of Canada. Using image recognition software to detect corridors, pinch points, and barriers to movements at multiple spatial scales in each tile, we developed a simple measure of the structural complexity of connectivity patterns in omnidirectional connectivity maps. We then mapped the Circuitscape resistance distance measure and used it in conjunction with the complexity data to study connectivity characteristics in each forested ecozone. Ecozone boundaries masked substantial systematic patterns in connectivity characteristics that are uncovered using a new classification of connectivity patterns that revealed six clear groups of forest connectivity patterns found in Canada. The resulting maps allow exploration of omnidirectional forest connectivity patterns at full resolution while permitting quantitative analyses of connectivity over broad areas, informing modeling, planning and monitoring efforts. PMID:28146573
NASA Technical Reports Server (NTRS)
Ettinger, Scott M.; Nechyba, Michael C.; Ifju, Peter G.; Wazak, Martin
2002-01-01
Substantial progress has been made recently towards designing, building, and test-flying remotely piloted Micro Air Vehicles (MAVs). We seek to complement this progress in overcoming the aerodynamic obstacles to flight at very small scales with a vision-based stability and autonomy system. The developed system is based on a robust horizon detection algorithm, which we discuss in greater detail in a companion paper. In this paper, we first motivate the use of computer vision for MAV autonomy, arguing that, given current sensor technology, vision may be the only practical approach to the problem. We then briefly review our statistical vision-based horizon detection algorithm, which has been demonstrated at 30 Hz with over 99.9% correct horizon identification. Next, we develop robust schemes for the detection of extreme MAV attitudes, where no horizon is visible, and for the detection of horizon estimation errors due to external factors such as video transmission noise. Finally, we discuss our feedback controller for self-stabilized flight, and report results on vision-guided autonomous flights exceeding ten minutes in duration.
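The statistical horizon detector itself is described in the companion paper. As a rough, hedged illustration of the underlying idea only, the sketch below scores candidate horizon lines by how compact the RGB distributions of the resulting sky and ground regions are (a simplified criterion and exhaustive search, not the authors' exact formulation).

```python
import numpy as np

def horizon_score(img, mask_sky):
    """Lower combined RGB covariance within the sky and ground classes
    indicates a better sky/ground split (simplified criterion)."""
    sky = img[mask_sky]
    ground = img[~mask_sky]
    if len(sky) < 10 or len(ground) < 10:
        return 0.0
    det_s = np.linalg.det(np.cov(sky, rowvar=False))
    det_g = np.linalg.det(np.cov(ground, rowvar=False))
    return 1.0 / (abs(det_s) + abs(det_g) + 1e-9)

def detect_horizon(img, n_bank=36, n_pitch=20):
    """Exhaustive search over candidate horizon lines on a small RGB image."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best_score, best_line = 0.0, None
    for bank in np.linspace(-np.pi / 3, np.pi / 3, n_bank):
        for pitch in np.linspace(0.1, 0.9, n_pitch):
            # candidate line: y = tan(bank) * (x - w/2) + pitch * h
            line_y = np.tan(bank) * (xs - w / 2) + pitch * h
            score = horizon_score(img, ys < line_y)
            if score > best_score:
                best_score, best_line = score, (bank, pitch)
    return best_line
```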
Robot path planning using expert systems and machine vision
NASA Astrophysics Data System (ADS)
Malone, Denis E.; Friedrich, Werner E.
1992-02-01
This paper describes a system developed for the robotic processing of naturally variable products. In order to plan the robot motion path it was necessary to use a sensor system, in this case a machine vision system, to observe the variations occurring in workpieces and interpret this with a knowledge based expert system. The knowledge base was acquired by carrying out an in-depth study of the product using examination procedures not available in the robotic workplace and relates the nature of the required path to the information obtainable from the machine vision system. The practical application of this system to the processing of fish fillets is described and used to illustrate the techniques.
Modeling and Simulation of Microelectrode-Retina Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckerman, M
2002-11-30
The goal of the retinal prosthesis project is the development of an implantable microelectrode array that can be used to supply visually-driven electrical input to cells in the retina, bypassing nonfunctional rod and cone cells, thereby restoring vision to blind individuals. This goal will be achieved through the study of the fundamentals of electrical engineering, vision research, and biomedical engineering with the aim of acquiring the knowledge needed to engineer a high-density microelectrode-tissue hybrid sensor that will restore vision to millions of blind persons. The modeling and simulation task within this project is intended to address the question of how best to stimulate, and communicate with, cells in the retina using implanted microelectrodes.
Survey of computer vision-based natural disaster warning systems
NASA Astrophysics Data System (ADS)
Ko, ByoungChul; Kwak, Sooyeong
2012-07-01
With the rapid development of information technology, natural disaster prevention is growing as a new research field dealing with surveillance systems. To forecast and prevent the damage caused by natural disasters, the development of systems to analyze natural disasters using remote sensing, geographic information systems (GIS), and vision sensors has been receiving widespread interest over the last decade. This paper provides an up-to-date review of five different types of natural disasters and their corresponding warning systems using computer vision and pattern recognition techniques, such as wildfire smoke and flame detection, water level detection for flood prevention, coastal zone monitoring, and landslide detection. Finally, we conclude with some thoughts about future research directions.
Rethinking GIS Towards The Vision Of Smart Cities Through CityGML
NASA Astrophysics Data System (ADS)
Guney, C.
2016-10-01
Smart cities present a substantial growth opportunity in the coming years. The role of GIS in the smart city ecosystem is to integrate different data acquired by sensors in real time and provide better decisions, more efficiency, and improved collaboration. A semantically enriched vision of GIS will help evolve smart cities into tomorrow's much smarter cities, since geospatial/location data and applications may be recognized as a key ingredient of the smart city vision. However, the geospatial information communities need to debate the question "Is 3D web and mobile GIS technology ready for smart cities?" This research places an emphasis on the challenges of virtual 3D city models on the road to smarter cities.
Vision based techniques for rotorcraft low altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Suorsa, Ray; Smith, Philip
1991-01-01
An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on the maximum utilization of passive sensors is emphasized. The development of a flight and image database for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity becomes smaller, resulting in a low SNR.
2006-07-27
Technical horizon sensors: over the past few years, a remarkable proliferation of designs for micro-aerial vehicles (MAVs) has occurred... [Figs. 14 and 15: sky scans with a GaP UV photodiode along three vertical paths; angle of view 30 degrees, 50% cloud cover, sun at...] Australia. Email: gert.stange@anu.edu.au. A biomimetic algorithm for flight stabilization in airborne vehicles, based on dragonfly ocellar vision.
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
The Hunter-Killer Model, Version 2.0. User’s Manual.
1986-12-01
Contract No. DAAK21-85-C-0058. Prepared for the Center for Night Vision and Electro-Optics, DELNV-V, Fort Belvoir, Virginia 22060. This document has been... Inquiries concerning the Hunter-Killer Model or the Hunter-Killer Database System should be addressed to: The Night Vision and Electro-Optics Center... The model is designed and constructed to study the performance of electro-optic sensor systems in a combat scenario. The model simulates a two-sided battle.
Bio-Inspired Sensing and Imaging of Polarization Information in Nature
2008-05-04
"...polarization imaging," Appl. Opt. 36, 150–155 (1997). L. B. Wolff, "Polarization camera for computer vision with a beam splitter," J. Opt. Soc. Am. A 11, 2935–2945 (1994). L. B. Wolff and A. G. Andreou, "Polarization camera sensors," Image Vis. Comput... In our group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display...
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
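The authors derive a specific imaging model for active illumination in dense scattering media. As a hedged, generic sketch of the same family of corrections (not their exact model), the snippet below inverts a simple single-scattering formation I = J·t + B, where B is an estimated non-uniform backscatter image and t a transmission map; all names are illustrative.

```python
import numpy as np

def remove_backscatter(image, backscatter, transmission, eps=1e-3):
    """Recover object radiance J from I = J * t + B.
    image, backscatter, transmission: arrays in [0, 1] with the same shape."""
    t = np.clip(transmission, eps, 1.0)
    return np.clip((image - backscatter) / t, 0.0, 1.0)
```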
Simple laser vision sensor calibration for surface profiling applications
NASA Astrophysics Data System (ADS)
Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.
2016-09-01
Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
Perception for mobile robot navigation: A survey of the state of the art
NASA Technical Reports Server (NTRS)
Kortenkamp, David
1994-01-01
In order for mobile robots to navigate safely in unmapped and dynamic environments they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed, a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.
Soga, Kenichi; Schooling, Jennifer
2016-08-06
Design, construction, maintenance and upgrading of civil engineering infrastructure requires fresh thinking to minimize use of materials, energy and labour. This can only be achieved by understanding the performance of the infrastructure, both during its construction and throughout its design life, through innovative monitoring. Advances in sensor systems offer intriguing possibilities to radically alter methods of condition assessment and monitoring of infrastructure. In this paper, it is hypothesized that the future of infrastructure relies on smarter information; the rich information obtained from embedded sensors within infrastructure will act as a catalyst for new design, construction, operation and maintenance processes for integrated infrastructure systems linked directly with user behaviour patterns. Some examples of emerging sensor technologies for infrastructure sensing are given. They include distributed fibre-optics sensors, computer vision, wireless sensor networks, low-power micro-electromechanical systems, energy harvesting and citizens as sensors.
Sensor networks in the low lands.
Meratnia, Nirvana; van der Zwaag, Berend Jan; van Dijk, Hylke W; Bijwaard, Dennis J A; Havinga, Paul J M
2010-01-01
This paper provides an overview of scientific and industrial developments of the last decade in the area of sensor networks in The Netherlands (Low Lands). The goal is to highlight areas in which the Netherlands has made most contributions and is currently a dominant player in the field of sensor networks. On the one hand, motivations, addressed topics, and initiatives taken in this period are presented, while on the other hand, special emphasis is given to identifying current and future trends and formulating a vision for the coming five to ten years. The presented overview and trend analysis clearly show that Dutch research and industrial efforts, in line with recent worldwide developments in the field of sensor technology, present a clear shift from sensor node platforms, operating systems, communication, networking, and data management aspects of the sensor networks to reasoning/cognition, control, and actuation.
Meandered conformal antenna for ISM-band ingestible capsule communication systems.
Arefin, Md Shamsul; Redoute, Jean-Michel; Yuce, Mehmet Rasit
2016-08-01
The wireless capsule has been used to measure physiological parameters in the gastrointestinal tract, where communication from in-body to an external receiver requires a miniaturized antenna with high gain and an omnidirectional radiation pattern. This paper presents a meandered conformal antenna with a center frequency of 433 MHz for a wireless link between an in-body capsule system and an ex-body receiver system. The antenna is wrapped around the wireless capsule, which provides extra space for other circuits and sensors inside the capsule and allows the antenna to have larger dimensions compared to antennas placed inside the capsule. This paper analyses return loss, radiation pattern, antenna gain, and propagation loss using pork as the gastrointestinal tissue-simulating medium. From the radiation pattern and return loss results, the antenna shows an omnidirectional radiation pattern and an ultrawide bandwidth of 124.4 MHz (371.6 to 496 MHz) for VSWR < 2. Experimental results show that the path loss is 17.24 dB for an in-body propagation distance of 140 mm.
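The VSWR < 2 bandwidth criterion quoted above follows directly from the reflection coefficient. The sketch below shows that conversion and how a band can be read off a purely hypothetical |S11| sweep (the sweep values are not from the paper).

```python
import numpy as np

def vswr_from_s11_db(s11_db):
    """Convert return loss |S11| in dB to voltage standing-wave ratio."""
    gamma = 10.0 ** (np.asarray(s11_db) / 20.0)   # |reflection coefficient|
    return (1.0 + gamma) / (1.0 - gamma)

freqs = np.linspace(300e6, 550e6, 251)                       # hypothetical sweep
s11 = -3.0 - 12.0 * np.exp(-((freqs - 433e6) / 60e6) ** 2)   # hypothetical |S11|, dB
band = freqs[vswr_from_s11_db(s11) < 2.0]
print("VSWR < 2 from %.0f to %.0f MHz" % (band[0] / 1e6, band[-1] / 1e6))
```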
Rahman, MuhibUr; Park, Jung-Dong
2018-03-19
In this paper, we present the smallest form-factor microstrip-fed ultra-wideband antenna with quintuple rejection bands for use in wireless sensor networks, mobile handsets, and the Internet of Things (IoT). Five rejection bands have been achieved at frequencies of 3.5, 4.5, 5.25, 5.7, and 8.2 GHz by inserting four rectangular complementary split-ring resonators (RCSRRs) in the radiating patch and placing two rectangular split-ring resonators (RSRRs) near the feedline-patch junction of the conventional ultra-wideband (UWB) antenna. Design guidelines for the implemented notched bands are provided at the desired frequency bands and analyzed. The measured results demonstrate that the proposed antenna delivers a wide impedance bandwidth from 3 to 11 GHz with a nearly omnidirectional radiation pattern, high rejection in the multiple notched bands, and good radiation efficiency over the entire frequency band except at the notched frequencies. Simulated and measured responses match well, specifically at the stop-bands.
Energetic particle diffusion and the A ring: Revisiting noise from Cassini's orbital insertion
NASA Astrophysics Data System (ADS)
Crary, Frank; Kollmann, Peter
2016-04-01
Immediately following Cassini's orbital insertion on July 1, 2004, the spacecraft passed over Saturn's main rings. In anticipation of the final phase of the Cassini mission, with orbits inside and over the main rings, we have re-examined data from the CAPS instrument taken during the orbital insertion period. One previously neglected feature is the detector noise in the ELS sensor. This has proven to be a sensitive, relative measure of omni-directional energetic (>5 MeV) electron flux. The data are obtained at 31.25 ms time resolution, corresponding to 0.46 km spatial resolution. Over the A ring, the energetic electron flux was essentially zero (~3 counts per sample). At the edge of the A ring, this dramatically increased to approximately 2500 counts per sample over a span of 17.5 km. We use these results to derive the energetic particle diffusion rate and the absorption (optical depth) of the ring.
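As a quick sanity check on the quoted numbers (an illustrative calculation, not taken from the paper), the 31.25 ms sampling interval and 0.46 km per sample imply a ring-relative speed of roughly 15 km/s, and the count levels translate into the rates below.

```python
dt = 31.25e-3                 # sample period, s
sample_spacing_km = 0.46      # quoted spatial resolution per sample, km

speed_km_s = sample_spacing_km / dt
print("ring-relative speed ~ %.1f km/s" % speed_km_s)           # ~14.7 km/s

for label, counts in [("over A ring", 3), ("off the A-ring edge", 2500)]:
    print("%s: ~%.0f counts/s" % (label, counts / dt))
```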
Electro-optical muzzle flash detection
NASA Astrophysics Data System (ADS)
Krieg, Jürgen; Eisele, Christian; Seiffer, Dirk
2016-10-01
Localizing a shooter in a complex scenario is a difficult task. Acoustic sensors can be used to detect blast waves. Radar technology permits detection of the projectile. A third method is to detect the muzzle flash using electro-optical devices. Detection of muzzle flash events is possible with focal plane arrays, line and single element detectors. In this paper, we will show that the detection of a muzzle flash works well in the shortwave infrared spectral range. Important for the acceptance of an operational warning system in daily use is a very low false alarm rate. Using data from a detector with a high sampling rate the temporal signature of a potential muzzle flash event can be analyzed and the false alarm rate can be reduced. Another important issue is the realization of an omnidirectional view required on an operational level. It will be shown that a combination of single element detectors and simple optics in an appropriate configuration is a capable solution.
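The temporal-signature screening described above can be illustrated with a toy pulse-duration gate on a high-sample-rate detector trace. The threshold and duration limits below are purely illustrative and are not the paper's values.

```python
import numpy as np

def flash_candidates(trace, fs_hz, threshold, min_ms=0.2, max_ms=3.0):
    """Keep threshold crossings whose duration matches a short flash-like
    transient; much longer or shorter events are rejected as false alarms.
    Assumes the trace starts and ends below the threshold."""
    above = trace > threshold
    edges = np.flatnonzero(np.diff(above.astype(int)))
    events = []
    for start, stop in zip(edges[::2], edges[1::2]):
        duration_ms = (stop - start) / fs_hz * 1e3
        if min_ms <= duration_ms <= max_ms:
            events.append((start / fs_hz, duration_ms))
    return events
```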
Lazarova, Katerina; Awala, Hussein; Thomas, Sebastien; Vasileva, Marina; Mintova, Svetlana; Babeva, Tsvetanka
2014-01-01
The preparation of responsive multilayered structures with a quarter-wave design based on layer-by-layer deposition of sol-gel derived Nb2O5 films and spin-coated MEL-type zeolite is demonstrated. The refractive indices (n) and thicknesses (d) of the layers are determined using non-linear curve fitting of the measured reflectance spectra. In addition, the surface and cross-sectional features of the multilayered structures are characterized by scanning electron microscopy (SEM). The quasi-omnidirectional photonic band for the multilayered structures is predicted theoretically and confirmed experimentally by reflectance measurements at oblique incidence with polarized light. The sensing properties of the multilayered structures toward acetone are studied by measuring transmittance spectra prior to and after vapor exposure. Furthermore, the potential of the one-dimensional photonic crystals based on the multilayered structure consisting of Nb2O5 and MEL-type zeolite as a chemical sensor with optical read-out is discussed. PMID:25010695
Omnidirectional antenna having constant phase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sena, Matthew
Various technologies presented herein relate to constructing and/or operating an antenna having an omnidirectional electrical field of constant phase. The antenna comprises an upper plate made up of multiple conductive rings, a lower ground-plane plate, a plurality of grounding posts, a conical feed, and a radio frequency (RF) feed connector. The upper plate has a multi-ring configuration comprising a large outer ring and several smaller rings of equal size located within the outer ring. The large outer ring and the four smaller rings have the same cross-section. The grounding posts ground the upper plate to the lower plate while maintaining a required spacing/parallelism therebetween.
NASA Technical Reports Server (NTRS)
Blumrich, J. F. (Inventor)
1974-01-01
The apparatus consists of a wheel having a hub with radially disposed spokes, which carry a plurality of circumferential rim segments. These rim segments carry, between the spokes, rim elements which are rigid relative to their outer support surfaces and whose outer contour forms part of the circle defining the wheel diameter. For each rim element, the rim segments provide an independent drive means, selectively operable when the element is in ground contact, to rotatably drive the rim element in a direction perpendicularly lateral to the normal plane of rotation and movement of the wheel. This affords the wheel omnidirectional movement.
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs as well as highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in the real-time image processing vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing the stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
2009-09-01
...capable of surviving the high-temperature, high-vibration environment of a jet engine. Active control spans active surge/stall control and three... other closely related areas, viz., active combustion control (references 21-22), active noise control, and active vibration control. All of these are... Self-powered sensors that harvest energy from engine heat or vibrations replace sensors that require power. The long-term vision is one of a...
Helmet-Mounted Displays: Sensation, Perception and Cognition Issues
2009-01-01
Inc., web site: http://www.metavr.com/technology/papers/syntheticvision.html. Helmetag, A., Halbig, C., Kubbat, W., and Schmidt, R. (1999)... "system-of-systems." One integral system is a "head-borne vision enhancement" system (an HMD) that provides fused I2/IR sensor imagery (U.S. Army Natick...). Using microwave, radar, I2, infrared (IR), and other technology-based imaging sensors, the "seeing" range of the human eye is extended into the...
Real-time motion artifacts compensation of ToF sensors data on GPU
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas
2013-05-01
Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes as well as low-resolution depth data, which underscores the importance of a valid correction. To counterbalance this effect, a pre-processing approach is introduced that greatly improves range image data for dynamic scenes. We first demonstrate the robustness of our approach using simulated data and then validate our method using real sensor range data. Our GPU-based processing pipeline enhances range data reliability in real time.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the aged. This paper presents a recognition algorithm that finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that lie in the path of a person walking with the guide robot.
A neighbor pixel communication filtering structure for Dynamic Vision Sensors
NASA Astrophysics Data System (ADS)
Xu, Yuan; Liu, Shiqi; Lu, Hehui; Zhang, Zilong
2017-02-01
For Dynamic Vision Sensors (DVS), Background Activity (BA) induced by thermal noise and junction leakage current is the major cause of image quality deterioration. Inspired by the smoothing filtering principle of horizontal cells in the vertebrate retina, a DVS pixel with a Neighbor Pixel Communication (NPC) filtering structure is proposed to solve this issue. The NPC structure judges the validity of a pixel's activity through communication with its 4 adjacent pixels. The pixel's outputs are suppressed if its activities are determined to be spurious. The proposed pixel's area is 23.76×24.71 μm2 and only 3 ns of output latency is introduced. To validate the effectiveness of the structure, a 5×5 pixel array has been implemented in the SMIC 0.13 μm CIS process. Three test cases of the array's behavioral model show that the NPC-DVS is able to filter out the BA.
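In software, the neighbor-support idea behind such background-activity filters can be prototyped as below. This is only a hedged analogue of the concept, not the in-pixel circuit; the event format (timestamp in microseconds, x, y, polarity, sorted by time) and the time window are assumptions for illustration.

```python
import numpy as np

def neighbor_filter(events, width, height, dt_us=5000):
    """Keep an event only if one of its 4-connected neighbors produced an
    event within the last dt_us microseconds; otherwise treat it as noise."""
    last = np.full((height, width), -np.inf)   # last event time per pixel
    kept = []
    for t, x, y, pol in events:                # events sorted by timestamp t
        neighbors = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        support = any(0 <= nx < width and 0 <= ny < height
                      and t - last[ny, nx] <= dt_us
                      for nx, ny in neighbors)
        if support:
            kept.append((t, x, y, pol))
        last[y, x] = t
    return kept
```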
Spaceborne GPS Current Status and Future Visions
NASA Technical Reports Server (NTRS)
Bauer, Frank H.; Hartman, Kate; Lightsey, E. Glenn
1998-01-01
The Global Positioning System (GPS), developed by the Department of Defense, is quickly revolutionizing the architecture of future spacecraft and spacecraft systems. Significant savings in spacecraft life-cycle cost, power, and mass can be realized by exploiting GPS technology in spaceborne vehicles. These savings are realized because GPS is a systems sensor: it combines the ability to sense space vehicle trajectory, attitude, time, and relative ranging between vehicles in one package. As a result, a reduced spacecraft sensor complement can be employed on spacecraft and significant reductions in space vehicle operations cost can be realized through enhanced on-board autonomy. This paper provides an overview of the current status of spaceborne GPS, a description of spaceborne GPS receivers available now and in the near future, a description of the 1997-1999 GPS flight experiments, and the spaceborne GPS team's vision for the future.
Application of ultrasonic sensor for measuring distances in robotics
NASA Astrophysics Data System (ADS)
Zhmud, V. A.; Kondratiev, N. O.; Kuznetsov, K. A.; Trubin, V. G.; Dimitrov, L. V.
2018-05-01
Ultrasonic sensors allow us to equip robots with a means of perceiving surrounding objects that is an alternative to technical vision. Humanoid robots, like robots of other types, are first equipped with sensory systems similar to human senses. However, this approach is not enough. All possible types and kinds of sensors should be used, including those similar to the senses of other animals (in particular, echolocation in dolphins and bats), as well as sensors that have no analogues in nature. This paper discusses the main issues that arise when working with the HC-SR04 ultrasonic rangefinder driven by the STM32VLDISCOVERY evaluation board. The characteristics of similar modules are given for comparison. A subroutine for working with the sensor is provided.
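The ranging principle behind the HC-SR04 is simple: the module outputs an echo pulse whose width equals the ultrasonic round-trip time, so the host only converts that width into distance. The sketch below is a generic host-side calculation, not the paper's STM32 subroutine.

```python
SPEED_OF_SOUND_M_S = 343.0   # in air at roughly 20 degrees C

def hcsr04_distance_m(echo_pulse_s: float) -> float:
    """The HC-SR04 echo pulse width is the round-trip time of flight,
    so the one-way distance is half the acoustic path."""
    return echo_pulse_s * SPEED_OF_SOUND_M_S / 2.0

print(hcsr04_distance_m(2.9e-3))   # a 2.9 ms echo corresponds to ~0.50 m
```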
Separation of presampling and postsampling modulation transfer functions in infrared sensor systems
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Olson, Jeffrey T.; O'Shea, Patrick D.; Hodgkin, Van A.; Jacobs, Eddie L.
2006-05-01
New methods of measuring the modulation transfer function (MTF) of electro-optical sensor systems are investigated. These methods are designed to allow the separation and extraction of presampling and postsampling components from the total system MTF. The presampling MTF includes all the effects prior to the sampling stage of the imaging process, such as optical blur and detector shape. The postsampling MTF includes all the effects after sampling, such as interpolation filters and display characteristics. Simulation and laboratory measurements are used to assess the utility of these techniques. Knowledge of these components and inclusion into sensor models, such as the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's NVThermIP, will allow more accurate modeling and complete characterization of sensor performance.
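If the system MTF factors into presampling and postsampling parts, the postsampling component can in principle be recovered by division once the presampling part is known. The sketch below only illustrates that bookkeeping under idealised, alias-free assumptions; it is not the measurement procedure of the paper, and the example curves are invented.

```python
import numpy as np

def postsampling_mtf(total_mtf, presampling_mtf, eps=1e-6):
    """Assume MTF_total(f) = MTF_pre(f) * MTF_post(f) and solve for MTF_post."""
    return np.asarray(total_mtf) / np.maximum(np.asarray(presampling_mtf), eps)

f = np.linspace(0.0, 0.5, 6)            # spatial frequency, cycles/pixel
pre = np.sinc(f)                        # e.g. detector footprint roll-off
total = pre * np.sinc(f / 2.0)          # total includes a display-like roll-off
print(postsampling_mtf(total, pre))     # recovers np.sinc(f / 2)
```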
Knowledge Management in Sensor Enabled Online Services
NASA Astrophysics Data System (ADS)
Smyth, Dominick; Cappellari, Paolo; Roantree, Mark
The Future Internet has as its vision the development of improved features and usability for services, applications, and content. In many cases, services can be provided automatically through the use of monitors or sensors. This means that web-generated sensor data become available not only to the companies that own the sensors but also to the domain users who generate the data and to information and knowledge workers who harvest the output. The goal is to improve the service through better usage of the information it provides. Applications and services range from climate, traffic, and health to sports event monitoring. In this paper, we present the WSW system, which harvests web sensor data to provide additional and, in some cases, more accurate information using an analysis of both live and warehoused information.
Vector disparity sensor with vergence control for active vision systems.
Barranco, Francisco; Diaz, Javier; Gibaldi, Agostino; Sabatini, Silvio P; Ros, Eduardo
2012-01-01
This paper presents an architecture for computing vector disparity for active vision systems as used in robotics applications. The control of the vergence angle of a binocular system allows us to efficiently explore dynamic environments, but requires a generalization of the disparity computation with respect to a static camera setup, where the disparity is strictly 1-D after the image rectification. The interaction between vision and motor control allows us to develop an active sensor that achieves high accuracy of the disparity computation around the fixation point, and fast reaction time for the vergence control. In this contribution, we address the development of a real-time architecture for vector disparity computation using an FPGA device. We implement the disparity unit and the control module for vergence, version, and tilt to determine the fixation point. In addition, two different on-chip alternatives for the vector disparity engines are discussed, based on the luminance (gradient-based) and phase information of the binocular images. The multiscale versions of these engines are able to estimate the vector disparity up to 32 fps on VGA resolution images with very good accuracy, as shown using benchmark sequences with known ground truth. The performance in terms of frame rate, resource utilization, and accuracy of the presented approaches is discussed. On the basis of these results, our study indicates that the gradient-based approach leads to the best trade-off choice for the integration with the active vision system.
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprising a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprising a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri, it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Hyperspectral Imaging of fecal contamination on chickens
NASA Technical Reports Server (NTRS)
2003-01-01
ProVision Technologies, a NASA research partnership center at Stennis Space Center in Mississippi, has developed a new hyperspectral imaging (HSI) system that is much smaller than the original large units used aboard remote sensing aircraft and satellites. The new apparatus is about the size of a breadbox. Health-related applications of HSI include scanning chickens during processing to help prevent contaminated food from getting to the table. ProVision is working with Sanderson Farms of Mississippi and the U.S. Department of Agriculture. ProVision has a record in its spectral library of the unique spectral signature of fecal contamination, so chickens can be scanned and those with a positive reading can be separated. HSI sensors can also determine the quantity of surface contamination. Research in this application is quite advanced, and ProVision is working on a licensing agreement for the technology. The potential for future use of this equipment in food processing and food safety is enormous.
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2016-03-01
Most photoacoustic scanners use piezoelectric detectors but these have two key limitations. Firstly, they are optically opaque, inhibiting backward mode operation. Secondly, it is difficult to achieve adequate detection sensitivity with the small element sizes needed to provide near-omnidirectional response as required for tomographic imaging. Planar Fabry-Perot (FP) ultrasound sensing etalons can overcome both of these limitations and have proved extremely effective for superficial (<1cm) imaging applications. To achieve small element sizes (<100μm), the etalon is illuminated with a focused laser beam. However, this has the disadvantage that beam walk-off due to the divergence of the beam fundamentally limits the etalon finesse and thus sensitivity - in essence, the problem is one of insufficient optical confinement. To overcome this, novel planoconcave micro-resonator sensors have been fabricated using precision ink-jet printed polymer domes with curvatures matching that of the laser wavefront. By providing near-perfect beam confinement, we show that it is possible to approach the maximum theoretical limit for finesse (f) imposed by the etalon mirror reflectivities (e.g. f=400 for R=99.2% in contrast to a typical planar sensor value of f<50). This yields an order of magnitude increase in sensitivity over a planar FP sensor with the same acoustic bandwidth. Furthermore by eliminating beam walk-off, viable sensors can be made with significantly greater thickness than planar FP sensors. This provides an additional sensitivity gain for deep tissue imaging applications such as breast imaging where detection bandwidths in the low MHz can be tolerated. For example, for a 250 μm thick planoconcave sensor with a -3dB bandwidth of 5MHz, the measured NEP was 4 Pa. This NEP is comparable to that provided by mm scale piezoelectric detectors used for breast imaging applications but with more uniform frequency response characteristics and an order-of-magnitude smaller element size. Following previous proof-of-concept work, several important advances towards practical application have been made. A family of sensors with bandwidths ranging from 3MHz to 20MHz have been fabricated and characterised. A novel interrogation scheme based on rapid wavelength sweeping has been implemented in order to avoid previously encountered instability problems due to self-heating. Finally, a prototype microresonator based photoacoustic scanner has been developed and applied to the problem of deep-tissue (>1cm) photoacoustic imaging in vivo. Imaging results for second generation microresonator sensors (with R = 99.5% and thickness up to ~800um) are compared to the best achievable with the planar FP sensors and piezoelectric receivers.
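The reflectivity-limited finesse quoted above follows from the standard Fabry-Perot expression. The sketch below reproduces the f ≈ 400 figure for R = 99.2%, neglecting walk-off and absorption losses (which is exactly what the curved micro-resonators make a good approximation).

```python
import math

def fp_finesse(R: float) -> float:
    """Reflectivity-limited finesse of a Fabry-Perot cavity with equal
    mirror reflectivities R; losses other than mirror transmission ignored."""
    return math.pi * math.sqrt(R) / (1.0 - R)

print(round(fp_finesse(0.992)))   # ~391, consistent with the quoted f ~ 400
print(round(fp_finesse(0.995)))   # ~627 for the second-generation mirrors
```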
NASA Astrophysics Data System (ADS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando
2006-05-01
This paper describes the application of intelligent sensors in Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated system approach, i.e., one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with rocket test stands. These smart elements can be sensors, actuators, or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Pervasive Monitoring—An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures
Resch, Bernd; Mittlboeck, Manfred; Lippautz, Michael
2010-01-01
Geo-sensor networks have traditionally been built up in closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for detection of threshold transgression and quality assurance. The goal of this research is that the resultant highly flexible sensing architecture will bring sensor network applications one step further towards the realisation of the vision of a “digital skin for planet earth”. The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straight-forward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making. PMID:22163537
Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel
2014-01-01
The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered with by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849
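Two of the focal-plane primitives mentioned, integral image computation and block-wise pixelation, compose naturally: block means can be read directly off the summed-area table. The sketch below is a purely software illustration of that combination, not the chip's mixed-signal circuitry; block size and function names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column for easy box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def pixelate(img, block=16):
    """Replace each block of a grayscale image with its mean, computed from
    the integral image (remainder rows/columns are left untouched)."""
    out = img.astype(np.float64)
    ii = integral_image(img)
    h, w = img.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            s = ii[y + block, x + block] - ii[y, x + block] \
                - ii[y + block, x] + ii[y, x]
            out[y:y + block, x:x + block] = s / (block * block)
    return out
```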
Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors
Everding, Lukas; Conradt, Jörg
2018-01-01
In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
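Detecting "planes of DVS address events in x-y-t space" can be prototyped with an ordinary least-squares fit. The sketch below fits t = a·x + b·y + c to a batch of events and is only a batch illustration of the geometric idea; the paper's tracker updates such planes incrementally as events arrive.

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares plane t = a*x + b*y + c through DVS events given as
    (x, y, t) tuples; an edge moving at constant image velocity sweeps out
    such a plane in x-y-t space."""
    x, y, t = (np.asarray(v, dtype=float) for v in zip(*events))
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
    return a, b, c   # the line's image velocity is recoverable from (a, b)
```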
A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.
Choi, Insub; Kim, JunHee; Kim, Donghyun
2016-12-08
Existing vision-based displacement sensors (VDSs) extract displacement data from the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed. It can extract displacement data without targets by instead using feature points in the image of the structure. The TVDS extracts and tracks these feature points through image convex hull optimization, which adjusts and optimizes the threshold values so that every image frame yields the same convex hull, the center of which serves as the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated from the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing it with the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can easily extract displacement data even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
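To make the convex-hull feature point concrete, the sketch below (an illustrative approximation, not the authors' code) thresholds a grayscale frame, takes the convex hull of the largest bright region, and returns the hull centroid as the tracked point; the adaptive threshold search described in the abstract is omitted.

```python
# Hedged sketch of the convex-hull feature-point idea (OpenCV >= 4 assumed).
import cv2

def hull_feature_point(gray, thresh_value=128):
    _, binary = cv2.threshold(gray, thresh_value, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)     # dominant bright region
    hull = cv2.convexHull(largest)
    m = cv2.moments(hull)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # hull centroid in pixels
```

Tracking the centroid from frame to frame then gives the pixel displacement, which the scaling factor map converts to physical units.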
A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.
Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe
2017-10-16
Autonomous driving on public roads requires precise localization within a range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been used successfully for obstacle detection, mapping, and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach exploits a combination of a short-range visual lane-marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
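The rearward lane-marking memory can be pictured as repeatedly transforming short-range detections into a dead-reckoned odometry frame. The following minimal Python sketch, with illustrative names and a simple unicycle model, shows one such accumulation step; it is not the authors' fusion algorithm.

```python
# Hedged sketch: accumulate short-range lane detections into an odometry frame.
# Poses are (x, y, yaw); all names, models, and units are illustrative assumptions.
import numpy as np

def integrate_pose(pose, v, yaw_rate, dt):
    """Simple unicycle dead-reckoning update from speed and yaw rate."""
    x, y, yaw = pose
    return (x + v * np.cos(yaw) * dt, y + v * np.sin(yaw) * dt, yaw + yaw_rate * dt)

def lane_points_to_odom(pose, points_vehicle):
    """Transform detected lane points from the vehicle frame to the odometry frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (R @ np.asarray(points_vehicle).T).T + np.array([x, y])

# Per cycle: pose = integrate_pose(pose, v, yaw_rate, dt), then append
# lane_points_to_odom(pose, detections) to the rearward lane-marking memory.
```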
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER-based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), achieving a testing accuracy of 88.14%.
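For readers unfamiliar with the tempotron, the sketch below shows an illustrative leaky integrate-and-fire readout driven by address events: each incoming spike adds a weighted post-synaptic kernel, and the pattern is accepted if the membrane potential ever crosses a threshold. Time constants, the weight vector, and the grid resolution are placeholders, not values from the paper.

```python
# Hedged sketch of a tempotron-style leaky integrate-and-fire readout.
import numpy as np

def tempotron_fires(spike_times, afferent_ids, weights,
                    tau=20e-3, tau_s=5e-3, threshold=1.0):
    """spike_times: event times in seconds; afferent_ids: index of the emitting pixel."""
    t_max = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)      # kernel peak time
    v0 = 1.0 / (np.exp(-t_max / tau) - np.exp(-t_max / tau_s))     # peak normalization
    t_grid = np.linspace(0.0, max(spike_times) + 5 * tau, 2000)
    v = np.zeros_like(t_grid)
    for t_i, a in zip(spike_times, afferent_ids):
        dt = t_grid - t_i
        k = np.where(dt > 0, np.exp(-dt / tau) - np.exp(-dt / tau_s), 0.0)
        v += weights[a] * v0 * k                                    # weighted PSP kernel
    return v.max() >= threshold   # classify by threshold crossing of the potential
```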
Fast Markerless Tracking for Augmented Reality in Planar Environment
NASA Astrophysics Data System (ADS)
Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim
2015-12-01
Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual elements. Previously reported methods have shown that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors acting as a camera pose predictor. To align the augmentation with camera motion, feature-based camera estimation is replaced by inertial sensing combined with a complementary filter, providing a more dynamic response. The proposed method tracked unknown environments with faster processing time than available feature-based approaches. Moreover, it can sustain its estimation in situations where feature-based tracking loses its track. The combined sensor tracking ran at about 22.97 frames per second, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
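The complementary filter mentioned above can be summarized in a few lines: integrate the gyroscope rate (responsive but drifting) and correct it slowly with the accelerometer tilt estimate (noisy but drift-free). The sketch below is a generic single-axis illustration with an assumed blending gain, not the exact filter of the paper.

```python
# Hedged sketch of a single-axis complementary filter for orientation prediction.
import math

def complementary_update(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (fast, drifts) with the accel tilt (slow, noisy)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_pitch(ax, ay, az):
    """Pitch estimate from a static accelerometer reading (illustrative axis convention)."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))
```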
Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI
Serrano, Miguel Ángel; Gómez-Romero, Juan; Patricio, Miguel Ángel; García, Jesús; Molina, José Manuel
2012-01-01
Recent advances in technologies for capturing video data have opened a vast number of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras in Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation makes it impossible to take full advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the knowledge produced by the new sensors. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body, together with transitive part-based representation and inference, are incorporated into a previously developed ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for conducting live market research.
A Bionic Polarization Navigation Sensor and Its Calibration Method.
Zhao, Huijie; Xu, Wujian
2016-08-03
The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.
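Since the paper's exact sensor model is not reproduced here, the sketch below illustrates the general non-linear curve-fitting calibration step with a Malus-law-style response as a stand-in model, fitted with SciPy; the parameterization and initial guesses are assumptions rather than the authors' formulation.

```python
# Hedged sketch: calibrate one polarization-analyzer channel by fitting a
# Malus-law-style response I(phi) = I0 * (1 + d * cos(2*(phi - phi0))).
import numpy as np
from scipy.optimize import curve_fit

def response(phi, i0, d, phi0):
    return i0 * (1.0 + d * np.cos(2.0 * (phi - phi0)))

def calibrate_channel(angles_rad, intensities):
    p0 = [float(np.mean(intensities)), 0.5, 0.0]          # rough initial guess
    params, _ = curve_fit(response, angles_rad, intensities, p0=p0)
    return params   # channel gain, effective degree of polarization, installation offset
```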
Sensor Networks in the Low Lands
Meratnia, Nirvana; van der Zwaag, Berend Jan; van Dijk, Hylke W.; Bijwaard, Dennis J. A.; Havinga, Paul J. M.
2010-01-01
This paper provides an overview of scientific and industrial developments of the last decade in the area of sensor networks in The Netherlands (Low Lands). The goal is to highlight areas in which the Netherlands has made most contributions and is currently a dominant player in the field of sensor networks. On the one hand, motivations, addressed topics, and initiatives taken in this period are presented, while on the other hand, special emphasis is given to identifying current and future trends and formulating a vision for the coming five to ten years. The presented overview and trend analysis clearly show that Dutch research and industrial efforts, in line with recent worldwide developments in the field of sensor technology, present a clear shift from sensor node platforms, operating systems, communication, networking, and data management aspects of the sensor networks to reasoning/cognition, control, and actuation. PMID:22163669
Latency in Visionic Systems: Test Methods and Requirements
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.
2005-01-01
A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and to provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that total system delays, or latencies, including those of the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated depending upon the piloting task, the role the visionics device plays in that task, and the characteristics of the visionics cockpit display device, including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, with a value as low as 20 msec.
Cognitive radio wireless sensor networks: applications, challenges and research trends.
Joshi, Gyanendra Prasad; Nam, Seung Yeob; Kim, Sung Won
2013-08-22
A cognitive radio wireless sensor network is one of the candidate areas where cognitive techniques can be used for opportunistic spectrum access. Research in this area is still in its infancy, but it is progressing rapidly. The aim of this study is to classify the existing literature of this fast-emerging application area of cognitive radio wireless sensor networks, highlight the key research that has already been undertaken, and indicate open problems. This paper describes the advantages of cognitive radio wireless sensor networks; the differences between ad hoc cognitive radio networks, wireless sensor networks, and cognitive radio wireless sensor networks; potential application areas of cognitive radio wireless sensor networks; and challenges and research trends in cognitive radio wireless sensor networks. The sensing schemes suited to cognitive radio wireless sensor network scenarios are discussed with an emphasis on cooperation and spectrum access methods that ensure the availability of the required QoS. Finally, this paper lists several open research challenges aimed at drawing the readers' attention toward the important issues that need to be addressed before the vision of completely autonomous cognitive radio wireless sensor networks can be realized.
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
2011-01-01
A new C-band (5091 to 5150 MHz) airport communications system designated the Aeronautical Mobile Airport Communications System (AeroMACS) is being planned under the Federal Aviation Administration's NextGen program. An interference analysis software program, Visualyse Professional (Transfinite Systems Ltd), is being used to provide guidelines on limitations for AeroMACS transmitters to avoid interference with other systems. A scenario consisting of a single omnidirectional transmitting antenna at each of the major contiguous United States airports is modeled, and the steps required to build the model are reported. The results are shown to agree very well with a previous study.
Siddique, Radwanul Hasan; Gomard, Guillaume; Hölscher, Hendrik
2015-04-22
The glasswing butterfly (Greta oto) has, as its name suggests, transparent wings with remarkably low haze and reflectance over the whole visible spectral range, even for large viewing angles of 80°. This omnidirectional anti-reflection behaviour is caused by small nanopillars covering the transparent regions of its wings. In contrast to other anti-reflection coatings found in nature, these pillars are irregularly arranged and feature random height and width distributions. Here we simulate the optical properties with effective medium theory and the transfer matrix method and show that the random height distribution of the pillars significantly reduces the reflection not only at normal incidence but also at high viewing angles.
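As a simplified illustration of the transfer-matrix calculation mentioned above, the sketch below computes the normal-incidence reflectance of a single homogeneous effective-medium layer on a substrate; the real pillar layer is graded and angle-dependent, so the refractive index, thickness, and substrate index used here are placeholders.

```python
# Hedged sketch: normal-incidence transfer-matrix reflectance of one thin layer.
import numpy as np

def layer_reflectance(wavelength_nm, n_layer, d_nm, n_ambient=1.0, n_substrate=1.56):
    delta = 2.0 * np.pi * n_layer * d_nm / wavelength_nm          # phase thickness
    m = np.array([[np.cos(delta), 1j * np.sin(delta) / n_layer],
                  [1j * n_layer * np.sin(delta), np.cos(delta)]])  # characteristic matrix
    b, c = m @ np.array([1.0, n_substrate])
    r = (n_ambient * b - c) / (n_ambient * b + c)                  # amplitude reflectance
    return float(np.abs(r) ** 2)

# e.g. layer_reflectance(550, n_layer=1.25, d_nm=110) approximates the reflectance
# of a quarter-wave-like low-index layer at 550 nm (illustrative values).
```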
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision making, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
3D vision upgrade kit for the TALON robot system
NASA Astrophysics Data System (ADS)
Bodenhamer, Andrew; Pettijohn, Bradley; Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-02-01
In September 2009 the Fort Leonard Wood Field Element of the US Army Research Laboratory - Human Research and Engineering Directorate, in conjunction with Polaris Sensor Technologies and Concurrent Technologies Corporation, evaluated the objective performance benefits of Polaris' 3D vision upgrade kit for the TALON small unmanned ground vehicle (SUGV). This upgrade kit is a field-upgradable set of two stereo-cameras and a flat panel display, using only standard hardware, data and electrical connections existing on the TALON robot. Using both the 3D vision system and a standard 2D camera and display, ten active-duty Army Soldiers completed seven scenarios designed to be representative of missions performed by military SUGV operators. Mission time savings (6.5% to 32%) were found for six of the seven scenarios when using the 3D vision system. Operators were not only able to complete tasks quicker but, for six of seven scenarios, made fewer mistakes in their task execution. Subjective Soldier feedback was overwhelmingly in support of pursuing 3D vision systems, such as the one evaluated, for fielding to combat units.
NASA Astrophysics Data System (ADS)
Durfee, David; Johnson, Walter; McLeod, Scott
2007-04-01
Uncooled microbolometer sensors used in modern infrared night vision systems, such as driver vision enhancement (DVE) or thermal weapon sights (TWS), require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable at temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while withstanding the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. Producing a miniature electro-mechanical shutter with these capabilities that fits into a rifle scope requires innovations in mechanical design, material science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.
Application of parallelized software architecture to an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Shakya, Rahul; Wright, Adam; Shin, Young Ho; Momin, Orko; Petkovsek, Steven; Wortman, Paul; Gautam, Prasanna; Norton, Adam
2011-01-01
This paper presents improvements made to Q, an autonomous ground vehicle designed to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2010 IGVC, Q was upgraded with a new parallelized software architecture and a new vision processor. Improvements were also made to the power system, reducing the number of batteries required for operation from six to one. In previous years, a single state machine was used to execute the bulk of processing activities, including sensor interfacing, data processing, path planning, navigation algorithms, and motor control. This inefficient approach led to poor software performance and made the system difficult to maintain or modify. For IGVC 2010, the team implemented a modular parallel architecture using the National Instruments (NI) LabVIEW programming language. The new architecture divides all the necessary tasks (motor control, navigation, sensor data collection, etc.) into well-organized components that execute in parallel, providing considerable flexibility and facilitating efficient use of processing power. Computer vision is used to detect white lines on the ground and determine their location relative to the robot. With the new vision processor and some optimization of the image processing algorithm used the previous year, two frames can be acquired and processed in 70 ms. With all these improvements, Q placed 2nd in the autonomous challenge.
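A typical way to implement the white-line detection step described above is to isolate bright, low-saturation pixels and extract segments with a probabilistic Hough transform; the OpenCV sketch below is an illustrative stand-in with assumed thresholds, not the team's actual LabVIEW algorithm.

```python
# Hedged sketch: detect white lane-boundary lines in a camera frame.
import cv2
import numpy as np

def detect_white_lines(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))   # low saturation, high value
    edges = cv2.Canny(white, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2) segments
```

Converting the segment endpoints to ground coordinates relative to the robot would additionally require the camera calibration, which is omitted here.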
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision.
Rumei Zhang; Hao Liu; Jianda Han
2017-07-01
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape, with cumulative integration error. The proposed fusion is anticipated to compensate for these shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphological operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into a distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67±0.65 mm, 0.41±0.25 mm, and 0.72±0.43 mm for x, y, and z, respectively. The results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
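The registration step described above amounts to mapping the FBG-reconstructed tip position into the camera frame with a pre-calibrated homogeneous transform, after which the two position estimates can be combined. The sketch below uses a simple weighted average as an illustrative stand-in for the paper's fusion scheme; the weight and matrix names are assumptions.

```python
# Hedged sketch: express the FBG tip estimate in the camera frame and blend it
# with the stereo-vision estimate.
import numpy as np

def fbg_tip_in_camera(tip_fbg_xyz, T_cam_from_fbg):
    """T_cam_from_fbg: 4x4 homogeneous registration matrix from calibration."""
    p = np.append(np.asarray(tip_fbg_xyz, dtype=float), 1.0)
    return (T_cam_from_fbg @ p)[:3]

def fuse_estimates(p_vision, p_fbg_cam, w_vision=0.6):
    """Illustrative fixed-weight blend of the two tip-position estimates."""
    return w_vision * np.asarray(p_vision) + (1.0 - w_vision) * np.asarray(p_fbg_cam)
```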
The role of vision on hand preshaping during reach to grasp.
Winges, Sara A; Weber, Douglas J; Santello, Marco
2003-10-01
During reaching to grasp objects of different shapes, hand posture is molded gradually to the object's contours. The present study examined the extent to which the temporal evolution of hand posture depends on continuous visual feedback. We asked subjects to reach and grasp objects of different shapes under five vision conditions (VCs). Subjects wore liquid crystal spectacles that occluded vision at four different latencies from the onset of the reach. As a control, full-vision trials (VC5) were interspersed among the blocked-vision trials. Object shapes and all VCs were presented to the subjects in random order. Hand posture was measured by 15 sensors embedded in a glove. Linear regression analysis, discriminant analysis, and information theory were used to assess the effect of removing vision on the temporal evolution of hand shape. We found that reach duration increased when vision was occluded early in the reach. This was caused primarily by a slower approach of the hand toward the object near the end of the reach. However, vision condition did not have a significant effect on the covariation patterns of joint rotations, indicating that the gradual evolution of hand posture occurs in a similar fashion regardless of vision. Discriminant analysis further supported this interpretation, as the extent to which hand posture resembled object shape and the rate at which hand posture discrimination occurred throughout the movement were similar across vision conditions. These results extend previous observations on memory-guided reaches by showing that continuous visual feedback of the hand and/or object is not necessary for the hand to gradually conform to object contours.
Chung, King; Mongeau, Luc; McKibben, Nicholas
2009-04-01
Wind noise can be a significant problem for hearing instrument users. This study examined the polar characteristics of flow noise at the outputs of two behind-the-ear digital hearing aids, and of a microphone mounted on the surface of a cylinder, at flow velocities ranging from a gentle breeze (4.5 m/s) to a strong gale (22.5 m/s). The hearing aids were programmed in an anechoic chamber and tested in a quiet wind tunnel for flow noise recordings. Flow noise levels were estimated by normalizing the overall gain of the hearing aids to 0 dB. The results indicated that the two hearing aids had similar flow noise characteristics: the noise level was generally lowest when the microphone faced upstream, higher when the microphone faced downstream, and highest for frontal and rearward incidence angles. Directional microphones often generated higher flow noise levels than omnidirectional microphones, but they could reduce far-field background noise, resulting in a lower ambient noise level than omnidirectional microphones. Data for the academic microphone-on-cylinder configuration suggested that both turbulence and flow impingement might have contributed to the generation of flow noise in the hearing aids. Clinical and engineering design applications are discussed.
Advanced Sensors Boost Optical Communication, Imaging
NASA Technical Reports Server (NTRS)
2009-01-01
Brooklyn, New York-based Amplification Technologies Inc. (ATI) employed Phase I and II SBIR funding from NASA's Jet Propulsion Laboratory to advance the company's solid-state photomultiplier technology. Under the SBIR, ATI developed a small, energy-efficient, extremely high-gain sensor capable of detecting light down to single photons in the near-infrared wavelength range. The company has commercialized this technology in the form of its NIRDAPD photomultiplier, ideal for use in free-space optical communications, lidar and ladar, night vision goggles, and other light-sensing applications.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) comprising infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. To meet these requirements, sensors will need advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, the testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
First Experiences with Kinect v2 Sensor for Close Range 3d Modelling
NASA Astrophysics Data System (ADS)
Lachat, E.; Macher, H.; Mittet, M.-A.; Landes, T.; Grussenmeyer, P.
2015-02-01
RGB-D cameras, also known as range imaging cameras, are a recent generation of sensors. As they are suitable for measuring distances to objects at high frame rates, such sensors are increasingly used for 3D acquisition, and more generally for applications in robotics or computer vision. This kind of sensor became popular especially after the Kinect v1 (Microsoft) arrived on the market in November 2010. In July 2014, Microsoft released a new sensor, the Kinect for Windows v2, based on a different technology from its first device. However, because of its initial development for video games, the quality assessment of this new device for 3D modelling represents a major investigation axis. In this paper, first experiences with the Kinect v2 sensor are related, and its suitability for close-range 3D modelling is investigated. For this purpose, error sources in the output data as well as a calibration approach are presented.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but they can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected as well as possible from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, ...) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties regarding stability, robustness with respect to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
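A classical instance of such a control law is image-based visual servoing with point features, where the camera velocity is computed as v = -λ L⁺ (s - s*) from the interaction matrix L. The sketch below implements this textbook formulation; it illustrates the principle rather than any specific system from the talk, and the gain and feature depths are placeholders.

```python
# Hedged sketch: textbook image-based visual servoing with point features.
import numpy as np

def interaction_matrix(points, depths):
    """points: list of normalized image coordinates (x, y); depths: feature depths Z."""
    rows = []
    for (x, y), z in zip(points, depths):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points, desired, depths, gain=0.5):
    s = np.array(points, dtype=float).ravel()
    s_star = np.array(desired, dtype=float).ravel()
    L = interaction_matrix(points, depths)
    return -gain * np.linalg.pinv(L) @ (s - s_star)   # 6-DOF camera velocity twist
```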
Brosed, Francisco Javier; Aguilar, Juan José; Guillomía, David; Santolaria, Jorge
2011-01-01
This article discusses different non-contact 3D measuring strategies and presents a model for measuring complex-geometry parts, manipulated by a robot arm, using a novel vision system consisting of a laser triangulation sensor and a motorized linear stage. First, the geometric model, incorporating a simple automatic module for long-term stability improvement, is outlined. The new method used in the automatic module allows the sensor setup, including the motorized linear stage, to be prepared for scanning without external measurement devices. In the measurement model, the robot acts only as a part positioner with high repeatability. Its position and orientation data are not used for the measurement, and therefore it is not directly "coupled" as an active component in the model. The function of the robot is to present the various surfaces of the workpiece within the measurement range of the vision system, which is responsible for the measurement. Thus, the whole system is not affected by the robot's own trajectory-following errors, except those due to a lack of static repeatability. For the indirect link between the vision system and the robot, the model developed requires only the measurement of one first piece as a "zero" or master piece, known through its accurate measurement using, for example, a Coordinate Measuring Machine. The proposed strategy presents a different approach from traditional robot-mounted laser triangulation systems in order to improve measurement accuracy, and several important cues for self-recalibration are explored using only a master piece. Experimental results are also presented to demonstrate the technique and the final 3D measurement accuracy. PMID:22346569
Vision Based Navigation for Autonomous Cooperative Docking of CubeSats
NASA Astrophysics Data System (ADS)
Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker
2018-05-01
A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the ESA Automated Transfer Vehicle Guidance, Navigation & Control (GNC) performance and of the Russian docking system shows that the docking of two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations allows rotations to be unambiguously differentiated from translations between the target and chaser docking ports and enables a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.
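One standard way to realize the LED-based relative navigation described above is to solve a Perspective-n-Point problem between the known LED layout on the target and its imaged positions. The OpenCV sketch below is an illustrative example only; the LED geometry, camera intrinsics, distortion model, and solver choice are assumptions, not the mission design.

```python
# Hedged sketch: relative pose of the target docking port from imaged LEDs via PnP.
import cv2
import numpy as np

def led_pose(image_points, led_points_3d, camera_matrix, dist_coeffs):
    """image_points: Nx2 pixel coords; led_points_3d: Nx3 LED positions in the target frame."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(led_points_3d, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation from target frame to camera frame
    return R, tvec                    # relative attitude and position of the target
```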
Dominguez Veiga, Jose Juan; O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E
2017-08-04
Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, that is, by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and place them in folders according to their training label before retraining the network. We applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to meet infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals were used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), were collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification on the same dataset was also performed using the more conventional approach of feature extraction and classification with random forest classifiers. With the collected dataset and the proposed method, the different exercises could be recognized with 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED. The high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises are sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. ©Jose Juan Dominguez Veiga, Martin O'Reilly, Darragh Whelan, Brian Caulfield, Tomas E Ward. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 04.08.2017.
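The data-preparation step at the heart of this approach, rendering each labeled IMU window as a small image, can be sketched as below; the paths, figure size, and windowing are illustrative assumptions, and the subsequent Inception retraining step is not shown.

```python
# Hedged sketch: save one IMU window as a labeled plot image for CNN retraining.
import os
import matplotlib
matplotlib.use("Agg")                      # render off-screen
import matplotlib.pyplot as plt

def save_window_as_image(window, label, index, out_dir="plots"):
    """window: array-like of shape (samples, channels), e.g. accel + gyro axes."""
    os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)   # roughly 224x224 px
    ax.plot(window)                       # one curve per sensor channel
    ax.axis("off")                        # the classifier only needs the waveform shapes
    fig.savefig(os.path.join(out_dir, label, f"{index}.png"),
                bbox_inches="tight", pad_inches=0)
    plt.close(fig)
```

The resulting folder-per-label image tree is exactly the layout expected by common image-classifier retraining scripts.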
GPS Multipath Fade Measurements to Determine L-Band Ground Reflectivity Properties
NASA Technical Reports Server (NTRS)
Kavak, Adnan; Xu, Guanghan; Vogel, W. J.
1996-01-01
In personal satellite communications, especially when the line-of-sight is clear, ground specular reflected signals along with direct signals are received by low gain, almost omni-directional subscriber antennas. A six-channel, C/A code processing, global positioning system (GPS) receiver with an almost omni-directional patch antenna was used to take measurements over three types of ground to characterize 1.575 GHz specular ground reflections and ground dielectric properties. Fade measurements were taken over grass, asphalt, and lake water surfaces by placing the antenna in a vertical position at a fixed height from the ground. Electrical characteristics (conductivity and dielectric constant) of these surfaces (grass, asphalt, lake water) were obtained by matching computer simulations to the experimental results.
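The link between measured fade depth and ground electrical properties runs through the Fresnel reflection coefficients. As an illustrative forward model (textbook formulas, not the paper's matched simulation), the sketch below computes the horizontal- and vertical-polarization reflection coefficients at the GPS L1 frequency from an assumed conductivity and dielectric constant.

```python
# Hedged sketch: Fresnel ground-reflection coefficients versus grazing angle.
import numpy as np

def ground_reflection(grazing_deg, eps_r, sigma_s_per_m, freq_hz=1.57542e9):
    lam = 3e8 / freq_hz
    eps = eps_r - 1j * 60.0 * lam * sigma_s_per_m          # complex relative permittivity
    psi = np.radians(grazing_deg)
    root = np.sqrt(eps - np.cos(psi) ** 2)
    gamma_h = (np.sin(psi) - root) / (np.sin(psi) + root)              # horizontal pol.
    gamma_v = (eps * np.sin(psi) - root) / (eps * np.sin(psi) + root)  # vertical pol.
    return gamma_h, gamma_v

# Comparing |gamma| (and hence predicted multipath fade depth) against measured
# fades over grass, asphalt, or water allows candidate eps_r/sigma values to be tested.
```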