Sample records for multisensor-based robotics

  1. Adaptive multisensor fusion for planetary exploration rovers

    NASA Technical Reports Server (NTRS)

    Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri

    1992-01-01

    The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and the sensing modalities that will allow the perception and interpretation of the environment. Then, based on theoretical reflectance and emittance models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature and roughness. The theoretical concepts, the design and first results of the multisensor perception system are presented.

  2. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics

    PubMed Central

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-01-01

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more specifically dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques. PMID:28561750

  3. Enhanced Flexibility and Reusability through State Machine-Based Architectures for Multisensor Intelligent Robotics.

    PubMed

    Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele

    2017-05-31

    This paper presents a state machine-based architecture, which enhances the flexibility and reusability of industrial robots, more specifically dual-arm multisensor robots. The proposed architecture, in addition to allowing absolute control of the execution, eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented for evaluating the presented approach versus traditional robot programming techniques.

  4. A Method for Improving the Pose Accuracy of a Robot Manipulator Based on Multi-Sensor Combined Measurement and Data Fusion

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua

    2015-01-01

    An improvement method for the pose accuracy of a robot manipulator by using a multiple-sensor combination measuring system (MCMS) is presented. It is composed of a visual sensor, an angle sensor and a serial robot. The visual sensor is utilized to measure the position of the manipulator in real time, and the angle sensor is rigidly attached to the manipulator to obtain its orientation. Owing to the higher accuracy of the multiple sensors, two efficient data fusion approaches, the Kalman filter (KF) and the multi-sensor optimal information fusion algorithm (MOIFA), are used to fuse the position and orientation of the manipulator. The simulation and experimental results show that the pose accuracy of the robot manipulator is improved dramatically, by 38%∼78%, with the multi-sensor data fusion. Compared with reported pose accuracy improvement methods, the primary advantage of this method is that it does not require the complex solution of the kinematic parameter equations, additional motion constraints, or the complicated procedures of traditional vision-based methods. It makes the robot processing more autonomous and accurate. To improve the reliability and accuracy of the pose measurements of the MCMS, the visual sensor repeatability is experimentally studied. An optimal range of 1 × 0.8 × 1 ∼ 2 × 0.8 × 1 m in the field of view (FOV) is indicated by the experimental results. PMID:25850067
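
    The abstract does not give the fusion equations, but the covariance-weighted combination that underlies Kalman-filter and optimal-information-fusion schemes such as MOIFA can be illustrated with a minimal sketch (the sensor covariances and values below are assumptions for illustration, not the authors' data or implementation):

    ```python
    import numpy as np

    def fuse_estimates(x1, P1, x2, P2):
        """Covariance-weighted fusion of two independent estimates of the same state.

        Each sensor's estimate is weighted by the inverse of its covariance,
        which is the core idea behind optimal information fusion.
        """
        P1_inv = np.linalg.inv(P1)
        P2_inv = np.linalg.inv(P2)
        P_fused = np.linalg.inv(P1_inv + P2_inv)          # fused covariance
        x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)   # fused state
        return x_fused, P_fused

    # Hypothetical example: fuse a visual-sensor position fix with an
    # angle-sensor-derived one (units and values are illustrative only).
    x_vision = np.array([0.52, 1.03, 0.75])
    P_vision = np.diag([1e-4, 1e-4, 4e-4])
    x_angle  = np.array([0.50, 1.00, 0.74])
    P_angle  = np.diag([9e-4, 9e-4, 1e-4])

    x_f, P_f = fuse_estimates(x_vision, P_vision, x_angle, P_angle)
    print(x_f, np.diag(P_f))
    ```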

  5. Multisensor-based human detection and tracking for mobile service robots.

    PubMed

    Bellotto, Nicola; Hu, Huosheng

    2009-02-01

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in the surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to also be very discriminative in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the legs' positions using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
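
    The tracker described above fuses laser leg detections and camera face detections through a sequential implementation of the unscented Kalman filter; the sequential-update idea itself (apply each sensor's measurement to the same state, one after the other) can be sketched with an ordinary linear Kalman update (a simplified toy example with assumed noise values, not the authors' UKF code):

    ```python
    import numpy as np

    def kalman_update(x, P, z, H, R):
        """One standard Kalman measurement update (linear sketch)."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # State: planar position of the tracked person [px, py].
    x = np.array([0.0, 0.0])
    P = np.eye(2) * 1.0
    H = np.eye(2)  # both detectors observe position directly in this toy setup

    # Sequential updates: first the laser leg detection, then the camera face
    # detection, each with its own (assumed) measurement noise.
    z_legs, R_legs = np.array([1.9, 0.4]), np.diag([0.05, 0.05])
    z_face, R_face = np.array([2.1, 0.5]), np.diag([0.20, 0.20])

    x, P = kalman_update(x, P, z_legs, H, R_legs)
    x, P = kalman_update(x, P, z_face, H, R_face)
    print(x)
    ```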

  6. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  7. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications †

    PubMed Central

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Muroyama, Masanori

    2017-01-01

    Robot tactile sensation can enhance human–robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as “sensor platform LSI”) as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated. PMID:29061954

  8. A Tactile Sensor Network System Using a Multiple Sensor Platform with a Dedicated CMOS-LSI for Robot Applications.

    PubMed

    Shao, Chenzhong; Tanaka, Shuji; Nakayama, Takahiro; Hata, Yoshiyuki; Bartley, Travis; Nonomura, Yutaka; Muroyama, Masanori

    2017-08-28

    Robot tactile sensation can enhance human-robot communication in terms of safety, reliability and accuracy. The final goal of our project is to widely cover a robot body with a large number of tactile sensors, which has significant advantages such as accurate object recognition, high sensitivity and high redundancy. In this study, we developed a multi-sensor system with dedicated Complementary Metal-Oxide-Semiconductor (CMOS) Large-Scale Integration (LSI) circuit chips (referred to as "sensor platform LSI") as a framework of a serial bus-based tactile sensor network system. The sensor platform LSI supports three types of sensors: an on-chip temperature sensor, off-chip capacitive and resistive tactile sensors, and communicates with a relay node via a bus line. The multi-sensor system was first constructed on a printed circuit board to evaluate basic functions of the sensor platform LSI, such as capacitance-to-digital and resistance-to-digital conversion. Then, two kinds of external sensors, nine sensors in total, were connected to two sensor platform LSIs, and temperature, capacitive and resistive sensing data were acquired simultaneously. Moreover, we fabricated flexible printed circuit cables to demonstrate the multi-sensor system with 15 sensor platform LSIs operating simultaneously, which showed a more realistic implementation in robots. In conclusion, the multi-sensor system with up to 15 sensor platform LSIs on a bus line supporting temperature, capacitive and resistive sensing was successfully demonstrated.

  9. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636

  10. Research on the attitude detection technology of the tetrahedron robot

    NASA Astrophysics Data System (ADS)

    Gong, Hao; Chen, Keshan; Ren, Wenqiang; Cai, Xin

    2017-10-01

    Traditional attitude detection technology cannot tackle the problem of attitude detection for a polyhedral robot. Thus we propose a novel multi-sensor data fusion algorithm based on the Kalman filter. In the algorithm a tetrahedron robot is investigated. We devise an attitude detection system for the polyhedral robot and conduct the verification of the data fusion algorithm. It turns out that the minimal attitude detection system we devise can capture the attitudes of the tetrahedron robot in different working conditions. Thus the kinematics model we establish for the tetrahedron robot is correct, and the feasibility of the attitude detection system is proven.

  11. Automatic Operation For A Robot Lawn Mower

    NASA Astrophysics Data System (ADS)

    Huang, Y. Y.; Cao, Z. L.; Oh, S. J.; Kattan, E. U.; Hall, E. L.

    1987-02-01

    A domestic mobile robot, a lawn mower, which performs in automatic operation mode, has been built at the Center of Robotics Research, University of Cincinnati. The robot lawn mower automatically completes its work with the region-filling operation, a new kind of path planning for mobile robots. Some strategies for region filling in path planning have been developed for a partly known or an unknown environment. Also, an advanced omnidirectional navigation system and a multisensor-based control system are used in the automatic operation. Research on the robot lawn mower, especially on the region filling of path planning, is significant for industrial and agricultural applications.

  12. Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    PubMed Central

    Garcia, Gabriel J.; Corrales, Juan A.; Pomares, Jorge; Torres, Fernando

    2009-01-01

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review on the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor and multi-sensor controllers which combine several sensors. PMID:22303146

  13. International Assessment of Unmanned Ground Vehicles

    DTIC Science & Technology

    2008-02-01

    research relevant to ground robotics include • Multi-sensor data fusion • Stereovision • Dedicated robots, including legged robots, tracked robots... Technology Laboratory has developed several mobile robots with legged, wheeled, rolling, rowing, and hybrid locomotion. Areas of particular emphasis... 117 UK Department of Trade and Industry (DTI) Global Watch Mission. November 2006. Mechatronics in Russia. 118 CRDI Web Site: http

  14. Low Cost Multi-Sensor Robot Laser Scanning System and its Accuracy Investigations for Indoor Mapping Application

    NASA Astrophysics Data System (ADS)

    Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.

    2017-11-01

    In order to automate the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The multi-sensor robot laser scanning system includes a panorama camera, a laser scanner, an inertial measurement unit, etc., which are calibrated and synchronized together to achieve simultaneous collection of 3D indoor data. Experiments are undertaken in a typical indoor scene and the data generated by the proposed system are compared with ground truth data collected by a TLS scanner, showing that 99.2% of points agree to within 0.25 m, which demonstrates the applicability and precision of the system in indoor mapping applications.
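
    The reported figure of 99.2% below 0.25 m is a threshold-style accuracy statistic; one plausible way such a number is computed from per-point deviations against the TLS ground truth is sketched below (the distance data here are simulated for illustration, not taken from the paper):

    ```python
    import numpy as np

    def fraction_within(errors_m, threshold_m=0.25):
        """Fraction of per-point errors below a distance threshold (e.g. 0.25 m)."""
        errors_m = np.asarray(errors_m)
        return np.count_nonzero(errors_m < threshold_m) / errors_m.size

    # Hypothetical per-point distances between the robot-scanner cloud and the
    # TLS ground-truth cloud (e.g. nearest-neighbour distances, in metres).
    errors = np.abs(np.random.normal(0.05, 0.07, size=10000))
    print(f"{100 * fraction_within(errors):.1f}% of points within 0.25 m")
    ```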

  15. Dynamic multisensor fusion for mobile robot navigation in an indoor environment

    NASA Astrophysics Data System (ADS)

    Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.

    2001-10-01

    This study is a preliminary step towards developing a multi-purpose autonomous robust carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera and IR sensors, for map building and navigation of a mobile robot, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We give an explanation of the robot system architecture designed and implemented in this study and a short review of existing techniques, since several recent thorough books and review papers exist on this topic. We focus on the main results with relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principle of the navigation and guidance architecture, and then with the detailed functions of recognizing updated environments, obstacle detection and motion assessment, with first results from the simulation runs.

  16. Evaluation of a novel chemical sensor system to detect clinical mastitis in bovine milk.

    PubMed

    Mottram, Toby; Rudnitskaya, Alisa; Legin, Andrey; Fitzpatrick, Julie L; Eckersall, P David

    2007-05-15

    Automatic detection of clinical mastitis is an essential part of high-performance and robotic milking. Currently available technology (conductivity monitoring) is unable to achieve acceptable specificity or sensitivity in the detection of clinical mastitis or other clinical diseases. Arrays of sensors with high cross-sensitivity have been successfully applied for recognition and quantitative analysis of other multicomponent liquids. An experiment was conducted to determine whether a multisensor system ("electronic tongue") based on an array of chemical sensors and suitable data processing could be used to discriminate between milk secretions from infected and healthy glands. Measurements were made with a multisensor system on milk samples from two different farms in two experiments. A total of 67 samples of milk from both mastitic and healthy glands were collected in two sets. It was demonstrated that the multisensor system could distinguish between control and clinically mastitic milk samples (p=0.05). The sensitivity and specificity of the sensor system (93% and 96%, respectively) showed an improvement over conductivity (56% and 82%, respectively). The multisensor system offers a novel method of improving mastitis detection.
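
    The sensitivity and specificity quoted above are ordinary confusion-matrix quantities; the calculation is sketched below with hypothetical counts chosen only to show how such percentages arise (the paper's abstract does not report the underlying counts):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical confusion-matrix counts for a 67-sample experiment,
    # chosen only to illustrate the formula.
    sens, spec = sensitivity_specificity(tp=14, fn=1, tn=50, fp=2)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
    ```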

  17. Integrated multi-sensor fusion for mapping and localization in outdoor environments for mobile robots

    NASA Astrophysics Data System (ADS)

    Emter, Thomas; Petereit, Janko

    2014-05-01

    An integrated multi-sensor fusion framework for localization and mapping for autonomous navigation in unstructured outdoor environments, based on extended Kalman filters (EKF), is presented. The sensors for localization include an inertial measurement unit, a GPS, a fiber optic gyroscope, and wheel odometry. Additionally, a 3D LIDAR is used for simultaneous localization and mapping (SLAM). A 3D map is built while concurrently a localization in the 2D map established so far is estimated with the current scan of the LIDAR. Despite the longer run-time of the SLAM algorithm compared to the EKF update, a high update rate is still guaranteed by sophisticatedly joining and synchronizing two parallel localization estimators.
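
    The localization loop described here alternates EKF motion-model prediction (IMU, gyroscope, odometry) with absolute measurement updates (GPS); a minimal planar sketch of that loop, under an assumed unicycle motion model rather than the authors' actual vehicle and sensor models, is:

    ```python
    import numpy as np

    def ekf_predict(x, P, u, dt, Q):
        """Predict step with a unicycle odometry model; x = [px, py, yaw]."""
        v, w = u                                    # forward and angular velocity
        px, py, yaw = x
        x_pred = np.array([px + v * dt * np.cos(yaw),
                           py + v * dt * np.sin(yaw),
                           yaw + w * dt])
        F = np.array([[1, 0, -v * dt * np.sin(yaw)],
                      [0, 1,  v * dt * np.cos(yaw)],
                      [0, 0,  1]])                  # Jacobian of the motion model
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, H, R):
        """Linear(ized) measurement update, e.g. a GPS position fix."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P
        return x, P

    # Toy run: one odometry/gyro prediction followed by one GPS update.
    x, P = np.zeros(3), np.eye(3) * 0.1
    Q = np.diag([0.01, 0.01, 0.001])
    x, P = ekf_predict(x, P, u=(1.0, 0.05), dt=0.1, Q=Q)
    H = np.array([[1.0, 0, 0], [0, 1.0, 0]])        # GPS observes (px, py)
    x, P = ekf_update(x, P, z=np.array([0.11, 0.0]), H=H, R=np.eye(2) * 0.5)
    print(x)
    ```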

  18. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence.

    PubMed

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-02-18

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
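
    The geometric idea of making the characteristic lines of two coordinate systems coincide through a rotation followed by a translation can be sketched for a single line pair as follows (a simplified NumPy illustration with hypothetical line data; the paper's construction uses a full set of characteristic lines, which also fixes the rotation about the line that a single pair leaves undetermined):

    ```python
    import numpy as np

    def rotation_aligning(d_from, d_to):
        """Rotation matrix that rotates unit vector d_from onto d_to (Rodrigues form)."""
        d_from = d_from / np.linalg.norm(d_from)
        d_to = d_to / np.linalg.norm(d_to)
        v = np.cross(d_from, d_to)
        c = float(np.dot(d_from, d_to))
        if np.isclose(c, -1.0):                 # opposite vectors: 180-degree turn
            axis = np.cross(d_from, [1.0, 0.0, 0.0])
            if np.linalg.norm(axis) < 1e-9:
                axis = np.cross(d_from, [0.0, 1.0, 0.0])
            axis /= np.linalg.norm(axis)
            return 2.0 * np.outer(axis, axis) - np.eye(3)
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1.0 + c)

    # Characteristic line of frame A (a point on it and its direction) and the
    # corresponding line of frame B; values are hypothetical.
    p_a, d_a = np.array([0.1, 0.2, 0.0]), np.array([1.0, 0.0, 0.0])
    p_b, d_b = np.array([1.0, 1.0, 0.5]), np.array([0.0, 1.0, 0.0])

    R = rotation_aligning(d_a, d_b)   # rotate so the directions coincide
    t = p_b - R @ p_a                 # then translate so the points coincide
    print(R @ d_a, R @ p_a + t)       # -> direction d_b, point p_b
    ```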

  19. A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence

    PubMed Central

    Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia

    2016-01-01

    Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite the high accuracy, this might result in no solution due to the use of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on the equation solution but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relation, the characteristic lines can be made to coincide by a series of rotations and translations. The transformation matrix can be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot with the calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203

  20. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
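
    Omni-directional motion with four Mecanum wheels comes from mapping a desired body twist (vx, vy, yaw rate) to individual wheel speeds; one commonly used form of that inverse kinematics is sketched below (wheel numbering, sign convention and geometry parameters are assumptions, not taken from the paper):

    ```python
    import numpy as np

    def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.20, ly=0.15):
        """Inverse kinematics of a four-Mecanum-wheel base.

        vx, vy : desired body velocities (m/s); wz : yaw rate (rad/s)
        r      : wheel radius (m); lx, ly : half wheelbase / half track (m)
        Returns wheel angular velocities [front-left, front-right,
        rear-left, rear-right] in rad/s, for one common sign convention.
        """
        k = lx + ly
        J = np.array([[1, -1, -k],
                      [1,  1,  k],
                      [1,  1, -k],
                      [1, -1,  k]]) / r
        return J @ np.array([vx, vy, wz])

    # Pure sideways motion: all wheels spin while the body translates along +y.
    print(mecanum_wheel_speeds(vx=0.0, vy=0.3, wz=0.0))
    ```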

  1. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.

  2. Development of a multisensor-based bio-botanic robot and its implementation using a self-designed embedded board.

    PubMed

    Chang, Chung-Liang; Sie, Ming-Fong; Shie, Jin-Long

    2011-01-01

    This paper presents the design concept of a bio-botanic robot which demonstrates its behavior based on plant growth. In addition, it can reflect the different phases of plant growth depending on the proportional amounts of light, temperature and water. The mechanism design is made up of a processed aluminum base, spring, polydimethylsiloxane (PDMS) and actuator to constitute the plant base and plant body. The control system consists of two micro-controllers and a self-designed embedded development board, where the main controller transmits the values of the environmental sensing module within the embedded board to a sub-controller. The sub-controller determines the growth stage, growth height, and time and transmits its decision value to the main controller. Finally, based on the data transmitted by the sub-controller, the main controller controls the growth phase of the bio-botanic robot using a servo motor and leaf actuator. The research result not only helps children understand the variation of plant growth but also is entertainment-educational through its demonstration of the growth process of the bio-botanic robot in a short time.

  3. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique increases the accuracy of object classification favorably as compared to two other commonly used techniques.
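
    As a rough intuition for the market-based aggregation step (the paper's prediction market is more elaborate: agents effectively trade on the "landmine" outcome so that the market price reflects their combined confidence), a simple weighted-average stand-in and decision rule might look like this (weights and threshold are hypothetical):

    ```python
    def aggregate_beliefs(beliefs, weights=None):
        """Aggregate per-robot 'landmine' beliefs into one market-style price.

        beliefs : list of probabilities in [0, 1] reported by the robot agents
        weights : optional per-agent weights (e.g. sensor reliability or stake)
        Returns the weighted average, interpreted as the aggregate confidence.
        """
        if weights is None:
            weights = [1.0] * len(beliefs)
        total = sum(weights)
        return sum(b * w for b, w in zip(beliefs, weights)) / total

    # Three robots with different sensor types report their beliefs; the
    # decision-maker agent acts on the aggregate (threshold is hypothetical).
    aggregate = aggregate_beliefs([0.8, 0.55, 0.7], weights=[1.0, 0.6, 0.9])
    if aggregate > 0.6:
        print(f"aggregate belief {aggregate:.2f}: deploy more robots to confirm")
    ```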

  4. Development of a Multisensor-Based Bio-Botanic Robot and Its Implementation Using a Self-Designed Embedded Board

    PubMed Central

    Chang, Chung-Liang; Sie, Ming-Fong; Shie, Jin-Long

    2011-01-01

    This paper presents the design concept of a bio-botanic robot which demonstrates its behavior based on plant growth. Besides, it can reflect the different phases of plant growth depending on the proportional amounts of light, temperature and water. The mechanism design is made up of a processed aluminum base, spring, polydimethylsiloxane (PDMS) and actuator to constitute the plant base and plant body. The control system consists of two micro-controllers and a self-designed embedded development board where the main controller transmits the values of the environmental sensing module within the embedded board to a sub-controller. The sub-controller determines the growth stage, growth height, and time and transmits its decision value to the main controller. Finally, based on the data transmitted by the sub-controller, the main controller controls the growth phase of the bio-botanic robot using a servo motor and leaf actuator. The research result not only helps children realize the variation of plant growth but also is entertainment-educational through its demonstration of the growth process of the bio-botanic robot in a short time. PMID:22247684

  5. Fault tolerant multi-sensor fusion based on the information gain

    NASA Astrophysics Data System (ADS)

    Hage, Joelle Al; El Najjar, Maan E.; Pomorski, Denis

    2017-01-01

    In the last decade, multi-robot systems have been used in several applications such as the army, intervention in areas presenting danger to human life, the management of natural disasters, environmental monitoring, exploration and agriculture. The integrity of the localization of the robots must be ensured in order to achieve their mission in the best conditions. Robots are equipped with proprioceptive (encoders, gyroscope) and exteroceptive (Kinect) sensors. However, these sensors can be affected by various fault types that can be assimilated to erroneous measurements, bias, outliers, drifts, etc. In the absence of a sensor fault diagnosis step, the integrity and the continuity of the localization are affected. In this work, we present a multi-sensor fusion approach with Fault Detection and Exclusion (FDE) based on information theory. In this context, we are interested in the information gain given by an observation, which may be relevant when dealing with the fault tolerance aspect. Moreover, threshold optimization based on the quantity of information given by a decision on the true hypothesis is highlighted.
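
    The fault detection and exclusion step rests on measuring how much information an observation contributes; a minimal sketch of such a test, using the KL divergence between the prior and posterior state estimates as the information measure and an assumed fixed threshold (whereas the paper optimizes the threshold from information-theoretic criteria), is:

    ```python
    import numpy as np

    def kl_gaussian(mu0, P0, mu1, P1):
        """KL divergence KL(N(mu1, P1) || N(mu0, P0)) between two Gaussians."""
        k = len(mu0)
        P0_inv = np.linalg.inv(P0)
        diff = mu0 - mu1
        return 0.5 * (np.trace(P0_inv @ P1) + diff @ P0_inv @ diff - k
                      + np.log(np.linalg.det(P0) / np.linalg.det(P1)))

    def check_sensor(x_prior, P_prior, x_post, P_post, gain_threshold):
        """Flag a sensor whose update yields an implausibly large information gain.

        The gain is measured as the KL divergence between posterior and prior;
        the exclusion rule and threshold here are illustrative assumptions.
        """
        gain = kl_gaussian(x_prior, P_prior, x_post, P_post)
        return ("exclude" if gain > gain_threshold else "keep"), gain

    x_prior, P_prior = np.array([0.0, 0.0]), np.eye(2) * 0.2
    x_post,  P_post  = np.array([1.5, -1.2]), np.eye(2) * 0.05   # suspicious jump
    print(check_sensor(x_prior, P_prior, x_post, P_post, gain_threshold=5.0))
    ```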

  6. Marker-Based Multi-Sensor Fusion Indoor Localization System for Micro Air Vehicles.

    PubMed

    Xing, Boyang; Zhu, Quanmin; Pan, Feng; Feng, Xiaoxue

    2018-05-25

    A novel multi-sensor fusion indoor localization algorithm based on the ArUco marker is designed in this paper. The proposed ArUco mapping algorithm can build and correct the map of markers online with the Grubbs criterion and K-means clustering, which avoids the map distortion due to lack of correction. Based on the concept of multi-sensor information fusion, the federated Kalman filter is utilized to synthesize the multi-source information from markers, optical flow, ultrasonic and the inertial sensor, which can obtain a continuous localization result and effectively reduce the position drift due to the long-term loss of markers in pure marker localization. The proposed algorithm can be easily implemented on hardware consisting of one Raspberry Pi Zero and two STM32 micro controllers produced by STMicroelectronics (Geneva, Switzerland). Thus, a small-size and low-cost marker-based localization system is presented. The experimental results show that the speed estimation result of the proposed system is better than Px4flow, and it achieves centimeter-level accuracy in mapping and positioning. The presented system not only gives satisfying localization precision, but also has the potential to incorporate other sensors (such as visual odometry, ultra wideband (UWB) beacon and lidar) to further improve the localization performance. The proposed system can be reliably employed in Micro Aerial Vehicle (MAV) visual localization and robotics control.
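
    A federated Kalman filter runs one local filter per sensor branch (markers, optical flow, ultrasonic/inertial) and fuses their outputs in a master filter; the fusion and information-sharing step can be sketched in a generic textbook form as follows (the numbers and equal sharing factors are illustrative, not the authors' exact formulation):

    ```python
    import numpy as np

    def federated_fusion(states, covs, betas):
        """Master-filter fusion step of a federated Kalman filter.

        states, covs : local-filter estimates of the same state vector
        betas        : information-sharing factors, sum(betas) == 1
        Returns the fused estimate plus the reinitialized local estimates.
        """
        infos = [np.linalg.inv(P) for P in covs]
        P_f = np.linalg.inv(sum(infos))
        x_f = P_f @ sum(I @ x for I, x in zip(infos, states))
        # Feed the fused estimate back to the local filters, inflated by 1/beta.
        locals_out = [(x_f.copy(), P_f / b) for b in betas]
        return x_f, P_f, locals_out

    # Hypothetical 2-D position estimates from three local filters.
    states = [np.array([0.10, 0.02]), np.array([0.12, -0.01]), np.array([0.09, 0.00])]
    covs = [np.eye(2) * 0.04, np.eye(2) * 0.09, np.eye(2) * 0.02]
    x_f, P_f, _ = federated_fusion(states, covs, betas=[1/3, 1/3, 1/3])
    print(x_f)
    ```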

  7. The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.

    PubMed

    Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian

    2017-10-14

    Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy, with the aid of debugging software, as well as set steering gear identity (ID), baud rate and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensor to achieve multi-sensor fusion to accurately acquire environmental information. Finally, we implement the relevant algorithm, which can recognize the success of the robot's function for a given application. The advantage of this approach is its reliability and flexibility, as the users can develop a variety of hardware construction programs and utilize the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.

  8. Integrated multi-sensor package (IMSP) for unmanned vehicle operations

    NASA Astrophysics Data System (ADS)

    Crow, Eddie C.; Reichard, Karl; Rogan, Chris; Callen, Jeff; Seifert, Elwood

    2007-10-01

    This paper describes recent efforts to develop integrated multi-sensor payloads for small robotic platforms for improved operator situational awareness and ultimately for greater robot autonomy. The focus is on enhancements to perception through integration of electro-optic, acoustic, and other sensors for navigation and inspection. The goals are to provide easier control and operation of the robot through fusion of multiple sensor outputs, to improve interoperability of the sensor payload package across multiple platforms through the use of open standards and architectures, and to reduce integration costs by embedded sensor data processing and fusion within the sensor payload package. The solutions investigated in this project to be discussed include: improved capture, processing and display of sensor data from multiple, non-commensurate sensors; an extensible architecture to support plug and play of integrated sensor packages; built-in health, power and system status monitoring using embedded diagnostics/prognostics; sensor payload integration into standard product forms for optimized size, weight and power; and the use of the open Joint Architecture for Unmanned Systems (JAUS)/ Society of Automotive Engineers (SAE) AS-4 interoperability standard. This project is in its first of three years. This paper will discuss the applicability of each of the solutions in terms of its projected impact to reducing operational time for the robot and teleoperator.

  9. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  10. Multi-sensor electrometer

    NASA Technical Reports Server (NTRS)

    Gompf, Raymond (Inventor); Buehler, Martin C. (Inventor)

    2003-01-01

    An array of triboelectric sensors is used for testing the electrostatic properties of a remote environment. The sensors may be mounted in the heel of a robot arm scoop. To determine the triboelectric properties of a planet surface, the robot arm scoop may be rubbed on the soil of the planet and the triboelectrically developed charge measured. By having an array of sensors, different insulating materials may be measured simultaneously. The insulating materials may be selected so their triboelectric properties cover a desired range. By mounting the sensor on a robot arm scoop, the measurements can be obtained during an unmanned mission.

  11. Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet

    PubMed Central

    Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo

    2016-01-01

    Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766
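
    LS-SVM regression reduces to solving a single linear system, which is what makes it attractive for the fast forward/inverse kinematic mapping described above; a self-contained sketch on synthetic 1-D data (kernel, hyperparameters and data are illustrative assumptions, not the paper's setup) is:

    ```python
    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        """Gaussian RBF kernel matrix between row-wise sample sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
        """Least Squares SVM regression: solve the linear KKT system for (b, alpha)."""
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate(([0.0], y))
        sol = np.linalg.solve(A, rhs)
        return sol[1:], sol[0], X, sigma      # alpha, b, training inputs, sigma

    def lssvm_predict(model, Xq):
        alpha, b, X, sigma = model
        return rbf_kernel(Xq, X, sigma) @ alpha + b

    # Toy use: learn a 1-D correction between ideal and real foothold coordinates
    # (synthetic data; the paper learns the full 3-D kinematic mapping).
    X = np.linspace(0, 1, 20).reshape(-1, 1)
    y = 0.05 * np.sin(6 * X[:, 0])            # synthetic deviation to learn
    model = lssvm_fit(X, y, gamma=200.0, sigma=0.2)
    print(lssvm_predict(model, np.array([[0.25]])))
    ```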

  12. AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program

    NASA Astrophysics Data System (ADS)

    Gothard, Benny M.

    2002-02-01

    One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (reference: A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape Processing Architecture. The processor chosen is the Motorola AltiVec-G4 Power PC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II / Semi-autonomous Surrogate Vehicle processing architectures along with desktop Personal Computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.

  13. Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.

    PubMed

    Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo

    2017-11-06

    For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods to allow these individuals to fully control the assistive arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in a three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied experimental participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and of powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within reasonable time.

  14. Designing minimal space telerobotics systems for maximum performance

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Long, Mark K.; Steele, Robert D.

    1992-01-01

    The design of the remote site of a local-remote telerobot control system is described which addresses the constraints of limited computational power available at the remote site control system while providing a large range of control capabilities. The Modular Telerobot Task Execution System (MOTES) provides supervised autonomous control, shared control and teleoperation for a redundant manipulator. The system is capable of nominal task execution as well as monitoring and reflex motion. The MOTES system is minimized while providing a large capability by limiting its functionality to only that which is necessary at the remote site and by utilizing a unified multi-sensor based impedance control scheme. A command interpreter similar to one used on robotic spacecraft is used to interpret commands received from the local site. The system is written in Ada and runs in a VME environment on 68020 processors and initially controls a Robotics Research K1207 7 degree of freedom manipulator.

  15. Effects of Data Quality on the Characterization of Aerosol Properties from Multiple Sensors

    NASA Technical Reports Server (NTRS)

    Petrenko, Maksym; Ichoku, Charles; Leptoukh, Gregory

    2011-01-01

    Cross-comparison of aerosol properties between ground-based and spaceborne measurements is an important validation technique that helps to investigate the uncertainties of aerosol products acquired using spaceborne sensors. However, it has been shown that even minor differences in the cross-characterization procedure may significantly impact the results of such validation. Of particular consideration is the quality assurance / quality control (QA/QC) information - auxiliary data indicating a "confidence" level (e.g., Bad, Fair, Good, Excellent, etc.) conferred by the retrieval algorithms on the produced data. Depending on the treatment of available QA/QC information, a cross-characterization procedure has the potential of filtering out invalid data points, such as uncertain or erroneous retrievals, which tend to reduce the credibility of such comparisons. However, under certain circumstances, even high QA/QC values may not fully guarantee the quality of the data. For example, retrievals in proximity of a cloud might be particularly perplexing for an aerosol retrieval algorithm, resulting in invalid data that, nonetheless, could be assigned a high QA/QC confidence. In this presentation, we will study the effects of several QA/QC parameters on the cross-characterization of aerosol properties between the data acquired by multiple spaceborne sensors. We will utilize the Multi-sensor Aerosol Products Sampling System (MAPSS), which provides a consistent platform for multi-sensor comparison, including collocation with measurements acquired by the ground-based Aerosol Robotic Network (AERONET). The multi-sensor spaceborne data analyzed include those acquired by the Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP satellite instruments.

  16. The Multi-Sensor Aerosol Products Sampling System (MAPSS) for Integrated Analysis of Satellite Retrieval Uncertainties

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Petrenko, Maksym; Leptoukh, Gregory

    2010-01-01

    Among the known atmospheric constituents, aerosols represent the greatest uncertainty in climate research. Although satellite-based aerosol retrieval has practically become routine, especially during the last decade, there is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus and the inconsistencies are not well characterized and understood, there will be no way of developing reliable climate data records from satellite aerosol measurements. Fortunately, the most globally representative, well-calibrated ground-based aerosol measurements corresponding to the satellite-retrieved products are available from the Aerosol Robotic Network (AERONET). To adequately utilize the advantages offered by this vital resource, an online Multi-sensor Aerosol Products Sampling System (MAPSS) was recently developed. The aim of MAPSS is to facilitate detailed comparative analysis of satellite aerosol measurements from different sensors (Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP) based on the collocation of these data products over AERONET stations. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainty analysis of aerosol products from multiple satellite sensors.

  17. Fast obstacle detection based on multi-sensor information fusion

    NASA Astrophysics Data System (ADS)

    Lu, Linli; Ying, Jie

    2014-11-01

    Obstacle detection is one of the key problems in areas such as driving assistance and mobile robot navigation, and it cannot meet the actual demand by using a single sensor. A method is proposed to realize real-time access to information about the obstacle in front of the robot and to calculate the real size of the obstacle area according to the mechanism of triangle similarity in the imaging process, by fusing data from a camera and an ultrasonic sensor, which supports the local path planning decision. In the image analysis part, the obstacle detection region is limited according to a complementary principle: we chose the ultrasonic detection range as the region for obstacle detection when the obstacle is relatively near the robot, and the travelling road area in front of the robot as the region for relatively long-distance detection. The obstacle detection algorithm is adapted from a powerful background subtraction algorithm, ViBe: Visual Background Extractor. We extracted an obstacle-free region in front of the robot in the initial frame; this region provided a reference sample set of gray-scale values for obstacle detection. Experiments detecting different obstacles at different distances give the accuracy of the obstacle detection and the error percentage between the calculated size and the actual size of the detected obstacle. Experimental results show that the detection scheme can effectively detect obstacles in front of the robot and provide the size of the obstacle with relatively high dimensional accuracy.
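
    The size calculation by triangle similarity combines the pixel extent of the detected obstacle region with the ultrasonic range through the pinhole-camera model; the relation is sketched below with hypothetical numbers (the actual calibration values are not given in the abstract):

    ```python
    def real_width_from_pixels(pixel_width, distance_m, focal_length_px):
        """Triangle-similarity estimate of an obstacle's real width.

        pixel_width     : width of the obstacle region in the image (pixels)
        distance_m      : range to the obstacle from the ultrasonic sensor (m)
        focal_length_px : camera focal length expressed in pixels
        """
        return pixel_width * distance_m / focal_length_px

    # Hypothetical numbers: a 120-pixel-wide obstacle region seen at 1.5 m with a
    # focal length of 600 px corresponds to a real width of 0.30 m.
    print(real_width_from_pixels(pixel_width=120, distance_m=1.5, focal_length_px=600))
    ```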

  18. Vision technology/algorithms for space robotics applications

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar; Defigueiredo, Rui J. P.

    1987-01-01

    The thrust of automation and robotics for space applications has been proposed for increased productivity, improved reliability, increased flexibility, higher safety, and for the performance of automating time-consuming tasks, increasing productivity/performance of crew-accomplished tasks, and performing tasks beyond the capability of the crew. This paper provides a review of efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. The evolution of future vision/sensing is projected to include the fusion of multisensors ranging from microwave to optical with multimode capability to include position, attitude, recognition, and motion parameters. The key feature of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.

  19. Multi-Sensor Person Following in Low-Visibility Scenarios

    PubMed Central

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment. PMID:22163506

  20. Multi-sensor person following in low-visibility scenarios.

    PubMed

    Sales, Jorge; Marín, Raúl; Cervera, Enric; Rodríguez, Sergio; Pérez, Javier

    2010-01-01

    Person following with mobile robots has traditionally been an important research topic. It has been solved, in most cases, by the use of machine vision or laser rangefinders. In some special circumstances, such as a smoky environment, the use of optical sensors is not a good solution. This paper proposes and compares alternative sensors and methods to perform a person following in low visibility conditions, such as smoky environments in firefighting scenarios. The use of laser rangefinder and sonar sensors is proposed in combination with a vision system that can determine the amount of smoke in the environment. The smoke detection algorithm provides the robot with the ability to use a different combination of sensors to perform robot navigation and person following depending on the visibility in the environment.

  1. The Performance Analysis of AN Indoor Mobile Mapping System with Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Tsai, G. J.; Chiang, K. W.; Chu, C. H.; Chen, Y. L.; El-Sheimy, N.; Habib, A.

    2015-08-01

    Over the years, Mobile Mapping Systems (MMSs) have been widely applied to urban mapping, path management and monitoring, cyber city, etc. The key concept of mobile mapping is based on positioning technology and photogrammetry. In order to achieve the integration, multi-sensor integrated mapping technology has been clearly established. In recent years, robotic technology has developed rapidly. Another mapping technology, based on low-cost sensors, has generally been used in robotic systems; it is known as Simultaneous Localization and Mapping (SLAM). The objective of this study is to develop a prototype of an indoor MMS for mobile mapping applications, especially to reduce the costs and enhance the efficiency of data collection and the validation of direct georeferencing (DG) performance. The proposed indoor MMS is composed of a tactical-grade Inertial Measurement Unit (IMU), the Kinect RGB-D sensor, a light detection and ranging (LIDAR) sensor, and a robot. In summary, this paper designs the payload for an indoor MMS to generate the floor plan. The first part concentrates on comparing the different positioning algorithms in the indoor environment. Next, the indoor plans are generated by two sensors, the Kinect RGB-D sensor and the LIDAR on the robot. Moreover, the generated floor plan is compared with the known plan for both validation and verification.

  2. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab

    NASA Technical Reports Server (NTRS)

    Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.

    2008-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.

  3. Study on the multi-sensors monitoring and information fusion technology of dangerous cargo container

    NASA Astrophysics Data System (ADS)

    Xu, Shibo; Zhang, Shuhui; Cao, Wensheng

    2017-10-01

    In this paper, a monitoring system for dangerous cargo containers based on multiple sensors is presented. In order to improve monitoring accuracy, multiple sensors are applied inside the dangerous cargo container. A multi-sensor information fusion solution for monitoring dangerous cargo containers is put forward, and information pre-processing, a fusion algorithm for homogeneous sensors and information fusion based on a BP neural network are illustrated. Applying multiple sensors to container monitoring in this way has some novelty.

  4. A survey of simultaneous localization and mapping on unstructured lunar complex environment

    NASA Astrophysics Data System (ADS)

    Wang, Yiqiao; Zhang, Wei; An, Pei

    2017-10-01

    Simultaneous localization and mapping (SLAM) technology is the key to realizing a lunar rover's intelligent perception and autonomous navigation. It embodies the autonomy of mobile robots and has attracted considerable attention from researchers over the past thirty years. Visual sensors are valuable for SLAM research because they provide a wealth of information. Visual SLAM uses images alone as external information to estimate the location of the robot and construct the environment map. Nowadays, SLAM technology still has problems when applied in large-scale, unstructured and complex environments. Based on the latest technology in the field of visual SLAM, this paper investigates and summarizes SLAM technology for use in the unstructured, complex environment of the lunar surface. In particular, we focus on summarizing and comparing feature detection and matching with SIFT, SURF and ORB, and discuss their advantages and disadvantages. We analyze the three main methods: SLAM based on the Extended Kalman Filter, SLAM based on the Particle Filter and SLAM based on Graph Optimization (EKF-SLAM, PF-SLAM and Graph-based SLAM). Finally, this article summarizes and discusses the key scientific and technical difficulties that visual SLAM faces in the lunar context. At the same time, we explore frontier issues such as multi-sensor fusion SLAM and multi-robot cooperative SLAM technology. We also predict the development trend of lunar rover SLAM technology and put forward some ideas for further research.
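
    As a concrete illustration of the feature pipelines compared in such surveys, the sketch below detects and matches ORB features between two frames with OpenCV; the image paths and parameter values are placeholders, not taken from the survey.

```python
# Minimal sketch of ORB feature detection and brute-force matching.
# Assumes OpenCV (cv2) is installed and that two grayscale frames exist
# at the hypothetical paths below.
import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)            # binary-descriptor detector
kp1, des1 = orb.detectAndCompute(img1, None)    # keypoints + descriptors, frame 1
kp2, des2 = orb.detectAndCompute(img2, None)    # keypoints + descriptors, frame 2

# Hamming distance suits ORB's binary descriptors; cross-check rejects
# asymmetric matches, a cheap stand-in for a ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} putative correspondences; best distance {matches[0].distance}")
```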

  5. Technology for robotic surface inspection in space

    NASA Technical Reports Server (NTRS)

    Volpe, Richard; Balaram, J.

    1994-01-01

    This paper presents on-going research in robotic inspection of space platforms. Three main areas of investigation are discussed: machine vision inspection techniques, an integrated sensor end-effector, and an orbital environment laboratory simulation. Machine vision inspection utilizes automatic comparison of new and reference images to detect on-orbit induced damage such as micrometeorite impacts. The cameras and lighting used for this inspection are housed in a multisensor end-effector, which also contains a suite of sensors for detection of temperature, gas leaks, proximity, and forces. To fully test all of these sensors, a realistic space platform mock-up has been created, complete with visual, temperature, and gas anomalies. Further, changing orbital lighting conditions are effectively mimicked by a robotic solar simulator. In the paper, each of these technology components will be discussed, and experimental results are provided.
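
    A minimal sketch of the reference-versus-new image comparison idea is given below; it uses plain differencing and thresholding with OpenCV under assumed image paths and thresholds, and is not the authors' inspection algorithm.

```python
# Illustrative change detection: difference a reference image and a new
# inspection image, then report connected regions as candidate anomalies.
import cv2

reference = cv2.imread("panel_reference.png", cv2.IMREAD_GRAYSCALE)
current   = cv2.imread("panel_inspection.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(reference, current)                    # per-pixel change
blurred = cv2.GaussianBlur(diff, (5, 5), 0)               # suppress sensor noise
_, mask = cv2.threshold(blurred, 30, 255, cv2.THRESH_BINARY)

# Connected components give candidate damage sites (e.g., impact marks).
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, num):                                   # label 0 is background
    x, y, w, h, area = stats[i]
    if area > 20:                                         # ignore tiny speckles
        print(f"candidate anomaly at ({x}, {y}), area {area} px")
```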

  6. Multi-Sensor Aerosol Products Sampling System

    NASA Technical Reports Server (NTRS)

    Petrenko, M.; Ichoku, C.; Leptoukh, G.

    2011-01-01

    Global and local properties of atmospheric aerosols have been extensively observed and measured using both spaceborne and ground-based instruments, especially during the last decade. Unique properties retrieved by the different instruments contribute to an unprecedented availability of the most complete set of complementary aerosol measurements ever acquired. However, some of these measurements remain underutilized, largely due to the complexities involved in analyzing them synergistically. To characterize the inconsistencies and bridge the gap that exists between the sensors, we have established a Multi-sensor Aerosol Products Sampling System (MAPSS), which consistently samples and generates the spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of aerosol products from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS. Samples of satellite aerosol products are extracted over Aerosol Robotic Network (AERONET) locations as well as over other locations of interest such as those with available ground-based aerosol observations. In this way, MAPSS enables a direct cross-characterization and data integration between Level-2 aerosol observations from multiple sensors. In addition, the available well-characterized co-located ground-based data provides the basis for the integrated validation of these products. This paper explains the sampling methodology and concepts used in MAPSS, and demonstrates specific examples of using MAPSS for an integrated analysis of multiple aerosol products.
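
    The snippet below sketches the kind of per-site sample statistics described above (mean, standard deviation, and correlation with the ground observation) on synthetic co-located AOD values; it is not the MAPSS implementation.

```python
# Rough sketch of spatial statistics for satellite AOD samples extracted
# around a ground site. Array contents are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
# AOD values from pixels falling within the sampling area around a site
satellite_aod = rng.normal(0.25, 0.05, size=40)
# Temporally matched ground (AERONET-like) AOD observations
ground_aod = rng.normal(0.24, 0.04, size=40)

stats = {
    "mean": float(np.mean(satellite_aod)),
    "std": float(np.std(satellite_aod, ddof=1)),
    # Pearson correlation between co-located satellite and ground samples
    "corr_with_ground": float(np.corrcoef(satellite_aod, ground_aod)[0, 1]),
}
print(stats)
```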

  7. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application is the pushing operation, in which the robot moves an object by pushing it using multiple sensors. Generally, a pushing operation consists of "approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the implications of the robot body or tool tip hitting the object are neglected. On collision, the robot's momentum may damage the sensor, the robot's surface or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation so that an external force can be accommodated. In this paper, the first compliance controller is driven by the external force estimated with a reaction torque observer(2), which provides the contact sensation. The other controller provides a non-contact sensation. A contact sensation, acquired from a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized in this paper with a visual sensor. Instead of using the visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Thus, having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Both compliance controllers adjust the mobile manipulator command reference to provide a soft pushing operation. Finally, the experimental results show the validity of the proposed method.
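
    The sketch below illustrates, in one degree of freedom, how a compliance controller of this kind can shift the command reference in response to a measured or virtual force; the gains and the depth-to-force mapping are assumptions, not the paper's controller.

```python
# One-DOF sketch of compliance-style trajectory compensation: an external force
# (measured after contact, or a "virtual force" derived from visual depth before
# contact) shifts the position command through a virtual spring-damper.
import numpy as np

dt, k, d = 0.001, 800.0, 40.0          # step [s], virtual stiffness, damping

def compliant_reference(x_ref, f_ext, offset):
    """Shift the nominal reference x_ref by the compliance offset driven by f_ext."""
    # First-order compliance dynamics: d * offset_dot + k * offset = f_ext
    offset_dot = (f_ext - k * offset) / d
    offset += offset_dot * dt
    return x_ref + offset, offset

def virtual_force_from_depth(depth, d_contact=0.05, gain=200.0):
    """Non-contact 'sensation': push back harder as the measured depth shrinks."""
    return gain * max(0.0, d_contact - depth)

offset = 0.0
for t in np.arange(0.0, 0.5, dt):
    depth = max(0.0, 0.10 - 0.3 * t)             # object approaching the tool tip
    f = virtual_force_from_depth(depth)           # before contact: visual force
    x_cmd, offset = compliant_reference(0.2, f, offset)
print(f"final commanded position: {x_cmd:.4f} m")
```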

  8. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  9. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot mounted sensor capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted onto a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system will be presented, along with initial detection results of the fusion sensor.

  10. Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.

    2007-01-01

    The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders. The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.

  11. The use of multisensor data for robotic applications

    NASA Technical Reports Server (NTRS)

    Abidi, M. A.; Gonzalez, R. C.

    1990-01-01

    The feasibility of realistic autonomous space manipulation tasks using multisensory information is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of the guiding light target, mating, and demating of the system were performed. Specifically, vision-driven techniques were implemented to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system was also equipped with a force/torque sensor that continuously monitored the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the two experiments were completed successfully and autonomously.

  12. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-12-12

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
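
    A simplified sketch of two of the ingredients described above, exponential moving averaging of noisy RSS samples and a stochastic gradient-ascent positioning step, is given below on a synthetic log-distance RSS field; it is not the authors' RSO algorithm.

```python
# Toy relay-positioning loop: smooth noisy RSS samples with an EMA, then take
# finite-difference stochastic gradient-ascent steps that balance the weaker of
# the server-side and client-side links. The RSS model and all parameters are
# assumptions used only to exercise the idea.
import numpy as np

rng = np.random.default_rng(1)

def ema(samples, alpha=0.3):
    """Exponential moving average of a 1-D RSS sample stream."""
    out = samples[0]
    for s in samples[1:]:
        out = alpha * s + (1 - alpha) * out
    return out

def rss(pos, source, tx_dbm=-30.0, n=2.0):
    """Toy log-distance RSS model (dBm)."""
    d = max(np.linalg.norm(pos - source), 0.1)
    return tx_dbm - 10.0 * n * np.log10(d)

def noisy_rss(pos, source):
    samples = rss(pos, source) + rng.normal(0.0, 0.5, size=10)
    return ema(samples)

server, client = np.array([0.0, 0.0]), np.array([10.0, 0.0])
relay = np.array([2.0, 1.0])
eps, step = 0.5, 0.5

def balance_objective(p):
    # Maximize the weaker link, which balances RSS on both sides of the relay
    return min(noisy_rss(p, server), noisy_rss(p, client))

for it in range(200):
    delta = rng.normal(size=2)                       # random probe direction
    df = balance_objective(relay + eps * delta) - balance_objective(relay - eps * delta)
    grad = df / (2 * eps) * delta                    # stochastic gradient estimate
    lr = step / (1 + 0.05 * it)                      # decaying step size
    relay = relay + lr * grad / (np.linalg.norm(grad) + 1e-9)

# The relay should drift toward the region between the two nodes
print("relay position after optimization:", np.round(relay, 2))
```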

  13. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    PubMed Central

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734

  14. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it in efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using estimated mixture-of-experts task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, which is examined in the WebotsTM simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  15. Reducing Multisensor Satellite Monthly Mean Aerosol Optical Depth Uncertainty: 1. Objective Assessment of Current AERONET Locations

    NASA Technical Reports Server (NTRS)

    Li, Jing; Li, Xichen; Carlson, Barbara E.; Kahn, Ralph A.; Lacis, Andrew A.; Dubovik, Oleg; Nakajima, Teruyuki

    2016-01-01

    Various space-based sensors have been designed and corresponding algorithms developed to retrieve aerosol optical depth (AOD), the very basic aerosol optical property, yet considerable disagreement still exists across these different satellite data sets. Surface-based observations aim to provide ground truth for validating satellite data; hence, their deployment locations should preferably contain as much spatial information as possible, i.e., high spatial representativeness. Using a novel Ensemble Kalman Filter (EnKF)- based approach, we objectively evaluate the spatial representativeness of current Aerosol Robotic Network (AERONET) sites. Multisensor monthly mean AOD data sets from Moderate Resolution Imaging Spectroradiometer, Multiangle Imaging Spectroradiometer, Sea-viewing Wide Field-of-view Sensor, Ozone Monitoring Instrument, and Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar are combined into a 605-member ensemble, and AERONET data are considered as the observations to be assimilated into this ensemble using the EnKF. The assessment is made by comparing the analysis error variance (that has been constrained by ground-based measurements), with the background error variance (based on satellite data alone). Results show that the total uncertainty is reduced by approximately 27% on average and could reach above 50% over certain places. The uncertainty reduction pattern also has distinct seasonal patterns, corresponding to the spatial distribution of seasonally varying aerosol types, such as dust in the spring for Northern Hemisphere and biomass burning in the fall for Southern Hemisphere. Dust and biomass burning sites have the highest spatial representativeness, rural and oceanic sites can also represent moderate spatial information, whereas the representativeness of urban sites is relatively localized. A spatial score ranging from 1 to 3 is assigned to each AERONET site based on the uncertainty reduction, indicating its representativeness level.
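
    The snippet below is a schematic, scalar ensemble Kalman filter update illustrating how assimilating one ground observation reduces the ensemble (background) variance to the analysis variance; the ensemble, observation and error values are synthetic, not the multi-sensor AOD data set.

```python
# Schematic EnKF update at a single grid point: compare background variance
# (ensemble only) with analysis variance (after assimilating a ground value).
import numpy as np

rng = np.random.default_rng(2)
ensemble = rng.normal(0.30, 0.08, size=605)   # monthly AOD ensemble members
obs, obs_err_var = 0.25, 0.02**2              # ground AOD and its error variance

bg_var = np.var(ensemble, ddof=1)
gain = bg_var / (bg_var + obs_err_var)        # scalar Kalman gain

# Stochastic (perturbed-observation) EnKF update of each member
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=ensemble.size)
analysis = ensemble + gain * (perturbed_obs - ensemble)

an_var = np.var(analysis, ddof=1)
reduction = 100.0 * (1.0 - an_var / bg_var)
print(f"background var {bg_var:.5f}, analysis var {an_var:.5f}, "
      f"uncertainty reduced by ~{reduction:.0f}%")
```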

  16. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems

    PubMed Central

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-01-01

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which builds up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is effectively implemented by proposing two basis readout topologies of amplifier-based and oscillator-based circuits. For noise-immune design against various noises from inherent human-touch operations, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving a 12-bit resolution of successive approximation register (SAR) of analog-to-digital conversion without additional calibrations. A ROIC prototype that includes the whole proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors. PMID:28368355

  17. A Reconfigurable Readout Integrated Circuit for Heterogeneous Display-Based Multi-Sensor Systems.

    PubMed

    Park, Kyeonghwan; Kim, Seung Mok; Eom, Won-Jin; Kim, Jae Joon

    2017-04-03

    This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which builds up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is effectively implemented by proposing two basis readout topologies of amplifier-based and oscillator-based circuits. For noise-immune design against various noises from inherent human-touch operations, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving a 12-bit resolution of successive approximation register (SAR) of analog-to-digital conversion without additional calibrations. A ROIC prototype that includes the whole proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors.

  18. Heterogeneous Multi-Robot Multi-Sensor Platform for Intruder Detection

    DTIC Science & Technology

    2009-09-15

    Each sensor's signal strength s_i is modeled with a log-distance propagation model with variance τ_i: s_i ~ N(b0_i + b1_i·log D_i, τ_i), where the initial parameters (b0_i, b1_i, τ_i) are unknown and are learned from training data. Results indicate that the advantage of the MOO-learned mode becomes more significant over time compared with the other mode. The multi-objective optimization builds on the nondominated sorting genetic algorithm NSGA-II, as described in Parallel Problem Solving from Nature (PPSN VI), M. Schoenauer et al.

  19. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance in which a binocular stereo vision sensor and a self-built 3D Lidar are integrated with modified ant colony optimization path planning to reconstruct the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unreliable when feature points are few and lighting is poor. Therefore, the system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and sensor information fusion is used to rebuild the environment map. First, obstacles are detected from the Lidar data and the visual data separately; the two obstacle distributions are then fused to obtain a more complete and more accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyses the advantages and disadvantages of ant colony optimization and their causes in depth, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements overcome shortcomings of ant colony optimization such as easily falling into local optima, slow search speed and poor search results. The experiment processes images and drives the motors under the Matlab and Visual Studio environments, establishes a visual 2.5D grid map, and finally plans a global path for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed on ROS and a Linux simulation platform.
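
    A compact, generic ant-colony-optimization sketch for grid path planning is given below to illustrate the pheromone/heuristic mechanics; it does not reproduce the paper's specific improvements, and the grid, obstacle layout and parameters are all assumptions.

```python
# Generic ACO path planner on a small occupancy grid.
import numpy as np

rng = np.random.default_rng(3)
N = 10
grid = np.zeros((N, N), dtype=int)
grid[3, 1:8] = 1                                 # a wall of obstacle cells
start, goal = (0, 0), (N - 1, N - 1)

tau = np.ones((N, N))                            # pheromone per cell
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.3, 30, 40

def neighbours(c):
    r, q = c
    for dr, dq in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nq = r + dr, q + dq
        if 0 <= nr < N and 0 <= nq < N and grid[nr, nq] == 0:
            yield (nr, nq)

def heuristic(c):
    # Inverse Manhattan distance to the goal
    return 1.0 / (1.0 + abs(goal[0] - c[0]) + abs(goal[1] - c[1]))

best_path = None
for _ in range(n_iters):
    paths = []
    for _ in range(n_ants):
        path, visited, cur = [start], {start}, start
        while cur != goal and len(path) < 4 * N * N:
            cands = [n for n in neighbours(cur) if n not in visited]
            if not cands:
                break                            # dead end, abandon this ant
            w = np.array([tau[n] ** alpha * heuristic(n) ** beta for n in cands])
            cur = cands[rng.choice(len(cands), p=w / w.sum())]
            path.append(cur)
            visited.add(cur)
        if cur == goal:
            paths.append(path)
            if best_path is None or len(path) < len(best_path):
                best_path = path
    tau *= (1.0 - rho)                           # pheromone evaporation
    for path in paths:                           # deposit on completed paths
        for cell in path:
            tau[cell] += 1.0 / len(path)

print("best path length:", len(best_path) - 1 if best_path else None)
```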

  20. Sensor fusion of monocular cameras and laser rangefinders for line-based Simultaneous Localization and Mapping (SLAM) tasks in autonomous mobile robots.

    PubMed

    Zhang, Xinzheng; Rad, Ahmad B; Wong, Yiu-Kwong

    2012-01-01

    This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first is a fusion module which synthesizes line segments obtained from the laser rangefinder and line features extracted from the monocular camera; this policy eliminates pseudo segments that appear in the laser data when dynamic objects pause momentarily. (ii) The second is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The localization error of the fused SLAM is reduced compared with that of either individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM; this data association method reduces redundant computation. The experimental results validate the performance of the proposed sensor fusion and data association method.
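
    The fusion step can be illustrated with a covariance-weighted (linear minimum-variance) combination of two local pose estimates, as sketched below with illustrative numbers; the actual MPEF-SLAM update is more involved.

```python
# Minimal point-estimate fusion of two local SLAM poses (e.g., laser EKF-SLAM
# and monocular EKF-SLAM), treating the two estimates as independent.
import numpy as np

# Local robot-pose estimates (x, y, heading) and their covariances (illustrative)
x_laser = np.array([2.05, 1.10, 0.31])
P_laser = np.diag([0.02, 0.02, 0.005])
x_cam   = np.array([1.98, 1.02, 0.35])
P_cam   = np.diag([0.05, 0.05, 0.010])

# Information form: P_f^-1 = P1^-1 + P2^-1,  x_f = P_f (P1^-1 x1 + P2^-1 x2)
I1, I2 = np.linalg.inv(P_laser), np.linalg.inv(P_cam)
P_fused = np.linalg.inv(I1 + I2)
x_fused = P_fused @ (I1 @ x_laser + I2 @ x_cam)

print("fused pose:", np.round(x_fused, 3))
print("fused std dev:", np.round(np.sqrt(np.diag(P_fused)), 3))
```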

  1. A smart multisensor approach to assist blind people in specific urban navigation tasks.

    PubMed

    Ando, B

    2008-12-01

    Visually impaired people are often discouraged in using electronic aids due to complexity of operation, large amount of training, nonoptimized degree of information provided to the user, and high cost. In this paper, a new multisensor architecture is discussed, which would help blind people to perform urban mobility tasks. The device is based on a multisensor strategy and adopts smart signal processing.

  2. Multisensor system and artificial intelligence in housing for the elderly.

    PubMed

    Chan, M; Bocquet, H; Campo, E; Val, T; Estève, D; Pous, J

    1998-01-01

    To improve the safety of a growing proportion of elderly and disabled people in the developed countries, a multisensor system based on Artificial Intelligence (AI), Advanced Telecommunications (AT) and Information Technology (IT) has been devised and fabricated. Thus, the habits and behaviours of these populations will be recorded without disturbing their daily activities. AI will diagnose any abnormal behavior or change and the system will warn the professionals. Gerontology issues are presented together with the multisensor system, the AI-based learning and diagnosis methodology and the main functionalities.

  3. Chemometric analysis of multisensor hyperspectral images of precipitated atmospheric particulate matter.

    PubMed

    Ofner, Johannes; Kamilli, Katharina A; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-09-15

    The chemometric analysis of multisensor hyperspectral data allows a comprehensive image-based analysis of precipitated atmospheric particles. Atmospheric particulate matter was precipitated on aluminum foils and analyzed by Raman microspectroscopy and subsequently by electron microscopy and energy dispersive X-ray spectroscopy. All obtained images were of the same spot of an area of 100 × 100 μm². The two hyperspectral data sets and the high-resolution scanning electron microscope images were fused into a combined multisensor hyperspectral data set. This multisensor data cube was analyzed using principal component analysis, hierarchical cluster analysis, k-means clustering, and vertex component analysis. The detailed chemometric analysis of the multisensor data allowed an extensive chemical interpretation of the precipitated particles, and their structure and composition led to a comprehensive understanding of atmospheric particulate matter.
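
    The sketch below runs two of the named chemometric steps, PCA followed by k-means clustering, on a synthetic stand-in for the fused multisensor hyperspectral cube using scikit-learn.

```python
# PCA + k-means on a (height, width, channels) hyperspectral cube.
# The random cube stands in for co-registered Raman/EDX channels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
height, width, channels = 64, 64, 200
cube = rng.random((height, width, channels))          # placeholder spectra

spectra = cube.reshape(-1, channels)                  # one spectrum per pixel
scores = PCA(n_components=10).fit_transform(spectra)  # compress the spectral axis

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
cluster_map = labels.reshape(height, width)           # per-pixel particle classes
print("pixels per cluster:", np.bincount(labels))
```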

  4. Quaternion-Based Unscented Kalman Filter for Accurate Indoor Heading Estimation Using Wearable Multi-Sensor System

    PubMed Central

    Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng

    2015-01-01

    Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. Heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation; however, gyroscope-based estimates drift and become unreliable over time. In this paper, a wearable multi-sensor system has been designed to obtain high-accuracy indoor heading estimation based on a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system, which includes one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor, minimizes size and cost. The wearable multi-sensor system was fixed on the waist of a pedestrian and on a quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less than 10° and 5° for the multi-sensor system fixed on the pedestrian's waist and on the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
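
    As a much-simplified stand-in for the quaternion UKF, the sketch below fuses gyro-integrated yaw (smooth but drifting) with magnetometer yaw (noisy but drift-free) using a scalar complementary filter; the rates, noise levels and gain are assumptions.

```python
# Scalar complementary filter for heading: propagate with the gyro, correct
# with the magnetometer to bound drift. Signals are simulated.
import numpy as np

dt, gain = 0.01, 0.02                      # sample period [s], magnetometer weight
rng = np.random.default_rng(5)

true_yaw = np.deg2rad(45.0)                # constant true heading for the test
gyro_bias = np.deg2rad(0.5)                # slow gyro drift source

yaw_est = 0.0
for _ in range(2000):
    gyro_rate = gyro_bias + rng.normal(0, 0.01)             # rad/s measurement
    mag_yaw = true_yaw + rng.normal(0, np.deg2rad(2.0))     # noisy magnetometer yaw
    yaw_pred = yaw_est + gyro_rate * dt                     # propagate with gyro
    yaw_est = yaw_pred + gain * (mag_yaw - yaw_pred)        # blend toward magnetometer

print(f"estimated heading: {np.degrees(yaw_est):.1f} deg (truth 45.0 deg)")
```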

  5. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter

    PubMed Central

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Gu, Chengfan

    2018-01-01

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation. PMID:29415509

  6. Multi-Sensor Optimal Data Fusion Based on the Adaptive Fading Unscented Kalman Filter.

    PubMed

    Gao, Bingbing; Hu, Gaoge; Gao, Shesheng; Zhong, Yongmin; Gu, Chengfan

    2018-02-06

    This paper presents a new optimal data fusion methodology based on the adaptive fading unscented Kalman filter for multi-sensor nonlinear stochastic systems. This methodology has a two-level fusion structure: at the bottom level, an adaptive fading unscented Kalman filter based on the Mahalanobis distance is developed and serves as local filters to improve the adaptability and robustness of local state estimations against process-modeling error; at the top level, an unscented transformation-based multi-sensor optimal data fusion for the case of N local filters is established according to the principle of linear minimum variance to calculate globally optimal state estimation by fusion of local estimations. The proposed methodology effectively refrains from the influence of process-modeling error on the fusion solution, leading to improved adaptability and robustness of data fusion for multi-sensor nonlinear stochastic systems. It also achieves globally optimal fusion results based on the principle of linear minimum variance. Simulation and experimental results demonstrate the efficacy of the proposed methodology for INS/GNSS/CNS (inertial navigation system/global navigation satellite system/celestial navigation system) integrated navigation.

  7. The GEOS-5 Neural Network Retrieval for AOD

    NASA Astrophysics Data System (ADS)

    Castellanos, P.; da Silva, A. M., Jr.

    2017-12-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.
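
    The snippet below is a toy illustration of the NNR idea, regressing AOD from synthetic TOA reflectances with a small scikit-learn MLP; the data and network size are placeholders, not the GEOS-5 NNR configuration.

```python
# Toy neural-network retrieval: map TOA reflectances to AOD with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_samples, n_bands = 5000, 8
toa = rng.random((n_samples, n_bands))                 # stand-in TOA reflectances
# Synthetic "ground-truth-like" target: a nonlinear function of the bands plus noise
aod = 0.3 * toa[:, 0] + 0.2 * toa[:, 1] ** 2 + 0.05 * rng.normal(size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(toa, aod, random_state=0)
nnr = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nnr.fit(X_train, y_train)
print(f"held-out R^2: {nnr.score(X_test, y_test):.3f}")
```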

  8. The GEOS-5 Neural Network Retrieval (NNR) for AOD

    NASA Technical Reports Server (NTRS)

    Castellanos, Patricia; Da Silva, Arlindo

    2017-01-01

    One of the difficulties in data assimilation is the need for multi-sensor data merging that can account for temporal and spatial biases between satellite sensors. In the Goddard Earth Observing System Model Version 5 (GEOS-5) aerosol data assimilation system, a neural network retrieval (NNR) is used as a mapping between satellite observed top of the atmosphere (TOA) reflectance and AOD, which is the target variable that is assimilated in the model. By training observations of TOA reflectance from multiple sensors to map to a common AOD dataset (in this case AOD observed by the ground based Aerosol Robotic Network, AERONET), we are able to create a global, homogenous, satellite data record of AOD from MODIS observations on board the Terra and Aqua satellites. In this talk, I will present the implementation of and recent updates to the GEOS-5 NNR for MODIS collection 6 data.

  9. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    USDA-ARS?s Scientific Manuscript database

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  10. A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation.

    PubMed

    Tkach, Itshak; Jevtić, Aleksandar; Nof, Shimon Y; Edan, Yael

    2018-03-02

    Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times. Sensors are allocated to tasks based on sensors' performance, tasks' priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based algorithms, which were fitted for the specific task. Simulation analyses revealed that MDBA achieved statistically significant improved performance by 7% with respect to DBA as the second-best algorithm, and by 19% with respect to Greedy algorithm, which was the worst, thus indicating its fitness to provide solutions for heterogeneous multi-sensor systems.
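
    The sketch below shows a generic probabilistic allocation in the spirit of the (M)DBA, with each sensor choosing a task with probability proportional to the task priority and the sensor's own performance and inversely proportional to distance; the exact MDBA scoring and exponents are not reproduced.

```python
# Decentralized, stochastic sensor-to-task allocation sketch.
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_tasks = 6, 3
sensor_pos = rng.uniform(0, 100, size=(n_sensors, 2))
task_pos   = rng.uniform(0, 100, size=(n_tasks, 2))
task_priority = np.array([0.5, 0.3, 0.2])              # sums to 1
sensor_perf   = rng.uniform(0.5, 1.0, size=n_sensors)  # detection performance

assignment = np.empty(n_sensors, dtype=int)
for i in range(n_sensors):
    dist = np.linalg.norm(task_pos - sensor_pos[i], axis=1) + 1e-6
    score = task_priority * sensor_perf[i] / dist       # utility per task
    prob = score / score.sum()                          # stochastic, swarm-style choice
    assignment[i] = rng.choice(n_tasks, p=prob)

for t in range(n_tasks):
    print(f"task {t}: sensors {np.where(assignment == t)[0].tolist()}")
```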

  11. A Modified Distributed Bees Algorithm for Multi-Sensor Task Allocation †

    PubMed Central

    Nof, Shimon Y.; Edan, Yael

    2018-01-01

    Multi-sensor systems can play an important role in monitoring tasks and detecting targets. However, real-time allocation of heterogeneous sensors to dynamic targets/tasks that are unknown a priori in their locations and priorities is a challenge. This paper presents a Modified Distributed Bees Algorithm (MDBA) that is developed to allocate stationary heterogeneous sensors to upcoming unknown tasks using a decentralized, swarm intelligence approach to minimize the task detection times. Sensors are allocated to tasks based on sensors’ performance, tasks’ priorities, and the distances of the sensors from the locations where the tasks are being executed. The algorithm was compared to a Distributed Bees Algorithm (DBA), a Bees System, and two common multi-sensor algorithms, market-based and greedy-based algorithms, which were fitted for the specific task. Simulation analyses revealed that MDBA achieved statistically significant improved performance by 7% with respect to DBA as the second-best algorithm, and by 19% with respect to Greedy algorithm, which was the worst, thus indicating its fitness to provide solutions for heterogeneous multi-sensor systems. PMID:29498683

  12. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.

  13. NASA technology applications team: Applications of aerospace technology

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Two critical aspects of the Applications Engineering Program were especially successful: commercializing products of Application Projects; and leveraging NASA funds for projects by developing cofunding from industry and other agencies. Results are presented in the following areas: the excimer laser was commercialized for clearing plaque in the arteries of patients with coronary artery disease; the ultrasound burn depth analysis technology is to be licensed and commercialized; a phased commercialization plan was submitted to NASA for the intracranial pressure monitor; the Flexible Agricultural Robotics Manipulator System (FARMS) is making progress in the development of sensors and a customized end effector for a roboticized greenhouse operation; a dual robot arm controller was improved; a multisensor urodynamic pressure catheter was successful in clinical tests; commercial applications were examined for diamond-like carbon coatings; further work was done on the multichannel flow cytometer; progress was made on the liquid airpack for fire fighters; a wind energy conversion device was tested in a low-speed wind tunnel; and the Space Shuttle Thermal Protection System was reviewed.

  14. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slow time-varying signal representation and denoising for fault diagnosis applications. The SWT uses the time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and fast varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for the applications of denoising and fault feature extraction. The improved technology utilizes comprehensive instantaneous frequency estimation by chirp rate estimation to achieve a highly concentrated time-frequency representation so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy is performed by using a modulated multivariate oscillation model to partition the time-frequency domain; then, the common characteristics of the multivariate data can be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components, while the signal components of interest can be retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation and by experiments on a rolling bearing system and a gear system. The results show that the proposed multisensor matching synchrosqueezing wavelet transform (MMSWT) is superior to existing methods.
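
    The sketch below illustrates only the universal-threshold step on a synthetic two-channel vibration signal, using an ordinary discrete wavelet transform from PyWavelets rather than the matching synchrosqueezing transform itself.

```python
# Per-channel wavelet denoising with the universal threshold.
import numpy as np
import pywt

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 2048, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signals = np.stack([clean + 0.4 * rng.normal(size=t.size) for _ in range(2)])

denoised = np.empty_like(signals)
for ch, x in enumerate(signals):
    coeffs = pywt.wavedec(x, "db8", level=5)
    # Noise level from the finest detail coefficients (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size))            # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised[ch] = pywt.waverec(coeffs, "db8")[: x.size]

err_before = np.mean((signals - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
print(f"mean-square error: {err_before:.4f} -> {err_after:.4f}")
```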

  15. Laboratory evaluation of dual-frequency multisensor capacitance probes to monitor soil water and salinity

    USDA-ARS?s Scientific Manuscript database

    Real-time information on salinity levels and transport of fertilizers are generally missing from soil profile knowledge bases. A dual-frequency multisensor capacitance probe (MCP) is now commercially available for sandy soils that simultaneously monitor volumetric soil water content (VWC, ') and sa...

  16. Measurement of chlorine concentration on steel surfaces via fiber-optic laser-induced breakdown spectroscopy in double-pulse configuration

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Le Berre, S.; Fobar, D. G.; Burger, M.; Skrodzki, P. J.; Hartig, K. C.; Motta, A. T.; Jovanovic, I.

    2018-03-01

    The corrosive environment provided by chlorine ions on the welds of stainless steel dry cask storage canisters for used nuclear fuel may contribute to the occurrence of stress corrosion cracking. We demonstrate the use of fiber-optic laser-induced breakdown spectroscopy (FOLIBS) in the double-pulse (DP) configuration for high-sensitivity, remote measurement of surface concentrations of chlorine, compatible with the constrained space and challenging environment characteristic of dry cask storage systems. Chlorine surface concentrations as low as 5 mg/m2 have been detected and quantified by use of a laboratory-based and a fieldable DP FOLIBS setup with the calibration curve approach. The compact final optics assembly in the fieldable setup is interfaced via two 25-m long optical fibers for high-power laser pulse delivery and plasma emission collection and can be readily integrated into a multi-sensor robotic delivery system for in-situ inspection of dry cask storage systems.

  17. Adaptive and mobile ground sensor array.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, Michael Warren; O'Rourke, William T.; Zenner, Jennifer

    The goal of this LDRD was to demonstrate the use of robotic vehicles for deploying and autonomously reconfiguring seismic and acoustic sensor arrays with high (centimeter) accuracy to obtain enhancement of our capability to locate and characterize remote targets. The capability to accurately place sensors and then retrieve and reconfigure them allows sensors to be placed in phased arrays in an initial monitoring configuration and then to be reconfigured in an array tuned to the specific frequencies and directions of the selected target. This report reviews the findings and accomplishments achieved during this three-year project. This project successfully demonstrated autonomous deployment and retrieval of a payload package with an accuracy of a few centimeters using differential global positioning system (GPS) signals. It developed an autonomous, multisensor, temporally aligned, radio-frequency communication and signal processing capability, and an array optimization algorithm, which was implemented on a digital signal processor (DSP). Additionally, the project converted the existing single-threaded, monolithic robotic vehicle control code into a multi-threaded, modular control architecture that enhances the reuse of control code in future projects.

  18. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.

    PubMed

    Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao

    2015-11-06

    Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.

  19. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial distance observability of the target is poor, sequential Monte Carlo (particle filtering, PF) algorithms generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target state. In the fusion centre, the target state is estimated using the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramer-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method provides better performance than the traditional PF method.
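
    A generic bootstrap particle filter for bearings-only tracking in Cartesian coordinates is sketched below to illustrate the baseline that such methods improve upon; the motion model, noise levels and particle count are assumptions, and the hierarchical log-polar formulation of the paper is not reproduced.

```python
# Bootstrap particle filter for single-sensor bearings-only tracking.
import numpy as np

rng = np.random.default_rng(9)
dt, steps, n_p = 1.0, 40, 2000
sensor = np.array([0.0, 0.0])
truth = np.array([50.0, 30.0, -0.8, -0.3])             # x, y, vx, vy
sigma_bearing, sigma_proc = np.deg2rad(1.0), 0.05

F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
particles = truth + rng.normal(0, [5, 5, 0.5, 0.5], size=(n_p, 4))
weights = np.full(n_p, 1.0 / n_p)

for _ in range(steps):
    truth = F @ truth
    z = np.arctan2(truth[1] - sensor[1], truth[0] - sensor[0]) \
        + rng.normal(0, sigma_bearing)                  # noisy bearing measurement

    particles = particles @ F.T + rng.normal(0, sigma_proc, size=(n_p, 4))
    pred = np.arctan2(particles[:, 1] - sensor[1], particles[:, 0] - sensor[0])
    innov = (z - pred + np.pi) % (2 * np.pi) - np.pi    # wrap bearing residual
    weights *= np.exp(-0.5 * (innov / sigma_bearing) ** 2)
    weights /= weights.sum()

    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_p / 2:
        idx = rng.choice(n_p, size=n_p, p=weights)
        particles, weights = particles[idx], np.full(n_p, 1.0 / n_p)

est = weights @ particles
print("true position:", np.round(truth[:2], 1), "estimate:", np.round(est[:2], 1))
```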

  20. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
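
    The intensity-substitution idea behind IHS-style fusion can be sketched as below on synthetic arrays; the simple (R+G+B)/3 intensity and Brovey-like scaling are assumptions for illustration, not the paper's SR pipeline.

```python
# Simplified intensity substitution: inject high-resolution panchromatic detail
# into an upsampled low-resolution color image.
import numpy as np

rng = np.random.default_rng(10)
h, w = 128, 128
rgb_lr = rng.random((h, w, 3))          # upsampled multispectral (low detail)
pan_hr = rng.random((h, w))             # co-registered high-resolution pan band

intensity = rgb_lr.mean(axis=2)                          # I component
ratio = pan_hr / np.clip(intensity, 1e-6, None)          # detail injection factor
fused = np.clip(rgb_lr * ratio[..., None], 0.0, 1.0)     # Brovey-like substitution

print("fused image shape:", fused.shape, "range:",
      float(fused.min()), "-", float(fused.max()))
```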

  1. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  2. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps; feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
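
    One of the matching criteria named above, mutual information, can be computed from a joint gray-level histogram as sketched below on synthetic images.

```python
# Mutual information between two equally sized gray images, from a 2-D histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two co-registered gray images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(11)
reference = rng.random((256, 256))
shifted = np.roll(reference, 3, axis=1)             # misaligned copy
print("MI aligned:   ", round(mutual_information(reference, reference), 3))
print("MI misaligned:", round(mutual_information(reference, shifted), 3))
```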

  3. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature-based registration is one of the most reliable methods for registering multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textured, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale-space feature-based registration technique, a modified version of the feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.

  4. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety

    PubMed Central

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-01-01

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931

  5. A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety.

    PubMed

    Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang

    2016-06-09

    Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L1 norm. Finally, the electronic throttle opening and automatic braking are placed under the control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.

  6. Techniques for Sea Ice Characteristics Extraction and Sea Ice Monitoring Using Multi-Sensor Satellite Data in the Bohai Sea-Dragon 3 Programme Final Report (2012-2016)

    NASA Astrophysics Data System (ADS)

    Zhang, Xi; Zhang, Jie; Meng, Junmin

    2016-08-01

    The objectives of the Dragon-3 programme (ID: 10501) are to develop methods for classifying sea ice types and retrieving ice thickness based on multi-sensor data. In this final results paper, we give a brief introduction to our research work and main results. Key words: the Bohai Sea ice, Sea ice, optical and

  7. Biologically-inspired robust and adaptive multi-sensor fusion and active control

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use a high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and perfect radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, as a return of experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development based on SLAM technology (Simultaneous Localization And Mapping) and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  9. SVM-based multi-sensor fusion for free-living physical activity assessment.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty S

    2011-01-01

    This paper presents a sensor fusion method for assessing the physical activity (PA) of human subjects, based on support vector machines (SVMs). Specifically, acceleration and ventilation measured by a wearable multi-sensor device on 50 test subjects performing 13 types of activities of varying intensities are analyzed, from which the activity types and related energy expenditures are derived. The result shows that the method correctly recognized the 13 activity types 84.7% of the time, which is 26% higher than using a hip accelerometer alone. Also, the method predicted the associated energy expenditure with a root mean square error of 0.43 METs, 43% lower than using a hip accelerometer alone. Furthermore, the fusion method was effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition, especially when data from the ventilation sensor was added to the fusion model. These results demonstrate that the multi-sensor fusion technique presented is more effective in assessing activities of varying intensities than the traditional accelerometer-alone based methods.
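    The sketch below mirrors the fusion idea on synthetic data (not the study's 50-subject dataset): an RBF-kernel SVM trained on concatenated accelerometer and ventilation features is compared against an accelerometer-only model. Feature counts, class structure and noise levels are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 600, 4
labels = rng.integers(0, n_classes, n)
accel = labels[:, None] * 0.5 + rng.normal(0, 1.0, (n, 6))   # 6 accelerometer features
vent = labels[:, None] * 0.8 + rng.normal(0, 1.0, (n, 2))    # 2 ventilation features

def accuracy(features):
    """Hold-out accuracy of a standardized RBF SVM on the given feature matrix."""
    x_tr, x_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    return model.fit(x_tr, y_tr).score(x_te, y_te)

print("accelerometer only:", accuracy(accel))
print("fused features:    ", accuracy(np.hstack([accel, vent])))   # typically higher
```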

  10. Multisensor data fusion for physical activity assessment.

    PubMed

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John W; Freedson, Patty S

    2012-03-01

    This paper presents a sensor fusion method for assessing physical activity (PA) of human subjects, based on support vector machines (SVMs). Specifically, acceleration and ventilation measured by a wearable multisensor device on 50 test subjects performing 13 types of activities of varying intensities are analyzed, from which activity type and energy expenditure are derived. The results show that the method correctly recognized the 13 activity types 88.1% of the time, which is 12.3% higher than using a hip accelerometer alone. Also, the method predicted energy expenditure with a root mean square error of 0.42 METs, 22.2% lower than using a hip accelerometer alone. Furthermore, the fusion method was effective in reducing the subject-to-subject variability (standard deviation of recognition accuracies across subjects) in activity recognition, especially when data from the ventilation sensor were added to the fusion model. These results demonstrate that the multisensor fusion technique presented is more effective in identifying activity type and energy expenditure than the traditional accelerometer-alone-based methods.

  11. Breath analysis system for early detection of lung diseases based on multi-sensor array

    NASA Astrophysics Data System (ADS)

    Jeon, Jin-Young; Yu, Joon-Boo; Shin, Jeong-Suk; Byun, Hyung-Gi; Lim, Jeong-Ok

    2013-05-01

    Expiratory breath contains various VOCs (Volatile Organic Compounds) produced by the human body. When a certain disease exists, the exhaled breath contains specific VOCs which may be generated by the disease. Many researchers have been actively working to find different types of biomarkers which are characteristic of particular diseases. Research regarding the identification of specific diseases from exhalation is still in progress. The aim of this research is to implement early detection of lung diseases such as lung cancer and COPD (Chronic Obstructive Pulmonary Disease), which ranked sixth among domestic causes of death in 2010, based on a multi-sensor array system. The system has been used to acquire sampled expiratory gas data, and the PCA (Principal Component Analysis) technique was applied to analyze signals from the multi-sensor array. Throughout the experimental trials, a clearly distinguishable difference between lung disease patients and healthy controls was found from the measurement and analysis of their respective expiratory gases.
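    A hedged illustration of the analysis step on simulated sensor-array responses (not clinical data): PCA projects multi-sensor array readings into two principal components, in which two simulated groups separate; the response patterns and noise levels are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_per_group, n_sensors = 40, 16
pattern_a = rng.random(n_sensors)                       # hypothetical "control" response pattern
pattern_b = pattern_a + 0.6 * rng.random(n_sensors)     # shifted "patient" response pattern
controls = pattern_a + rng.normal(0, 0.1, (n_per_group, n_sensors))
patients = pattern_b + rng.normal(0, 0.1, (n_per_group, n_sensors))

# project all array readings onto the first two principal components
scores = PCA(n_components=2).fit_transform(np.vstack([controls, patients]))
print("control PC1 mean:", scores[:n_per_group, 0].mean())
print("patient PC1 mean:", scores[n_per_group:, 0].mean())   # the two group means separate
```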

  12. Toward a Coherent Detailed Evaluation of Aerosol Data Products from Multiple Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Petrenko, Maksym; Leptoukh, Gregory

    2011-01-01

    Atmospheric aerosols represent one of the greatest uncertainties in climate research. Although satellite-based aerosol retrieval has practically become routine, especially during the last decade, there is often disagreement between similar aerosol parameters retrieved from different sensors, leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus and the inconsistencies are not well characterized and understood, there will be no way of developing reliable climate data records from satellite aerosol measurements. Fortunately, the most globally representative well-calibrated ground-based aerosol measurements corresponding to the satellite-retrieved products are available from the Aerosol Robotic Network (AERONET). To adequately utilize the advantages offered by this vital resource, an online Multi-sensor Aerosol Products Sampling System (MAPSS) was recently developed. The aim of MAPSS is to facilitate detailed comparative analysis of satellite aerosol measurements from different sensors (Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP) based on the collocation of these data products over AERONET stations. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainty analysis of aerosol products from multiple satellite sensors.

  13. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

    AD-A263 679. Multisensor Image Analysis System, Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric ...

  14. Semiotic foundation for multisensor-multilook fusion

    NASA Astrophysics Data System (ADS)

    Myler, Harley R.

    1998-07-01

    This paper explores the concept of an application of semiotic principles to the design of a multisensor-multilook fusion system. Semiotics is an approach to analysis that attempts to process media in a unified way using qualitative methods as opposed to quantitative ones. The term semiotic refers to signs, or signatory data that encapsulates information. Semiotic analysis involves the extraction of signs from information sources and the subsequent processing of the signs into meaningful interpretations of the information content of the source. The multisensor fusion problem predicated on a semiotic system structure and incorporating semiotic analysis techniques is explored, and the design for a multisensor system as an information fusion system is discussed. Semiotic analysis opens the possibility of using non-traditional sensor sources and modalities in the fusion process, such as verbal and textual intelligence derived from human observers. Examples of how multisensor/multimodality data might be analyzed semiotically are shown, and a discussion of how a semiotic system for multisensor fusion could be realized is outlined. The architecture of a semiotic multisensor fusion processor that can accept situational awareness data is described, although an implementation has not as yet been constructed.

  15. Multi-sensor analysis of urban ecosystems

    USGS Publications Warehouse

    Gallo, Kevin P.; Ji, Lei

    2004-01-01

    This study examines the synthesis of multiple space-based sensors to characterize the urban environment. Single-scene data (e.g., ASTER visible and near-IR surface reflectance, and land surface temperature data), multi-temporal data (e.g., one year of 16-day MODIS and AVHRR vegetation index data), and DMSP-OLS nighttime light data acquired in the early 1990s and 2000 were evaluated for urban ecosystem analysis. The advantages of a multi-sensor approach for the analysis of urban ecosystem processes are discussed.

  16. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
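    A simplified stand-in for the FSS-t-SNE idea (not the paper's implementation): irrelevant features are first discarded with a generic feature score (an ANOVA F-test here, substituting for the paper's feature subset score criterion), and the retained subset is then embedded in two dimensions with t-SNE. The dataset and the added noise features are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.manifold import TSNE

x, y = load_iris(return_X_y=True)
noise = np.random.default_rng(0).normal(size=(x.shape[0], 10))   # irrelevant features
x_noisy = np.hstack([x, noise])

# keep only the best-scoring feature subset, then embed it in 2-D
x_subset = SelectKBest(f_classif, k=4).fit_transform(x_noisy, y)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(x_subset)
print(embedding.shape)   # (150, 2) points ready for a 2-D scatter plot
```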

  17. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in 2-dimensional space. According to the UCI dataset test, FSS-t-SNE can effectively improve the classification accuracy. An experiment was performed with a large power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  18. Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances

    PubMed Central

    Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.

    2017-01-01

    As the largest ellipsoid (LE) data fusion algorithm can only be applied to two-sensor systems, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of the fusion structure on fusion performance, two fusion performance assessment parameters are defined: the Fusion Distance and the Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. Demonstrated with simulation examples, the Fusion Index indicates the fuser's actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve their properties in these aspects, but also embrace better performances in consistency and computation efficiency. The presented multisensor LE fusers generally have better accuracies than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
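    The LE fuser itself is not reproduced here; instead, the sketch below implements the covariance intersection (CI) fusion that the abstract uses as a baseline, which is likewise designed for unknown cross-covariances. The example estimates and covariances are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, p1, x2, p2):
    """Fuse two estimates conservatively without knowing their cross-covariance."""
    def fused_trace(w):
        p = np.linalg.inv(w * np.linalg.inv(p1) + (1 - w) * np.linalg.inv(p2))
        return np.trace(p)
    # choose the mixing weight that minimizes the trace of the fused covariance
    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    p = np.linalg.inv(w * np.linalg.inv(p1) + (1 - w) * np.linalg.inv(p2))
    x = p @ (w * np.linalg.inv(p1) @ x1 + (1 - w) * np.linalg.inv(p2) @ x2)
    return x, p

x1, p1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, p2 = np.array([1.2, 0.3]), np.diag([4.0, 1.0])
x_f, p_f = covariance_intersection(x1, p1, x2, p2)
print(x_f, np.trace(p_f))   # fused trace is no larger than either input's trace
```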

  19. Medical decision-making inspired from aerospace multisensor data fusion concepts.

    PubMed

    Pombo, Nuno; Bousson, Kouamana; Araújo, Pedro; Viana, Joaquim

    2015-01-01

    In recent years, Internet-delivered treatments have been largely used for pain monitoring, offering healthcare professionals and patients the ability to interact anywhere and at any time. Electronic diaries have been increasingly adopted as the preferred methodology to collect data related to pain intensity and symptoms, replacing traditional pen-and-paper diaries. This article presents a multisensor data fusion methodology based on the capabilities provided by aerospace systems to evaluate the effects of electronic and pen-and-paper diaries on pain. We examined English-language studies of randomized controlled trials that use computerized systems and the Internet to collect data about chronic pain complaints. These studies were obtained from three data sources: BioMed Central, PubMed Central and ScienceDirect from the year 2000 until 30 June 2012. Based on comparisons of the reported pain intensity collected during pre- and post-treatment in both the control and intervention groups, the proposed multisensor data fusion model revealed that the benefits of technology and pen-and-paper are qualitatively equivalent [Formula: see text]. We conclude that the proposed model is suitable, intelligible, easy to implement, time efficient and resource efficient.

  20. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph

    PubMed Central

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-01-01

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method. PMID:28335570

  1. A Novel Energy-Efficient Multi-Sensor Fusion Wake-Up Control Strategy Based on a Biomimetic Infectious-Immune Mechanism for Target Tracking.

    PubMed

    Zhou, Jie; Liang, Yan; Shen, Qiang; Feng, Xiaoxue; Pan, Quan

    2018-04-18

    A biomimetic distributed infection-immunity model (BDIIM), inspired by the immune mechanism of an infected organism, is proposed in order to achieve a high-efficiency wake-up control strategy based on multi-sensor fusion for target tracking. The resultant BDIIM consists of six sub-processes reflecting the infection-immunity mechanism: occurrence probabilities of direct-infection (DI) and cross-infection (CI), immunity/immune-deficiency of DI and CI, pathogen amount of DI and CI, immune cell production, immune memory, and pathogen accumulation under immunity state. Furthermore, a corresponding relationship between the BDIIM and sensor wake-up control is established to form the collaborative wake-up method. Finally, joint surveillance and target tracking are formulated in the simulation, in which we show that the energy cost and position tracking error are reduced to 50.8% and 78.9%, respectively. Effectiveness of the proposed BDIIM algorithm is shown, and this model is expected to have a significant role in guiding the performance improvement of multi-sensor networks.

  2. An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph.

    PubMed

    Zeng, Qinghua; Chen, Weina; Liu, Jianye; Wang, Huizhe

    2017-03-21

    An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal with the information gathered from different sensors efficiently is an important problem. The fact that different sensors provide measurements asynchronously may complicate the processing of these measurements. In addition, the output signals of some sensors appear to have a non-linear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimum solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. It converts the fusion problem into connecting the factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and some experiments have been performed to prove the effectiveness of the proposed method.

  3. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method.

    PubMed

    Deng, Xinyang; Jiang, Wen

    2017-09-12

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.

  4. Fuzzy Risk Evaluation in Failure Mode and Effects Analysis Using a D Numbers Based Multi-Sensor Information Fusion Method

    PubMed Central

    Deng, Xinyang

    2017-01-01

    Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, the fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables to failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D numbers based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model. PMID:28895905

  5. Advances in multi-sensor data fusion: algorithms and applications.

    PubMed

    Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying

    2009-01-01

    With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvements of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.

  6. A Passive Wireless Multi-Sensor SAW Technology Device and System Perspectives

    PubMed Central

    Malocha, Donald C.; Gallagher, Mark; Fisher, Brian; Humphries, James; Gallagher, Daniel; Kozlovski, Nikolai

    2013-01-01

    This paper will discuss a SAW passive, wireless multi-sensor system under development by our group for the past several years. The device focus is on orthogonal frequency coded (OFC) SAW sensors, which use both frequency diversity and pulse position reflectors to encode the device ID, and these will be briefly contrasted with other embodiments. A synchronous correlator transceiver is used for the hardware, and the post-processing and correlation techniques applied to the received signal to extract the sensor information will be presented. Critical device and system parameters addressed include encoding, operational range, SAW device parameters, post-processing, and antenna-SAW device integration. A fully developed 915 MHz OFC SAW multi-sensor system is used to show experimental results. The system is based on a software radio approach that provides great flexibility for future enhancements and diverse sensor applications. Several different sensor types using the OFC SAW platform are shown. PMID:23666124

  7. Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.

    PubMed

    Gao, Lei; Bourke, A K; Nelson, John

    2014-06-01

    Physical activity has a positive impact on people's well-being and has been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial number of research studies exist which focus on activity recognition using inertial sensors. Many of these studies adopt a single-sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceeds the computing ability of most current wearable sensor platforms. This paper proposes a method to adopt multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded, resulting in 864 separate annotated activity states. The methods for designing such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated and the most appropriate approach is selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracies on a multi-sensor system. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
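    A toy version of the "light-weight features plus simple classifier" recipe described above, on synthetic signals rather than the recorded activities: per-window mean and variance features from several simulated sensor nodes are classified with a Decision Tree. Window counts, sensor counts and the signal model are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
windows, samples, sensors = 300, 128, 3          # 3 hypothetical body-worn nodes
labels = rng.integers(0, 4, windows)             # 4 activity classes
# simulate raw windows whose amplitude depends on the activity class
raw = rng.normal(0, 1 + labels[:, None, None], (windows, samples, sensors))

means = raw.mean(axis=1)                         # (windows, sensors) mean features
variances = raw.var(axis=1)                      # (windows, sensors) variance features
features = np.hstack([means, variances])

scores = cross_val_score(DecisionTreeClassifier(random_state=0), features, labels, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```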

  8. Coherent Evaluation of Aerosol Data Products from Multiple Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles

    2011-01-01

    Aerosol retrieval from satellite has practically become routine, especially during the last decade. However, there is often disagreement between similar aerosol parameters retrieved from different sensors, thereby leaving users confused as to which sensors to trust for answering important science questions about the distribution, properties, and impacts of aerosols. As long as there is no consensus, and the inconsistencies are not well characterized and understood, there will be no way of developing reliable model inputs and climate data records from satellite aerosol measurements. Fortunately, the Aerosol Robotic Network (AERONET) is providing well-calibrated, globally representative ground-based aerosol measurements corresponding to the satellite-retrieved products. Through a recently developed web-based Multi-sensor Aerosol Products Sampling System (MAPSS), we are utilizing the advantages offered by collocated AERONET and satellite products to characterize and evaluate aerosol retrieval from multiple sensors. Indeed, MAPSS and its companion statistical tool AeroStat are facilitating detailed comparative uncertainty analysis of satellite aerosol measurements from Terra-MODIS, Aqua-MODIS, Terra-MISR, Aura-OMI, Parasol-POLDER, and Calipso-CALIOP. In this presentation, we will describe the strategy of the MAPSS system, its potential advantages for the aerosol community, and the preliminary results of an integrated comparative uncertainty analysis of aerosol products from multiple satellite sensors.

  9. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  10. MATSurv: multisensor air traffic surveillance system

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov; Pattipati, Krishna R.; Gassner, Richard R.

    1995-09-01

    This paper deals with the design and implementation of MATSurv 1--an experimental Multisensor Air Traffic Surveillance system. The proposed system consists of a Kalman filter based state estimator used in conjunction with a 2D sliding window assignment algorithm. Real data from two FAA radars is used to evaluate the performance of this algorithm. The results indicate that the proposed algorithm provides a superior classification of the measurements into tracks (i.e., the most likely aircraft trajectories) when compared to the aircraft trajectories obtained using the measurement IDs (squawk or IFF code).

  11. An SOI CMOS-Based Multi-Sensor MEMS Chip for Fluidic Applications.

    PubMed

    Mansoor, Mohtashim; Haneef, Ibraheem; Akhtar, Suhail; Rafiq, Muhammad Aftab; De Luca, Andrea; Ali, Syed Zeeshan; Udrea, Florin

    2016-11-04

    An SOI CMOS multi-sensor MEMS chip, which can simultaneously measure temperature, pressure and flow rate, has been reported. The multi-sensor chip has been designed keeping in view the requirements of researchers interested in experimental fluid dynamics. The chip contains ten thermodiodes (temperature sensors), a piezoresistive-type pressure sensor and nine hot film-based flow rate sensors fabricated within the oxide layer of the SOI wafers. The silicon dioxide layers with embedded sensors are relieved from the substrate as membranes with the help of a single DRIE step after chip fabrication from a commercial CMOS foundry. Very dense sensor packing per unit area of the chip has been enabled by using technologies/processes like SOI, CMOS and DRIE. Independent apparatuses were used for the characterization of each sensor. With a drive current of 10 µA-0.1 µA, the thermodiodes exhibited sensitivities of 1.41 mV/°C-1.79 mV/°C in the range 20-300 °C. The sensitivity of the pressure sensor was 0.0686 mV/(V excit kPa) with a non-linearity of 0.25% between 0 and 69 kPa above ambient pressure. Packaged in a micro-channel, the flow rate sensor has a linearized sensitivity of 17.3 mV/(L/min) -0.1 in the tested range of 0-4.7 L/min. The multi-sensor chip can be used for simultaneous measurement of fluid pressure, temperature and flow rate in fluidic experiments and aerospace/automotive/biomedical/process industries.

  12. An SOI CMOS-Based Multi-Sensor MEMS Chip for Fluidic Applications †

    PubMed Central

    Mansoor, Mohtashim; Haneef, Ibraheem; Akhtar, Suhail; Rafiq, Muhammad Aftab; De Luca, Andrea; Ali, Syed Zeeshan; Udrea, Florin

    2016-01-01

    An SOI CMOS multi-sensor MEMS chip, which can simultaneously measure temperature, pressure and flow rate, has been reported. The multi-sensor chip has been designed keeping in view the requirements of researchers interested in experimental fluid dynamics. The chip contains ten thermodiodes (temperature sensors), a piezoresistive-type pressure sensor and nine hot film-based flow rate sensors fabricated within the oxide layer of the SOI wafers. The silicon dioxide layers with embedded sensors are relieved from the substrate as membranes with the help of a single DRIE step after chip fabrication from a commercial CMOS foundry. Very dense sensor packing per unit area of the chip has been enabled by using technologies/processes like SOI, CMOS and DRIE. Independent apparatuses were used for the characterization of each sensor. With a drive current of 10 µA–0.1 µA, the thermodiodes exhibited sensitivities of 1.41 mV/°C–1.79 mV/°C in the range 20–300 °C. The sensitivity of the pressure sensor was 0.0686 mV/(Vexcit kPa) with a non-linearity of 0.25% between 0 and 69 kPa above ambient pressure. Packaged in a micro-channel, the flow rate sensor has a linearized sensitivity of 17.3 mV/(L/min)−0.1 in the tested range of 0–4.7 L/min. The multi-sensor chip can be used for simultaneous measurement of fluid pressure, temperature and flow rate in fluidic experiments and aerospace/automotive/biomedical/process industries. PMID:27827904

  13. An Adaptive Multi-Sensor Data Fusion Method Based on Deep Convolutional Neural Networks for Fault Diagnosis of Planetary Gearbox

    PubMed Central

    Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng

    2017-01-01

    A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges, which are (1) the feature extraction from various types of sensory data and (2) the selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested through a planetary gearbox test rig. Handcraft features, manual-selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767
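    A minimal sketch of feature-level fusion with a convolutional network (not the paper's DCNN architecture or its adaptive fusion-level selection): each sensor channel gets its own small 1-D convolutional branch, the branch outputs are concatenated, and a linear head classifies the fused features. Channel counts, kernel sizes and class count are placeholders.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, n_sensors=2, n_classes=5):
        super().__init__()
        # one small convolutional branch per raw sensor channel
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(1, 8, kernel_size=9, padding=4),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool1d(16))
            for _ in range(n_sensors)
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(n_sensors * 8 * 16, n_classes))

    def forward(self, signals):                        # signals: (batch, n_sensors, length)
        feats = [branch(signals[:, i:i + 1, :])        # each branch sees one raw channel
                 for i, branch in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))      # concatenate: feature-level fusion

model = FusionCNN()
dummy = torch.randn(4, 2, 1024)    # e.g. a vibration window and an acoustic window
print(model(dummy).shape)          # torch.Size([4, 5])
```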

  14. Introducing Multisensor Satellite Radiance-Based Evaluation for Regional Earth System Modeling

    NASA Technical Reports Server (NTRS)

    Matsui, T.; Santanello, J.; Shi, J. J.; Tao, W.-K.; Wu, D.; Peters-Lidard, C.; Kemp, E.; Chin, M.; Starr, D.; Sekiguchi, M.; et al.

    2014-01-01

    Earth System modeling has become more complex, and its evaluation using satellite data has also become more difficult due to model and data diversity. Therefore, the fundamental methodology of using satellite direct measurements with instrumental simulators should be addressed especially for modeling community members lacking a solid background of radiative transfer and scattering theory. This manuscript introduces principles of multisatellite, multisensor radiance-based evaluation methods for a fully coupled regional Earth System model: NASA-Unified Weather Research and Forecasting (NU-WRF) model. We use a NU-WRF case study simulation over West Africa as an example of evaluating aerosol-cloud-precipitation-land processes with various satellite observations. NU-WRF-simulated geophysical parameters are converted to the satellite-observable raw radiance and backscatter under nearly consistent physics assumptions via the multisensor satellite simulator, the Goddard Satellite Data Simulator Unit. We present varied examples of simple yet robust methods that characterize forecast errors and model physics biases through the spatial and statistical interpretation of various satellite raw signals: infrared brightness temperature (Tb) for surface skin temperature and cloud top temperature, microwave Tb for precipitation ice and surface flooding, and radar and lidar backscatter for aerosol-cloud profiling simultaneously. Because raw satellite signals integrate many sources of geophysical information, we demonstrate user-defined thresholds and a simple statistical process to facilitate evaluations, including the infrared-microwave-based cloud types and lidar/radar-based profile classifications.

  15. Case-Based Multi-Sensor Intrusion Detection

    NASA Astrophysics Data System (ADS)

    Schwartz, Daniel G.; Long, Jidong

    2009-08-01

    Multi-sensor intrusion detection systems (IDSs) combine the alerts raised by individual IDSs and possibly other kinds of devices such as firewalls and antivirus software. A critical issue in building a multi-sensor IDS is alert correlation, i.e., determining which alerts are caused by the same attack. This paper explores a novel approach to alert correlation using case-based reasoning (CBR). Each case in the CBR system's library contains a pattern of alerts raised by some known attack type, together with the identity of the attack. Then, during run time, the alert streams gleaned from the sensors are compared with the patterns in the cases, and a match indicates that the attack described by that case has occurred. For this purpose, the design of a fast and accurate matching algorithm is imperative. Two such algorithms were explored: (i) the well-known Hungarian algorithm, and (ii) an order-preserving matching of our own design. Tests were conducted using the DARPA Grand Challenge Problem attack simulator. These showed that both matching algorithms are effective in detecting attacks, but the Hungarian algorithm is inefficient, whereas the order-preserving one is very efficient and in fact runs in linear time.
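    A hedged sketch of the case-matching step (not the system's implementation): alert correlation is cast as an assignment problem and solved with SciPy's Hungarian-algorithm routine; the alert feature vectors and the distance measure are invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cost(observed, case_pattern):
    """Total distance of the optimal one-to-one pairing between alert feature vectors."""
    cost = np.linalg.norm(observed[:, None, :] - case_pattern[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return cost[rows, cols].sum()

rng = np.random.default_rng(0)
case = rng.random((5, 3))                                           # 5 alerts, 3 features each
observed = case[rng.permutation(5)] + rng.normal(0, 0.02, (5, 3))   # same attack, alerts reordered
unrelated = rng.random((5, 3))                                      # alerts from something else

print("matching case:", match_cost(observed, case))    # small cost: likely the stored attack
print("unrelated:    ", match_cost(unrelated, case))   # noticeably larger cost
```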

  16. Multisensor robotic system for autonomous space maintenance and repair

    NASA Technical Reports Server (NTRS)

    Abidi, M. A.; Green, W. L.; Chandra, T.; Spears, J.

    1988-01-01

    The feasibility of realistic autonomous space manipulation tasks using multisensory information is demonstrated. The system is capable of acquiring, integrating, and interpreting multisensory data to locate, mate, and demate a Fluid Interchange System (FIS) and a Module Interchange System (MIS). In both cases, autonomous location of a guiding light target, mating, and demating of the system are performed. Implemented vision-driven techniques are used to determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. A force/torque sensor continuously monitors the six components of force and torque exerted on the end-effector. Both FIS and MIS experiments were successfully accomplished on mock-ups built for this purpose. The method is immune to variations in the ambient light, in particular because of the 90-minute day-night shift in space.

  17. A parallel implementation of a multisensor feature-based range-estimation method

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond E.; Sridhar, Banavar

    1993-01-01

    There are many proposed vision based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and shared-memory parallel computer.

  18. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network

    PubMed Central

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-01-01

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO2, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning materials from forest residues to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO2 and temperature sensors were chiefly selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO2; smoke and temperature; smoke, CO2 and temperature. The model test results showed that multi-sensor input yielded higher predicting accuracy (≥82.5%) than single-sensor input (50.9%–92.5%). Based on these results, it is possible to reduce the cost while keeping a relatively high fire identification rate, and potential application of the system can be tested in the future under real forest conditions. PMID:27527175
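    A small stand-in for the ANN classification stage on simulated readings (not the field data): a multilayer perceptron distinguishes no-fire, smoldering-dominated and flaming-dominated states from smoke, CO2 and temperature inputs. The per-class sensor means are illustrative values only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# per-class mean response for (smoke, CO2 ppm, temperature degC); invented numbers
means = np.array([[0.05,  400.0, 20.0],    # no combustion
                  [0.60,  900.0, 30.0],    # smoldering-dominated
                  [0.40, 1500.0, 60.0]])   # flaming-dominated
labels = rng.integers(0, 3, 900)
readings = means[labels] * rng.normal(1.0, 0.15, (900, 3))   # multiplicative sensor noise

x_tr, x_te, y_tr, y_te = train_test_split(readings, labels, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
print("test accuracy:", model.fit(x_tr, y_tr).score(x_te, y_te))
```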

  19. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
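    A rough sketch of ROC-driven parameter selection in the spirit described above (not the Yitzhaky-Peli procedure itself): a detector threshold is swept, each setting is scored against an estimated ground truth as an ROC point, and the setting closest to the ideal corner (FPR 0, TPR 1) is kept. The images and threshold grid are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((128, 128))                        # stand-in feature-strength image
estimated_truth = base > 0.8                         # stand-in estimated ground truth
gradient = base + rng.normal(0, 0.05, base.shape)    # the detector operates on a noisy copy

def roc_point(threshold):
    """True- and false-positive rates of the detector at one parameter setting."""
    detected = gradient > threshold
    tp = np.logical_and(detected, estimated_truth).sum()
    fp = np.logical_and(detected, ~estimated_truth).sum()
    return fp / (~estimated_truth).sum(), tp / estimated_truth.sum()

candidates = np.linspace(0.1, 0.95, 18)
# distance of each ROC point from the ideal corner (FPR 0, TPR 1)
distances = [np.hypot(fpr, 1.0 - tpr) for fpr, tpr in map(roc_point, candidates)]
print("selected threshold:", candidates[int(np.argmin(distances))])
```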

  20. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network.

    PubMed

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-08-04

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO₂, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning materials from forest residues to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO₂ and temperature sensors were chiefly selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO₂; smoke and temperature; smoke, CO₂ and temperature. The model test results showed that multi-sensor input yielded higher predicting accuracy (≥82.5%) than single-sensor input (50.9%-92.5%). Based on these results, it is possible to reduce the cost while keeping a relatively high fire identification rate, and potential application of the system can be tested in the future under real forest conditions.

  1. A New Multi-Sensor Track Fusion Architecture for Multi-Sensor Information Integration

    DTIC Science & Technology

    2004-09-01

    Performing organization: Lockheed Martin Aeronautical Systems Company, Marietta, GA. ... tracking process and degrades the track accuracy. ARCHITECTURE OF MULTI-SENSOR TRACK FUSION MODEL. The Alpha ...

  2. Application of adaptive optics in complicated and integrated spatial multisensor system and its measurement analysis

    NASA Astrophysics Data System (ADS)

    Ding, Quanxin; Guo, Chunjie; Cai, Meng; Liu, Hua

    2007-12-01

    The Adaptive Optics Expand System (AOES) is a new-concept spatial instrument that draws deeply on systems theory, cybernetics and informatics, and it is a key way to improve the capability of advanced sensors. The traditional Zernike Phase Contrast Method is extended, and an Accelerated High-level Phase Contrast Theory is established. Integration theory and mathematical simulation are carried out. The equipment, which is based on several crucial components such as the core optical system and a multi-mode wavefront sensor, is established for an advantageous AOES configuration and global design. Studies on complicated spatial multisensor system integration and measurement analysis, including error analysis, are carried out.

  3. ATR architecture for multisensor fusion

    NASA Astrophysics Data System (ADS)

    Hamilton, Mark K.; Kipp, Teresa A.

    1996-06-01

    The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential in platform-based automatic target identification (ATI). In this case, the term identification is used to mean being able to tell the difference between two military vehicles -- e.g., the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the case of single sensor FLIR it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry which directly translates into the target silhouette in the FLIR realm. In addition, it allows structure that is not currently well understood (i.e., adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR as well as the multi-sensor FLIR and laser radar.

  4. Calibrating a novel multi-sensor physical activity measurement system.

    PubMed

    John, D; Liu, S; Sasaki, J E; Howe, C A; Staudenmayer, J; Gao, R X; Freedson, P S

    2011-09-01

    Advancing the field of physical activity (PA) monitoring requires the development of innovative multi-sensor measurement systems that are feasible in the free-living environment. The use of novel analytical techniques to combine and process these multiple sensor signals is equally important. This paper describes a novel multi-sensor 'integrated PA measurement system' (IMS), the lab-based methodology used to calibrate the IMS, techniques used to predict multiple variables from the sensor signals, and proposes design changes to improve the feasibility of deploying the IMS in the free-living environment. The IMS consists of hip and wrist acceleration sensors, two piezoelectric respiration sensors on the torso, and an ultraviolet radiation sensor to obtain contextual information (indoors versus outdoors) of PA. During lab-based calibration of the IMS, data were collected on participants performing a PA routine consisting of seven different ambulatory and free-living activities while wearing a portable metabolic unit (criterion measure) and the IMS. Data analyses on the first 50 adult participants are presented. These analyses were used to determine if the IMS can be used to predict the variables of interest. Finally, physical modifications for the IMS that could enhance the feasibility of free-living use are proposed and refinement of the prediction techniques is discussed.

  5. An enhanced inertial navigation system based on a low-cost IMU and laser scanner

    NASA Astrophysics Data System (ADS)

    Kim, Hyung-Soon; Baeg, Seung-Ho; Yang, Kwang-Woong; Cho, Kuk; Park, Sangdeok

    2012-06-01

    This paper describes an enhanced fusion method for an Inertial Navigation System (INS) based on a 3-axis accelerometer, a 3-axis gyroscope and a laser scanner. In GPS-denied environments such as indoors or dense forests, pure INS odometry is available for estimating the trajectory of a human or robot. However, it has a critical implementation problem: drift error in velocity, position and heading angles. Commonly the problem is solved by fusing visual landmarks, a magnetometer or radio beacons. These methods are not robust in diverse environments: darkness, fog or sunlight, an unstable magnetic field, and environmental obstacles. We propose to overcome the drift problem using an Iterative Closest Point (ICP) scan matching algorithm with a laser scanner. This system consists of three parts. The first is the INS. It estimates attitude, velocity and position based on a 6-axis Inertial Measurement Unit (IMU) with both 'Heuristic Reduction of Gyro Drift' (HRGD) and 'Heuristic Reduction of Velocity Drift' (HRVD) methods. The second is a frame-to-frame ICP matching algorithm for estimating position and attitude from laser scan data. The third is an extended Kalman filter method for fusing the multi-sensor data from the INS and the Laser Range Finder (LRF). The proposed method is simple and robust in diverse environments, so the drift error could be reduced efficiently. We confirm the result by comparing the odometry of the experimental result with the ICP- and LRF-aided INS in a long corridor.
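    The sketch below shows only the frame-to-frame ICP building block on synthetic 2-D scans (no IMU, drift-reduction heuristics or EKF): nearest-neighbour correspondences are paired with an SVD-based rigid alignment and iterated. Scan size, applied motion and iteration count are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=20):
    """Return rotation R and translation t that align source onto target."""
    r, t = np.eye(2), np.zeros(2)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                       # nearest-neighbour correspondences
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        r_step = vt.T @ u.T
        if np.linalg.det(r_step) < 0:                  # keep a proper rotation (no reflection)
            vt[-1] *= -1
            r_step = vt.T @ u.T
        t_step = mu_m - r_step @ mu_s
        src = src @ r_step.T + t_step                  # apply the incremental alignment
        r, t = r_step @ r, r_step @ t + t_step         # accumulate the total transform
    return r, t

rng = np.random.default_rng(0)
scan = rng.random((200, 2)) * 5.0                      # synthetic "previous" scan
angle = np.deg2rad(4.0)
true_r = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
moved = scan @ true_r.T + np.array([0.2, -0.1])        # synthetic "current" scan
r_est, t_est = icp_2d(scan, moved)
print(np.round(r_est, 3), np.round(t_est, 3))          # close to the applied motion
```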

  6. Spatial Distribution of Accuracy of Aerosol Retrievals from Multiple Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Petrenko, Maksym; Ichoku, Charles

    2012-01-01

    Remote sensing of aerosols from space has been a subject of extensive research, with multiple sensors retrieving aerosol properties globally on a daily or weekly basis. The diverse algorithms used for these retrievals operate on different types of reflected signals based on different assumptions about the underlying physical phenomena. Depending on the actual retrieval conditions and especially on the geographical location of the sensed aerosol parcels, the combination of these factors might be advantageous for one or more of the sensors and unfavorable for others, resulting in disagreements between similar aerosol parameters retrieved from different sensors. In this presentation, we will demonstrate the use of the Multi-sensor Aerosol Products Sampling System (MAPSS) to analyze and intercompare aerosol retrievals from multiple spaceborne sensors, including MODIS (on Terra and Aqua), MISR, OMI, POLDER, CALIOP, and SeaWiFS. Based on this intercomparison, we are determining geographical locations where these products provide the greatest accuracy of the retrievals and identifying the products that are the most suitable for retrieval at these locations. The analyses are performed by comparing quality-screened satellite aerosol products to available collocated ground-based aerosol observations from the Aerosol Robotic Network (AERONET) stations, during the period of 2006-2010 when all the satellite sensors were operating concurrently. Furthermore, we will discuss results of a statistical approach that is applied to the collocated data to detect and remove potential data outliers that can bias the results of the analysis.

  7. Multi-Sensor Based Online Attitude Estimation and Stability Measurement of Articulated Heavy Vehicles.

    PubMed

    Zhu, Qingyuan; Xiao, Chunsheng; Hu, Huosheng; Liu, Yuanhui; Wu, Jinjin

    2018-01-13

    Articulated wheel loaders used in the construction industry are heavy vehicles and have poor stability and a high rate of accidents because of the unpredictable changes of their body posture, mass and centroid position in complex operation environments. This paper presents a novel distributed multi-sensor system for real-time attitude estimation and stability measurement of articulated wheel loaders to improve their safety and stability. Four attitude and heading reference systems (AHRS) are constructed using micro-electro-mechanical system (MEMS) sensors, and installed on the front body, rear body, rear axis and boom of an articulated wheel loader to detect its attitude. A complementary filtering algorithm is deployed for sensor data fusion in the system so that steady state margin angle (SSMA) can be measured in real time and used as the judge index of rollover stability. Experiments are conducted on a prototype wheel loader, and results show that the proposed multi-sensor system is able to detect potential unstable states of an articulated wheel loader in real-time and with high accuracy.
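
    A minimal single-axis sketch of the complementary filtering idea used in the AHRS units described above: the integrated gyroscope rate is trusted at short time scales and the accelerometer tilt estimate at long time scales. The gain, axis convention and sample layout are illustrative assumptions:

      import numpy as np

      def complementary_pitch(gyro_rate, accel, dt, alpha=0.98):
          """Estimate pitch (rad) from gyro rate samples (rad/s) and 3-axis accel samples (m/s^2)."""
          pitch, estimates = 0.0, []
          for omega, (ax, ay, az) in zip(gyro_rate, accel):
              # Accelerometer-only tilt, valid when the vehicle is not accelerating.
              pitch_acc = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
              # Blend: high-pass the integrated gyro, low-pass the accelerometer.
              pitch = alpha * (pitch + omega * dt) + (1.0 - alpha) * pitch_acc
              estimates.append(pitch)
          return np.array(estimates)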

  8. Multi-Sensor Based Online Attitude Estimation and Stability Measurement of Articulated Heavy Vehicles

    PubMed Central

    Xiao, Chunsheng; Liu, Yuanhui; Wu, Jinjin

    2018-01-01

    Articulated wheel loaders used in the construction industry are heavy vehicles and have poor stability and a high rate of accidents because of the unpredictable changes of their body posture, mass and centroid position in complex operation environments. This paper presents a novel distributed multi-sensor system for real-time attitude estimation and stability measurement of articulated wheel loaders to improve their safety and stability. Four attitude and heading reference systems (AHRS) are constructed using micro-electro-mechanical system (MEMS) sensors, and installed on the front body, rear body, rear axis and boom of an articulated wheel loader to detect its attitude. A complementary filtering algorithm is deployed for sensor data fusion in the system so that steady state margin angle (SSMA) can be measured in real time and used as the judge index of rollover stability. Experiments are conducted on a prototype wheel loader, and results show that the proposed multi-sensor system is able to detect potential unstable states of an articulated wheel loader in real-time and with high accuracy. PMID:29342850

  9. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    NASA Astrophysics Data System (ADS)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high ramp-rate events, are strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size and acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large-scale fabrication and long-term deployment has been developed during the project prototyping phase.

  10. Multisensor systems today and tomorrow: Machine control, diagnosis and thermal compensation

    NASA Astrophysics Data System (ADS)

    Nunzio, D'Addea

    2000-05-01

    Multisensor techniques for the control of a tribology test rig and for the diagnosis and thermal error compensation of machine tools are the starting point for some considerations on the use of these techniques in fuzzy and neural-net systems. The author concludes that anticipatory systems and multisensor techniques will see considerable improvement and development in the near future, mainly in the thermal error compensation of machine tools.

  11. Maritime Aerosol Network optical depth measurements and comparison with satellite retrievals from various different sensors

    NASA Astrophysics Data System (ADS)

    Smirnov, Alexander; Petrenko, Maksym; Ichoku, Charles; Holben, Brent N.

    2017-10-01

    The paper reports on the current status of the Maritime Aerosol Network (MAN), which is a component of the Aerosol Robotic Network (AERONET). A public domain web-based data archive dedicated to MAN activity can be found at https://aeronet.gsfc.nasa.gov/new_web/maritime_aerosol_network.html. Since 2006, over 450 cruises have been completed and the data archive consists of more than 6000 measurement days. In this work, we present MAN observations collocated with MODIS Terra, MODIS Aqua, MISR, POLDER, SeaWIFS, OMI, and CALIOP spaceborne aerosol products using a modified version of the Multi-Sensor Aerosol Products Sampling System (MAPSS) framework. Because of the different spatio-temporal characteristics of the analyzed products, the number of MAN data points collocated with spaceborne retrievals varied from 1500 matchups for MODIS to 39 for CALIOP (as of August 2016). Despite these unavoidable sampling biases, latitudinal dependencies of AOD differences for all satellite sensors, except for SeaWIFS and POLDER, showed positive biases against ground truth (i.e. MAN) in the southern latitudes (<50° S), and substantial scatter in the Northern Atlantic "dust belt" (5°-15° N). Our analysis did not intend to determine whether satellite retrievals are within claimed uncertainty boundaries, but rather to show where bias exists and corrections are needed.

  12. Intelligent agents: adaptation of autonomous bimodal microsystems

    NASA Astrophysics Data System (ADS)

    Smith, Patrice; Terry, Theodore B.

    2014-03-01

    Autonomous bimodal microsystems exhibiting survivability behaviors and characteristics are able to adapt dynamically in any given environment. Equipped with a background-blending exoskeleton, such a system will have the capability to stealthily detect and observe a self-chosen viewing area while exercising some measurable form of self-preservation by either flying or crawling away from a potential adversary. The robotic agent in this capacity activates a walk-fly algorithm, which uses a built-in multi-sensor processing and navigation subsystem or algorithm for visual guidance and the best walk-fly path trajectory to evade capture or annihilation. The research detailed in this paper describes the theoretical walk-fly algorithm, which broadens the scope of spatial and temporal learning, locomotion, and navigational performance based on the optical flow signals necessary for flight dynamics and walking stability. By observing a fly's travel and avoidance behaviors, and by drawing on the reverse-bioengineering research efforts of others, we were able to conceptualize an algorithm that works in conjunction with decision-making functions, sensory processing, and sensorimotor integration. Our findings suggest that this highly complex decentralized algorithm promotes in-flight and terrain-travel stability, which is highly suitable for non-aggressive micro platforms supporting search and rescue (SAR) and chemical and explosive detection (CED) purposes; a necessity in turbulent, non-violent structured or unstructured environments.

  13. Direct Aerosol Radiative Forcing from Combined A-Train Observations - Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-sky Estimates

    NASA Astrophysics Data System (ADS)

    Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.

    2014-12-01

    We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  14. Improved blood glucose estimation through multi-sensor fusion.

    PubMed

    Xiong, Feiyu; Hipszer, Brian R; Joseph, Jeffrey; Kam, Moshe

    2011-01-01

    Continuous glucose monitoring systems are an integral component of diabetes management. Efforts to improve the accuracy and robustness of these systems are at the forefront of diabetes research. Towards this goal, a multi-sensor approach was evaluated in hospitalized patients. In this paper, we report on a multi-sensor fusion algorithm to combine glucose sensor measurements in a retrospective fashion. The results demonstrate the algorithm's ability to improve the accuracy and robustness of the blood glucose estimation with current glucose sensor technology.
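
    The abstract does not spell out the fusion rule; purely as an illustration, an inverse-variance weighted combination of simultaneous readings from several glucose sensors could look like the sketch below (all names and numbers are hypothetical):

      import numpy as np

      def fuse_readings(values, variances):
          """Inverse-variance weighted estimate from several sensors sampled at the same time."""
          values, variances = np.asarray(values, float), np.asarray(variances, float)
          weights = 1.0 / variances
          estimate = np.sum(weights * values) / np.sum(weights)
          return estimate, 1.0 / np.sum(weights)   # fused value and its variance

      # e.g. three sensors reporting 132, 140 and 128 mg/dL with different noise levels
      print(fuse_readings([132, 140, 128], [25, 64, 36]))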

  15. Reliability measurement during software development. [for a multisensor tracking system

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    During the development of data base software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines were established from these measurements that provided good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.

  16. A-Train Aerosol Observations Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-Sky Estimates

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; hide

    2014-01-01

    We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  17. Observability considerations for multi-sensor and product fusion: Bias, information content, and validation (Invited)

    NASA Astrophysics Data System (ADS)

    Reid, J. S.; Zhang, J.; Hyer, E. J.; Campbell, J. R.; Christopher, S. A.; Ferrare, R. A.; Leptoukh, G. G.; Stackhouse, P. W.

    2009-12-01

    With the successful development of many aerosol products from the NASA A-train as well as new operational geostationary and polar orbiting sensors, the scientific community now has a host of new parameters to use in their analyses. The variety and quality of products have reached a point where the community has moved from basic observation-based science to sophisticated multi-component research that addresses the complex atmospheric environment. In order for these satellite data to contribute to the science, their uncertainty levels must move from semi-quantitative to quantitative. Initial attempts to quantify uncertainties have led to some recent debate in the community as to the efficacy of aerosol products from current and future NASA satellite sensors. In an effort to understand the state of satellite product fidelity, the Naval Research Laboratory and a newly reformed Global Energy and Water Cycle Experiment (GEWEX) aerosol panel have both initiated assessments of the nature of aerosol remote sensing uncertainty and bias. In this talk, we go over areas of specific concern based on the authors' experiences with the data, emphasizing the multi-sensor problem. We first enumerate potential biases, including retrieval, sampling/contextual, and cognitive bias. We show examples of how these biases can subsequently lead to the pitfalls of correlated/compensating errors, tautology, and confounding. The nature of bias is closely related to the information content of the sensor signal and its subsequent application to the derived aerosol quantity of interest (e.g., optical depth, flux, index of refraction, etc.). Consequently, purpose-specific validation methods must be employed, especially when generating multi-sensor products. Indeed, cloud and lower boundary condition biases in particular complicate the more typical methods of regressional bias elimination and histogram matching. We close with a discussion of sequestration of uncertainty in multi-sensor applications of these products in both pair-wise and fused fashions.

  18. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery), but also in cultural heritage and robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide further benefits, for example a gain in acquisition time. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  19. NASA GES DISC Level 2 Aerosol Analysis and Visualization Services

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer; Petrenko, Maksym; Ichoku, Charles; Yang, Wenli; Johnson, James; Zhao, Peisheng; Kempler, Steve

    2015-01-01

    Overview of NASA GES DISC Level 2 aerosol analysis and visualization services: DQViz (Data Quality Visualization), MAPSS (Multi-sensor Aerosol Products Sampling System), and MAPSS_Explorer (Multi-sensor Aerosol Products Sampling System Explorer).

  20. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation

    PubMed Central

    Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-01-01

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensors approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications. PMID:29534448

  1. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation.

    PubMed

    Placidi, Giuseppe; Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-03-10

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensors approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.

  2. Information-based approach to performance estimation and requirements allocation in multisensor fusion for target recognition

    NASA Astrophysics Data System (ADS)

    Harney, Robert C.

    1997-03-01

    A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.

  3. General software design for multisensor data fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Junliang; Zhao, Yuming

    1999-03-01

    In this paper, a general method of software design for multisensor data fusion, which adopts object-oriented technology under the UNIX operating system, is discussed in detail. The software for multisensor data fusion is divided into six functional modules: data collection, database management, GIS, target display and alarming, data simulation, etc. Furthermore, the primary function, the components and some realization methods of each module are given. The interfaces among these functional modules are discussed. Data exchange among the functional modules is performed by interprocess communication (IPC), including message queues, semaphores and shared memory. Thus, each functional module is executed independently, which reduces the dependence among functional modules and helps software programming and testing. The software for multisensor data fusion is designed as a hierarchical structure using the inheritance character of classes. Each functional module is abstracted and encapsulated through a class structure, which avoids software redundancy and enhances readability.

  4. Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency

    PubMed Central

    Abu Bakr, Muhammad; Lee, Sukhan

    2017-01-01

    The paradigm of multisensor data fusion has evolved from a centralized architecture to a decentralized or distributed architecture along with the advancement in sensor and communication technologies. These days, distributed state estimation and data fusion have been widely explored in diverse fields of engineering and control due to their superior performance over the centralized approach in terms of flexibility, robustness to failure and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date with a specific focus on handling unknown correlation and data inconsistency. We aim at providing readers with a unifying view out of individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted. PMID:29077035

  5. Particle Filter-Based Recursive Data Fusion With Sensor Indexing for Large Core Neutron Flux Estimation

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2017-06-01

    We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution of a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus their reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in the large, loosely coupled reactor core is sensed by multiple sensors that measure point fluxes at various locations inside the core. The flux values are coupled to each other through the diffusion equation. This coupling provides redundancy in the information. It is shown that multiple independent data about the localized flux can be fused together to enhance the estimation accuracy to a great extent. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates anomalous data.
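
    A minimal bootstrap (sequential importance resampling) particle filter for a scalar state observed by several point sensors, illustrating the multisensor PF mechanics described above; the random-walk process model, noise levels and resampling threshold are illustrative assumptions rather than the paper's reactor model:

      import numpy as np

      rng = np.random.default_rng(0)

      def particle_filter(measurements, n_particles=1000, q=0.05, r=0.1):
          """measurements: (T, n_sensors) array; returns the per-step state estimates."""
          particles = rng.normal(1.0, 0.1, n_particles)        # initial guess of the flux level
          weights = np.full(n_particles, 1.0 / n_particles)
          estimates = []
          for z in measurements:
              particles = particles + rng.normal(0.0, q, n_particles)   # random-walk propagation
              for zi in z:                                      # each sensor adds an independent likelihood
                  weights *= np.exp(-0.5 * ((zi - particles) / r) ** 2)
              weights = weights + 1e-300                        # avoid an all-zero weight vector
              weights /= weights.sum()
              estimates.append(np.sum(weights * particles))
              if 1.0 / np.sum(weights**2) < n_particles / 2:    # resample on low effective sample size
                  idx = rng.choice(n_particles, n_particles, p=weights)
                  particles = particles[idx]
                  weights = np.full(n_particles, 1.0 / n_particles)
          return np.array(estimates)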

  6. Automatic Construction of Wi-Fi Radio Map Using Smartphones

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Li, Qingquan; Zhang, Xing

    2016-06-01

    Indoor positioning could provide interesting services and applications. As one of the most popular indoor positioning methods, location fingerprinting determines the location of mobile users by matching the received signal strength (RSS), which is location dependent. However, fingerprinting-based indoor positioning requires calibration and updating of the fingerprints, which is labor-intensive and time-consuming. In this paper, we propose a vision-based approach for the construction of a radio map for unknown indoor environments without any prior knowledge. The approach collects multi-sensor data, e.g. video, accelerometer, gyroscope, Wi-Fi signals, etc., while people (with smartphones) walk freely in indoor environments. It then uses the multi-sensor data to reconstruct the trajectories of the people based on an integrated structure from motion (SFM) and image matching method, and finally estimates the locations of sampling points on the trajectories and constructs the Wi-Fi radio map. Experimental results show that the average location error of the fingerprints is about 0.53 m.
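
    Once a radio map exists, the fingerprint-matching step mentioned above can be illustrated with a weighted k-nearest-neighbour lookup over stored RSS vectors; the map layout, access-point count and names below are hypothetical:

      import numpy as np

      def knn_locate(radio_map, query_rss, k=3):
          """radio_map: list of ((x, y), rss_vector) pairs; query_rss: RSS vector from the same APs."""
          positions = np.array([p for p, _ in radio_map], float)
          fingerprints = np.array([f for _, f in radio_map], float)
          d = np.linalg.norm(fingerprints - np.asarray(query_rss, float), axis=1)
          nearest = np.argsort(d)[:k]
          w = 1.0 / (d[nearest] + 1e-6)           # closer fingerprints receive larger weights
          return (w[:, None] * positions[nearest]).sum(axis=0) / w.sum()

      # radio map with three reference points and RSS (dBm) from two access points
      rmap = [((0, 0), [-40, -70]), ((5, 0), [-55, -60]), ((0, 5), [-65, -45])]
      print(knn_locate(rmap, [-50, -62], k=2))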

  7. Multi-sensor millimeter-wave system for hidden objects detection by non-collaborative screening

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Czarny, Romain; Diaz, Frédéric; Khy, Antoine; Lamarque, Thierry

    2011-05-01

    In this work, we present the development of a multi-sensor system for the detection of objects concealed under clothes using passive and active millimeter-wave (mmW) technologies. This study concerns both the optimization of a commercial passive mmW imager at 94 GHz using a phase mask and the development of an active mmW detector at 77 GHz based on synthetic aperture radar (SAR). A first wide-field inspection is done by the passive imager while the person is walking. If a suspicious area is detected, the active imager is switched on and focused on this area in order to obtain more accurate data (shape of the object, nature of the material ...).

  8. Mobile Robotics Activities in DOE Laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ron Lujan; Jerry Harbour; John T. Feddema

    This paper will briefly outline major activities in Department of Energy (DOE) Laboratories focused on mobile platforms, both Unmanned Ground Vehicles (UGVs) and Unmanned Air Vehicles (UAVs). The activities will be discussed in the context of the science and technology construct used by the DOE Technology Roadmap for Robotics and Intelligent Machines (RIM) published in 1998; namely, Perception, Reasoning, Action, and Integration. The activities to be discussed span from research and development to deployment in field operations. The activities support customers in other agencies. The discussion of "perception" will include hyperspectral sensors, complex pattern discrimination, multisensor fusion, and advances in LADAR technologies, including real-world perception. "Reasoning" activities to be covered include cooperative controls, distributed systems, ad-hoc networks, platform-centric intelligence, and adaptable communications. The paper will discuss "action" activities such as advanced mobility and various air and ground platforms. In the RIM construct, "integration" includes Human-Machine Integration. Accordingly, the paper will discuss adjustable autonomy and the collaboration of operator(s) with distributed UGVs and UAVs. Integration also refers to the application of these technologies in systems that perform operations such as perimeter surveillance, large-area monitoring and reconnaissance. Unique facilities and test beds for advanced mobile systems will be described. Given that this paper is an overview, rather than delving into specific detail on these activities, other more exhaustive references and sources will be cited extensively.

  9. Multi-sensor Navigation System Design

    DOT National Transportation Integrated Search

    1971-03-01

    This report treats the design of navigation systems that collect data from two or more on-board measurement subsystems and process these data in an on-board computer. Such systems are called Multi-sensor Navigation Systems. The design begins with t...

  10. PERKINELMER ELM

    EPA Science Inventory

    The PerkinElmer Elm (formerly the AirBase CanarIT) is a multi-sensor air quality monitoring device that measures particulate matter (PM), total volatile organic compounds (VOCs), nitrogen dioxide (NO2), and several other atmospheric components. PM, VOCs, and NO2

  11. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification.

    PubMed

    Zhou, Tao; Li, Zhaofu; Pan, Jianjun

    2018-01-27

    This paper focuses on evaluating the ability and contribution of using backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size of the combination of all texture features was 9 × 9, and the optimal window size was different for each individual texture feature. For the four different feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features, and the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of the four features had the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy compared to the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data was added to Hyperion images, the overall accuracy and kappa coefficient were increased by 4.01% and 0.0519, respectively.
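
    A minimal sketch of the multi-sensor feature stacking and random forest classification described above, assuming per-pixel feature arrays have already been extracted; array names, split ratio and forest size are illustrative rather than the authors' settings:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score, cohen_kappa_score
      from sklearn.model_selection import train_test_split

      def classify(sar_features, optical_features, labels):
          """sar_features, optical_features: (n_pixels, n_feat) arrays; labels: (n_pixels,) class ids."""
          X = np.hstack([sar_features, optical_features])       # multi-sensor feature stack
          X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
          rf = RandomForestClassifier(n_estimators=500, random_state=0)
          rf.fit(X_tr, y_tr)
          y_pred = rf.predict(X_te)
          return accuracy_score(y_te, y_pred), cohen_kappa_score(y_te, y_pred)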

  12. Online tools for uncovering data quality issues in satellite-based global precipitation products

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Heo, G.

    2015-12-01

    Accurate and timely global precipitation products are important to many applications such as flood forecasting, hydrological modeling, vector-borne disease research, crop yield estimates, etc. However, data quality issues such as biases and uncertainties are common in satellite-based precipitation products, and it is important to understand these issues in applications. In recent years, algorithms using multiple satellites and sensors for satellite-based precipitation estimates have become popular, such as the TRMM (Tropical Rainfall Measuring Mission) Multi-satellite Precipitation Analysis (TMPA) and the latest Integrated Multi-satellitE Retrievals for GPM (IMERG). Studies show that data quality issues for multi-satellite and multi-sensor products can vary with space and time and can be difficult to summarize. Online tools can provide customized results for a given area of interest, allowing customized investigation or comparison of several precipitation products. Because downloading data and software is not required, online tools can facilitate precipitation product evaluation and comparison. In this presentation, we will present online tools to uncover data quality issues in satellite-based global precipitation products. Examples will be presented as well.

  13. Computer controlled multisensor thermocouple apparatus for invasive measurement of temperature.

    PubMed

    Hanus, J; Záhora, J; Volenec, K

    1996-01-01

    A computer-controlled apparatus for invasive measurement of the temperature profile of biological systems, based on an original miniature multithermocouple probe, is described in this article. The main properties of the measuring system were verified using an original testing device.

  14. Regional Drought Monitoring Based on Multi-Sensor Remote Sensing

    NASA Astrophysics Data System (ADS)

    Rhee, Jinyoung; Im, Jungho; Park, Seonyoung

    2014-05-01

    Drought originates from the deficit of precipitation and impacts the environment, including agriculture and hydrological resources, as it persists. The assessment and monitoring of drought have traditionally been performed using a variety of drought indices based on meteorological data, and recently the use of remote sensing data is gaining much attention due to its vast spatial coverage and cost-effectiveness. Drought information has been successfully derived from remotely sensed data related to some biophysical and meteorological variables, and drought monitoring is advancing with the development of remote sensing-based indices such as the Vegetation Condition Index (VCI), Vegetation Health Index (VHI), and Normalized Difference Water Index (NDWI), to name a few. The Scaled Drought Condition Index (SDCI) has also been proposed for humid regions, demonstrating the suitability of multi-sensor data for agricultural drought monitoring. In this study, remote sensing-based hydro-meteorological variables related to drought, including precipitation, temperature, evapotranspiration, and soil moisture, were examined, and the SDCI was improved by providing multiple blends of the multi-sensor indices for different types of drought. Multiple indices were examined together since the coupling and feedback between variables are intertwined and it is not appropriate to investigate only a limited set of variables to monitor each type of drought. The purpose of this study is to verify the significance of each variable for monitoring each type of drought and to examine the combination of multi-sensor indices for more accurate and timely drought monitoring. The weights for the blends of multiple indicators were obtained from the importance of variables calculated by non-linear optimization using a machine learning technique called Random Forest. The case study was performed in the Republic of Korea, which has four distinct seasons over the course of the year and contains complex topography with a variety of land cover types. Remote sensing data from the Tropical Rainfall Measuring Mission (TRMM) satellite and the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Microwave Scanning Radiometer-EOS (AMSR-E) sensors were obtained for the period from 2000 to 2012, and observation data from 99 weather stations, 441 streamflow gauges, as well as the gridded observation data from Asian Precipitation Highly-Resolved Observational Data Integration Towards Evaluation of the Water Resources (APHRODITE), were obtained for validation. The objective blends of multiple indicators enabled better assessment of various types of drought and can be useful for a drought early warning system. Since the improved SDCI is based on remotely sensed data, it can be easily applied to regions with limited or no observation data for drought assessment and monitoring.
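
    A minimal sketch of the blending idea described above: learn variable importances with a Random Forest trained against a reference drought indicator and use them, normalized, as weights for combining the scaled remote-sensing indices. The regressor target, forest size and names are illustrative assumptions:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def blend_indices(indices, reference):
          """indices: (n_samples, n_indices) scaled indices in [0, 1]; reference: (n_samples,) in-situ indicator."""
          rf = RandomForestRegressor(n_estimators=300, random_state=0)
          rf.fit(indices, reference)
          weights = rf.feature_importances_ / rf.feature_importances_.sum()   # normalized blend weights
          return indices @ weights, weights                                   # blended index stays in [0, 1]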

  15. A system for activity recognition using multi-sensor fusion.

    PubMed

    Gao, Lei; Bourke, Alan K; Nelson, John

    2011-01-01

    This paper proposes a system for activity recognition using multi-sensor fusion. In this system, four sensors are attached to the waist, chest, thigh, and side of the body. In the study, we present solutions for two factors that affect activity recognition accuracy: calibration drift and changes in sensor orientation. The datasets used to evaluate this system were collected from 8 subjects who were asked to perform 8 scripted normal activities of daily living (ADL), three times each. The Naïve Bayes classifier using multi-sensor fusion is adopted and achieves 70.88%-97.66% recognition accuracies for 1-4 sensors.
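
    A minimal sketch of feature-level fusion with a Naive Bayes classifier, concatenating the features computed from the waist, chest, thigh and side sensors before classification; shapes, names and the cross-validation setup are illustrative assumptions:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      def recognize_activities(sensor_features, labels):
          """sensor_features: list of (n_windows, n_feat) arrays, one per body location."""
          X = np.hstack(sensor_features)          # fuse waist, chest, thigh and side features
          scores = cross_val_score(GaussianNB(), X, labels, cv=5)
          return scores.mean()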

  16. Multi-sensor sheets based on large-area electronics for advanced structural health monitoring of civil infrastructure.

    DOT National Transportation Integrated Search

    2014-09-01

    Structural Health Monitoring has great potential to provide valuable information about the actual structural condition and can help optimize management activities. However, few effective and robust monitoring technologies exist, which hinders a...

  17. Development of a Multi-Sensor Cancer Detection Probe Final Report CRADA No. TC-2026-01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marion, J.; Hular, R.

    This collaboration continued work started under a previous CRADA (TSB-2023-00) to take a detailed concept specification for a multi-sensor needle/probe suitable for breast cancer analysis and produce a prototype system suitable for human FDA trials.

  18. Development of a multi-sensor based urban discharge forecasting system using remotely sensed data: A case study of extreme rainfall in South Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Sunkwon; Jang, Sangmin; Park, Kyungwon

    2017-04-01

    Extreme weather due to the changing climate is a main source of water-related disasters such as flooding and inundation, and the damage it causes is expected to accelerate worldwide. To prevent water-related disasters and mitigate their damage in urban areas in the future, we developed a multi-sensor based real-time discharge forecasting system using remotely sensed data such as radar and satellite observations. We used the Communication, Ocean and Meteorological Satellite (COMS) and Korea Meteorological Agency (KMA) weather radar for quantitative precipitation estimation. The Automatic Weather System (AWS) and the McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation (MAPLE) were used for verification of rainfall accuracy. The tropical Z-R relationship (Z = 32R^1.65) was applied as the optimal Z-R relation, and it was confirmed that the accuracy improves for extreme rainfall events. In addition, the performance of the blended multi-sensor rainfall estimates improved for rainfall of 60 mm/h and stronger heavy rainfall events. Moreover, we forecast the urban discharge using the Storm Water Management Model (SWMM). Several statistical methods were used to assess the agreement between observed and simulated discharge. In terms of the correlation coefficient and r-squared, observed and forecasted discharge were highly correlated. Based on this study, we demonstrated the possibility of a real-time urban discharge forecasting system using remotely sensed data and its utilization for real-time flood warning. Acknowledgement: This research was supported by a grant (13AWMP-B066744-01) from the Advanced Water Management Research Program (AWMP) funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government.
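
    The tropical Z-R relationship quoted above inverts directly to a rain rate; a minimal sketch converting radar reflectivity in dBZ to rain rate in mm/h under the stated Z = 32R^1.65 assumption:

      import numpy as np

      def rain_rate_from_dbz(dbz, a=32.0, b=1.65):
          """Invert Z = a * R**b, with Z (mm^6/m^3) obtained from dBZ = 10*log10(Z)."""
          z = 10.0 ** (np.asarray(dbz, float) / 10.0)
          return (z / a) ** (1.0 / b)

      # e.g. 45 dBZ corresponds to roughly 65 mm/h under this relationship
      print(rain_rate_from_dbz(45.0))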

  19. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) reanalysis, based on the WSR-88D Next-generation Radar (Nexrad) network over the Continental United States (CONUS), is nearly completed for the period from 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution and a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and to merge with the radar-only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we are facing are the difficulties of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the type of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants, such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized kriging (CBPK), among others. In addition, we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales, from sub-hourly to annual. In addition, comparisons over the same period with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) and satellite products (TMPA, CMORPH, PERSIANN) are provided in order to give a detailed picture of the improvements and remaining challenges.
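
    As a deliberately simplified stand-in for the gauge-based adjustment discussed above (the kriging variants under investigation are not reproduced here), the sketch below applies a single multiplicative mean-field bias factor computed from collocated gauge and radar accumulations; all names are illustrative:

      import numpy as np

      def mean_field_bias_adjust(radar_grid, gauge_values, radar_at_gauges, min_radar=0.1):
          """Scale a radar-only QPE grid by the gauge/radar ratio at collocated points."""
          g = np.asarray(gauge_values, float)
          r = np.asarray(radar_at_gauges, float)
          valid = (r > min_radar) & (g > 0)        # ignore dry or missing pairs
          bias = g[valid].sum() / r[valid].sum()   # mean-field bias factor
          return radar_grid * bias, bias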

  20. Multi-Temporal Multi-Sensor Analysis of Urbanization and Environmental/Climate Impact in China for Sustainable Urban Development

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun

    2016-08-01

    The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities, including Beijing, Shanghai and Guangzhou, are selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan and city-core scales were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted using multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with higher accuracy than SAR or optical data alone. Pixel-based and object-based change detection algorithms developed within the project were effective in extracting urban changes. Comparing the urban land cover results from multitemporal multisensor data, the environmental impact analysis indicates major losses for food supply, noise reduction, runoff mitigation, waste treatment and global climate regulation services through landscape structural changes in terms of decreases in service area, edge contamination and fragmentation. In terms of climate impact, the results indicate that land surface temperature can be related to land use/land cover classes.

  1. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Multi-sensor machining status monitoring techniques can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control techniques are normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet and database techniques, is then put forward. Based on virtual instrument techniques, the author developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and acoustic emission (AE) signal information from the wheel dressing process, the cause of machining quality fluctuation was identified. The experimental results indicate that the approach is suitable for the status monitoring and analysis of machining processes.

  2. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  3. Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory.

    PubMed

    Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Xu, Jing; Zheng, Kehong

    2015-11-13

    In order to efficiently and accurately identify the cutting condition of a shearer, this paper proposed an intelligent multi-sensor data fusion identification method using the parallel quasi-Newton neural network (PQN-NN) and the Dempster-Shafer (DS) theory. The vibration acceleration signals and current signal of six cutting conditions were collected from a self-designed experimental system and some special state features were extracted from the intrinsic mode functions (IMFs) based on the ensemble empirical mode decomposition (EEMD). In the experiment, three classifiers were trained and tested by the selected features of the measured data, and the DS theory was used to combine the identification results of three single classifiers. Furthermore, some comparisons with other methods were carried out. The experimental results indicate that the proposed method performs with higher detection accuracy and credibility than the competing algorithms. Finally, an industrial application example in the fully mechanized coal mining face was demonstrated to specify the effect of the proposed system.
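
    A minimal implementation of Dempster's rule of combination used in the decision-level fusion step described above, combining two basic belief assignments (BBAs) over a common frame of cutting-condition hypotheses; the hypothesis labels and mass values are hypothetical:

      def dempster_combine(m1, m2):
          """Combine two BBAs given as {frozenset(hypotheses): mass} dictionaries."""
          combined, conflict = {}, 0.0
          for a, ma in m1.items():
              for b, mb in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + ma * mb
                  else:
                      conflict += ma * mb          # mass assigned to the empty set
          if conflict >= 1.0:
              raise ValueError("total conflict: the evidence cannot be combined")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # two classifiers reporting beliefs over three cutting conditions c1, c2, c3
      m_vibration = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.3, frozenset({"c1", "c2", "c3"}): 0.1}
      m_current = {frozenset({"c1"}): 0.5, frozenset({"c3"}): 0.2, frozenset({"c1", "c2", "c3"}): 0.3}
      print(dempster_combine(m_vibration, m_current))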

  4. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements.

    PubMed

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-11-15

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from the lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output shows a better estimation, when compared to inertial measurement units and Kinect outputs. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
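
    For the quaternion-based angle estimation mentioned above, a minimal sketch that extracts a flexion angle from the orientation quaternions of the thigh and shank segments; the quaternion convention, segment naming and use of SciPy are illustrative assumptions, and the paper's extended Kalman filter fusion itself is not reproduced here:

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def knee_flexion_deg(q_thigh, q_shank):
          """q_thigh, q_shank: unit quaternions [x, y, z, w] of the two segments in a common frame."""
          rel = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)   # shank orientation relative to thigh
          return np.degrees(rel.magnitude())                        # total rotation angle in degrees

      # identical orientations give 0 deg; a 30 deg rotation about one axis gives 30 deg
      print(knee_flexion_deg([0, 0, 0, 1], R.from_euler("x", 30, degrees=True).as_quat()))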

  5. On the Temporal Stability of Analyte Recognition with an E-Nose Based on a Metal Oxide Sensor Array in Practical Applications.

    PubMed

    Kiselev, Ilia; Sysoev, Victor; Kaikov, Igor; Koronczi, Ilona; Adil Akai Tegin, Ruslan; Smanalieva, Jamila; Sommer, Martin; Ilicali, Coskan; Hauptmannl, Michael

    2018-02-11

    The paper deals with a functional instability of electronic nose (e-nose) units, which significantly limits their real-life applications. Here we demonstrate how to approach this issue with the example of an e-nose based on a metal oxide sensor array developed at the Karlsruhe Institute of Technology (Germany). We consider the instability of e-nose operation at different time scales ranging from minutes to many years. To test the e-nose, we employ open-air and headspace sampling of analyte odors. The multivariate recognition algorithm used to process the multisensor array signals is based on the linear discriminant analysis method. Based on the obtained results, we argue that the stability of device operation is mostly affected by accidental changes in the ambient air composition. To overcome instabilities, we introduce the add-training procedure, which is found to successfully manage both temporal changes of the ambient air and the drift of multisensor array properties, even long-term. The method can be easily implemented in practical applications of e-noses and improves prospects for device marketing.

  6. A novel multisensor traffic state assessment system based on incomplete data.

    PubMed

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang

    2014-01-01

    A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. A new method based on historical data is then proposed to fuse and recover the incomplete data. Exploiting the spatial complementarity of the probe vehicle detector and fixed detector data, a space-matching fusion model is presented to estimate the mean travel speed of the road. Finally, the traffic flow data (flow, speed, and occupancy rate) detected between the Beijing Deshengmen and Drum Tower bridges are fused to assess the traffic state of the road using a fusion decision model based on rough sets and cloud models. The experimental accuracy exceeds 98%, and the result is in accordance with the actual road traffic state. The system is effective for traffic state assessment and is suitable for urban intelligent transportation systems.

  7. A Novel Multisensor Traffic State Assessment System Based on Incomplete Data

    PubMed Central

    Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Jiang, Yaoliang

    2014-01-01

    A novel multisensor system with incomplete data is presented for traffic state assessment. The system comprises probe vehicle detection sensors, fixed detection sensors, and a traffic state assessment algorithm. First, validity checking of the traffic flow data is performed as preprocessing. A new method based on historical data is then proposed to fuse and recover the incomplete data. Exploiting the spatial complementarity of the probe vehicle detector and fixed detector data, a space-matching fusion model is presented to estimate the mean travel speed of the road. Finally, the traffic flow data (flow, speed, and occupancy rate) detected between the Beijing Deshengmen and Drum Tower bridges are fused to assess the traffic state of the road using a fusion decision model based on rough sets and cloud models. The experimental accuracy exceeds 98%, and the result is in accordance with the actual road traffic state. The system is effective for traffic state assessment and is suitable for urban intelligent transportation systems. PMID:25162055

  8. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.

    PubMed

    Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-10-16

    To meet ever higher accuracy and system reliability requirements, information fusion for multi-sensor systems is of increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationships between pieces of evidence may lead to unreasonable fusion results and even to wrong decisions. This assumption severely limits the practical application and further development of D-S evidence theory. In this paper, an innovative evidence fusion model that deals with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the degree of dependence between different pieces of evidence. Then, a total discount coefficient is obtained from the dependence degree, which also accounts for the reliability of the evidence. Finally, the discounted evidence fusion model is presented. An example illustrates the use and effectiveness of the proposed method.
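
    One way to picture dependence-driven discounting is sketched below: Spearman's rank correlation between two sensors' readings serves as the dependence degree, which is then turned into a Shafer discount factor before combination. The readings, the mapping from correlation to discount, and the frame labels are all illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def discount_bpa(m, alpha, frame):
        """Shafer discounting: scale masses by alpha and move the remaining
        mass to the full frame of discernment (total ignorance)."""
        out = {h: alpha * v for h, v in m.items() if h != frame}
        out[frame] = 1.0 - alpha + alpha * m.get(frame, 0.0)
        return out

    # Hypothetical readings from two sensors observed over the same events.
    s1 = np.array([0.82, 0.75, 0.91, 0.60, 0.88])
    s2 = np.array([0.80, 0.70, 0.95, 0.58, 0.90])
    rho, _ = spearmanr(s1, s2)          # rank correlation as dependence degree

    # The more dependent the evidence, the more it is discounted before fusion
    # (the paper's discount law additionally folds in evidence reliability).
    alpha = 1.0 - 0.5 * max(rho, 0.0)
    m = {"fault": 0.7, "normal": 0.2, "THETA": 0.1}
    print(discount_bpa(m, alpha, "THETA"))
    ```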

  9. On the Temporal Stability of Analyte Recognition with an E-Nose Based on a Metal Oxide Sensor Array in Practical Applications

    PubMed Central

    Kaikov, Igor; Koronczi, Ilona; Adil Akai Tegin, Ruslan; Smanalieva, Jamila; Sommer, Martin; Ilicali, Coskan; Hauptmannl, Michael

    2018-01-01

    The paper deals with the functional instability of electronic nose (e-nose) units, which significantly limits their real-life applications. Here we demonstrate how to approach this issue with the example of an e-nose based on a metal oxide sensor array developed at the Karlsruhe Institute of Technology (Germany). We consider the instability of e-nose operation at different time scales ranging from minutes to many years. To test the e-nose we employ open-air and headspace sampling of analyte odors. The multivariate recognition algorithm used to process the multisensor array signals is based on linear discriminant analysis. Based on the obtained results, we argue that the stability of device operation is mostly affected by accidental changes in the ambient air composition. To overcome these instabilities, we introduce an add-training procedure which is found to successfully manage both the temporal changes of the ambient air and the drift of the multisensor array properties, even in the long term. The method can be easily implemented in practical applications of e-noses and improves the prospects for device marketing. PMID:29439468

  10. A method based on multi-sensor data fusion for fault detection of planetary gearboxes.

    PubMed

    Lei, Yaguo; Lin, Jing; He, Zhengjia; Kong, Detong

    2012-01-01

    Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults.

  11. Extended Kalman Doppler tracking and model determination for multi-sensor short-range radar

    NASA Astrophysics Data System (ADS)

    Mittermaier, Thomas J.; Siart, Uwe; Eibert, Thomas F.; Bonerz, Stefan

    2016-09-01

    A tracking solution for collision avoidance in industrial machine tools based on short-range millimeter-wave radar Doppler observations is presented. At the core of the tracking algorithm there is an Extended Kalman Filter (EKF) that provides dynamic estimation and localization in real-time. The underlying sensor platform consists of several homodyne continuous wave (CW) radar modules. Based on In-phase-Quadrature (IQ) processing and down-conversion, they provide only Doppler shift information about the observed target. Localization with Doppler shift estimates is a nonlinear problem that needs to be linearized before the linear KF can be applied. The accuracy of state estimation depends highly on the introduced linearization errors, the initialization and the models that represent the true physics as well as the stochastic properties. The important issue of filter consistency is addressed and an initialization procedure based on data fitting and maximum likelihood estimation is suggested. Models for both, measurement and process noise are developed. Tracking results from typical three-dimensional courses of movement at short distances in front of a multi-sensor radar platform are presented.
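
    The Doppler-only measurement model that such an EKF linearizes can be sketched as below; the wavelength, state layout, and sign convention are assumptions for illustration, and the Jacobian is obtained by finite differences rather than analytically.

    ```python
    import numpy as np

    WAVELENGTH = 0.005  # metres, assumed 60 GHz millimetre-wave module

    def doppler_measurement(x, sensor_pos):
        """Predicted Doppler shift of one CW radar module for the state
        x = [px, py, pz, vx, vy, vz]; positive when the target approaches."""
        p, v = x[:3], x[3:]
        los = p - sensor_pos
        r = np.linalg.norm(los)
        range_rate = np.dot(v, los) / r          # dr/dt
        return -2.0 * range_rate / WAVELENGTH

    def numerical_jacobian(h, x, eps=1e-6):
        """Finite-difference row Jacobian used for the EKF linearization."""
        hx = h(x)
        J = np.zeros_like(x)
        for i in range(x.size):
            dx = np.zeros_like(x); dx[i] = eps
            J[i] = (h(x + dx) - hx) / eps
        return J

    x = np.array([0.2, 0.1, 0.5, -0.3, 0.0, -0.1])   # hypothetical target state
    sensor = np.array([0.0, 0.0, 0.0])
    H = numerical_jacobian(lambda s: doppler_measurement(s, sensor), x)
    ```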

  12. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient

    PubMed Central

    Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-01-01

    To meet ever higher accuracy and system reliability requirements, information fusion for multi-sensor systems is of increasing concern. Dempster–Shafer evidence theory (D–S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that pieces of evidence are independent of each other, which is often unrealistic. Ignoring the relationships between pieces of evidence may lead to unreasonable fusion results and even to wrong decisions. This assumption severely limits the practical application and further development of D–S evidence theory. In this paper, an innovative evidence fusion model that deals with dependent evidence based on the rank correlation coefficient is proposed. The model first uses the rank correlation coefficient to measure the degree of dependence between different pieces of evidence. Then, a total discount coefficient is obtained from the dependence degree, which also accounts for the reliability of the evidence. Finally, the discounted evidence fusion model is presented. An example illustrates the use and effectiveness of the proposed method. PMID:29035341

  13. The role of multisensor data fusion in neuromuscular control of a sagittal arm with a pair of muscles using actor-critic reinforcement learning method.

    PubMed

    Golkhou, V; Parnianpour, M; Lucas, C

    2004-01-01

    In this study, we consider the role of multisensor data fusion in neuromuscular control using an actor-critic reinforcement learning method. The model we use is a single-link system actuated by a pair of muscles that are excited with alpha and gamma signals. Information from various physiological sensors, such as proprioceptive, muscle spindle, and Golgi tendon organ signals, has been integrated to achieve an oscillatory movement with variable amplitude and frequency, while maintaining a stable movement with minimum metabolic cost and coactivation. The system is highly nonlinear in all its physical and physiological attributes. Transmission delays are included in the afferent and efferent neural paths to provide a more accurate representation of the reflex loops. The paper proposes a reinforcement learning method with an actor-critic architecture in place of the middle and low levels of the central nervous system (CNS). The actor in this structure is a two-layer feedforward neural network and the critic is a model of the cerebellum. The critic is trained by the State-Action-Reward-State-Action (SARSA) method and in turn trains the actor by supervised learning based on previous experiences. The reinforcement signal in SARSA is evaluated from the available alternatives using the concept of multisensor data fusion. The effectiveness and biological plausibility of the present model are demonstrated by several simulations. The system showed excellent tracking capability when the available sensor information was integrated. Adding a penalty on muscle activation resulted in much lower muscle coactivation while keeping the movement stable.
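
    The SARSA update that trains the critic can be written compactly; the tabular form and the state/action labels below are a simplification of the cerebellum-model critic described in the paper.

    ```python
    from collections import defaultdict

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
        """One SARSA step: move Q(s, a) toward the on-policy bootstrapped
        target r + gamma * Q(s', a'); the TD error is the reinforcement signal."""
        td_error = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]
        Q[(s, a)] += alpha * td_error
        return td_error

    # Hypothetical discretized arm states and muscle-excitation actions.
    Q = defaultdict(float)
    delta = sarsa_update(Q, s="flexed", a="excite_flexor", r=-0.2,
                         s_next="mid", a_next="excite_extensor")
    ```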

  14. Middle-term Metropolitan Water Availability Index Assessment Based on Synergistic Potentials of Multi-sensor Data

    EPA Science Inventory

    The impact of recent drought and water pollution episodes results in an acute need to project future water availability to assist water managers in water utility infrastructure management within many metropolitan regions. Separate drought and water quality indices previously deve...

  15. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association

    PubMed Central

    Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-01-01

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets’ state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems. PMID:29113085
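
    The Jacobian-free character of the filter comes from the third-degree spherical-radial cubature rule; a minimal sketch of the point generation is shown below, with a hypothetical constant-velocity state used only for illustration.

    ```python
    import numpy as np

    def cubature_points(mean, sqrt_cov):
        """Third-degree spherical-radial cubature points used by the (SR)CKF:
        2n equally weighted points generated from the state mean and a square
        root of the covariance, avoiding Jacobian computation entirely."""
        n = mean.size
        unit = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n)
        return mean[:, None] + sqrt_cov @ unit                   # one point per column

    mean = np.array([0.0, 10.0, 0.0, -5.0])          # hypothetical [x, vx, y, vy]
    S = np.linalg.cholesky(np.diag([4.0, 1.0, 4.0, 1.0]))
    pts = cubature_points(mean, S)                    # propagate through f(.) or h(.)
    weights = np.full(pts.shape[1], 1.0 / pts.shape[1])
    ```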

  16. Centralized Multi-Sensor Square Root Cubature Joint Probabilistic Data Association.

    PubMed

    Liu, Yu; Liu, Jun; Li, Gang; Qi, Lin; Li, Yaowen; He, You

    2017-11-05

    This paper focuses on the tracking problem of multiple targets with multiple sensors in a nonlinear cluttered environment. To avoid Jacobian matrix computation and scaling parameter adjustment, improve numerical stability, and acquire more accurate estimated results for centralized nonlinear tracking, a novel centralized multi-sensor square root cubature joint probabilistic data association algorithm (CMSCJPDA) is proposed. Firstly, the multi-sensor tracking problem is decomposed into several single-sensor multi-target tracking problems, which are sequentially processed during the estimation. Then, in each sensor, the assignment of its measurements to target tracks is accomplished on the basis of joint probabilistic data association (JPDA), and a weighted probability fusion method with square root version of a cubature Kalman filter (SRCKF) is utilized to estimate the targets' state. With the measurements in all sensors processed CMSCJPDA is derived and the global estimated state is achieved. Experimental results show that CMSCJPDA is superior to the state-of-the-art algorithms in the aspects of tracking accuracy, numerical stability, and computational cost, which provides a new idea to solve multi-sensor tracking problems.

  17. A multi-sensor scenario for coastal surveillance

    NASA Astrophysics Data System (ADS)

    van den Broek, A. C.; van den Broek, S. P.; van den Heuvel, J. C.; Schwering, P. B. W.; van Heijningen, A. W. P.

    2007-10-01

    Maritime borders and coastal zones are susceptible to threats such as drug trafficking, piracy, and the undermining of economic activities. At TNO Defence, Security and Safety, various studies aim at improving situational awareness in the coastal zone. In this study we focus on multi-sensor surveillance of the coastal environment. We present a study on improving classification results for small sea-surface targets using an advanced sensor suite and a scenario in which a small boat approaches the coast. A next-generation sensor suite mounted on a tower has been defined, consisting of a maritime surveillance and tracking radar system capable of producing range profiles and ISAR imagery of ships, an advanced infrared camera, and a laser range profiler. For this suite we have developed a multi-sensor classification procedure, which is used to evaluate the capabilities for recognizing and identifying non-cooperative ships in coastal waters. We have found that the different sensors provide complementary information, and each sensor has its own specific distance range in which it contributes most. A multi-sensor approach reduces the number of misclassifications, and reliable classification results are obtained earlier than with a single-sensor approach.

  18. An Embodied Multi-Sensor Fusion Approach to Visual Motion Estimation Using Unsupervised Deep Networks.

    PubMed

    Shamwell, E Jared; Nothwang, William D; Perlis, Donald

    2018-05-04

    Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76–357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1–20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.

  19. Experiential Learning of Robotics Fundamentals Based on a Case Study of Robot-Assisted Stereotactic Neurosurgery

    ERIC Educational Resources Information Center

    Faria, Carlos; Vale, Carolina; Machado, Toni; Erlhagen, Wolfram; Rito, Manuel; Monteiro, Sérgio; Bicho, Estela

    2016-01-01

    Robotics has been playing an important role in modern surgery, especially in procedures that require extreme precision, such as neurosurgery. This paper addresses the challenge of teaching robotics to undergraduate engineering students, through an experiential learning project of robotics fundamentals based on a case study of robot-assisted…

  20. Older adults' acceptance of a robot for partner dance-based exercise.

    PubMed

    Chen, Tiffany L; Bhattacharjee, Tapomayukh; Beer, Jenay M; Ting, Lena H; Hackney, Madeleine E; Rogers, Wendy A; Kemp, Charles C

    2017-01-01

    Partner dance has been shown to be beneficial for the health of older adults. Robots could potentially facilitate healthy aging by engaging older adults in partner dance-based exercise. However, partner dance involves physical contact between the dancers, and older adults would need to be accepting of partner dancing with a robot. Using methods from the technology acceptance literature, we conducted a study with 16 healthy older adults to investigate their acceptance of robots for partner dance-based exercise. Participants successfully led a human-scale wheeled robot with arms (i.e., a mobile manipulator) in a simple task, which we refer to as the Partnered Stepping Task (PST). Participants led the robot by maintaining physical contact and applying forces to the robot's end effectors. According to questionnaires, participants were generally accepting of the robot for partner dance-based exercise, tending to perceive it as useful, easy to use, and enjoyable. Participants tended to perceive the robot as easier to use after performing the PST with it. Through a qualitative data analysis of structured interview data, we also identified facilitators and barriers to acceptance of robots for partner dance-based exercise. Throughout the study, our robot used admittance control to successfully dance with older adults, demonstrating the feasibility of this method. Overall, our results suggest that robots could successfully engage older adults in partner dance-based exercise.

  1. Information Measures for Multisensor Systems

    DTIC Science & Technology

    2013-12-11

    permuted to generate spectra that were non-physical but preserved the entropy of the source spectra. Another 1000 spectra were constructed to mimic co... Research Laboratory (NRL) has yielded probabilistic models for spectral data that enable the computation of information measures such as entropy and... Keywords: chemical sensing; information theory; spectral data; information entropy; information divergence; mass spectrometry; infrared spectroscopy; multisensor

  2. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    Table-of-contents excerpt: Quad Trees for Image Representation and Processing; Databases (Definitions and Basic Concepts); Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database

  3. SenseCube--A Novel Inexpensive Wireless Multisensor for Physics Lab Experimentations

    ERIC Educational Resources Information Center

    Mehta, Vedant; Lane, Charles D.

    2018-01-01

    SenseCube is a multisensor capable of measuring many different real-time events and changes in environment. Most conventional sensors used in introductory-physics labs use their own software and have wires that must be attached to a computer or an alternate device to analyze the data. This makes the standard sensors time consuming, tedious, and…

  4. Perception for mobile robot navigation: A survey of the state of the art

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David

    1994-01-01

    In order for mobile robots to navigate safely in unmapped and dynamic environments, they must perceive their environment and decide on actions based on those perceptions. There are many different sensing modalities that can be used for mobile robot perception; the two most popular are ultrasonic sonar sensors and vision sensors. This paper examines the state of the art in sensory-based mobile robot navigation. The first issue in mobile robot navigation is safety. This paper summarizes several competing sonar-based obstacle avoidance techniques and compares them. Another issue in mobile robot navigation is determining the robot's position and orientation (sometimes called the robot's pose) in the environment. This paper examines several different classes of vision-based approaches to pose determination. One class of approaches uses detailed a priori models of the robot's environment. Another class of approaches triangulates using fixed, artificial landmarks. A third class of approaches builds maps using natural landmarks. Example implementations from each of these three classes are described and compared. Finally, the paper presents a completely implemented mobile robot system that integrates sonar-based obstacle avoidance with vision-based pose determination to perform a simple task.

  5. Multianalyte imaging in one-shot format sensors for natural waters.

    PubMed

    Lapresta-Fernández, A; Huertas, Rafael; Melgosa, Manuel; Capitán-Vallvey, L F

    2009-03-23

    A one-shot multisensor based on ionophore-chromoionophore chemistry for optical monitoring of potassium, magnesium and hardness in water is presented. The analytical procedure uses a black and white non-cooled CCD camera for image acquisition of the one-shot multisensor after reaction, followed by data treatment for quantitation using the grey value pixel average from a defined region of interest of each sensing area to build the analytical parameter 1-alpha. Under optimised experimental conditions, the procedure shows a large linear range, up to six orders of magnitude with the linearised model, and good detection limits: 9.92 x 10(-5) mM, 1.86 x 10(-3) mM and 1.30 x 10(-2) mg L(-1) of CaCO(3) for potassium, magnesium and hardness, respectively. This analysis system exhibits good precision in terms of relative standard deviation (RSD%), from 2.3 to 3.8 for potassium, from 5.0 to 6.8 for magnesium and from 5.4 to 5.9 for hardness. The trueness of this multisensor procedure was demonstrated by comparing it with results obtained by a DAD spectrophotometer used as a reference. Finally, it was satisfactorily applied to the analysis of these analytes in miscellaneous samples, such as water and beverage samples from different origins, validating the results against atomic absorption spectrometry (AAS) as the reference procedure.

  6. Instrumental intelligent test of food sensory quality as mimic of human panel test combining multiple cross-perception sensors and data fusion.

    PubMed

    Ouyang, Qin; Zhao, Jiewen; Chen, Quansheng

    2014-09-02

    Instrumental testing of food quality using perception sensors instead of a human panel test has recently attracted considerable attention. A novel cross-perception multi-sensor data fusion method imitating multiple mammalian perception modes was proposed for instrumental testing in this work. First, three mimic sensors, an electronic eye, an electronic nose and an electronic tongue, were used in sequence for data acquisition of rice wine samples. Then all data from the three different sensors were preprocessed and merged. Next, three cross-perception variables, i.e., color, aroma and taste, were constructed using principal component analysis (PCA) and multiple linear regression (MLR) and were used as the inputs of the models. MLR, a back-propagation artificial neural network (BPANN) and a support vector machine (SVM) were comparatively used for modeling, and the instrumental test of the comprehensive quality of the samples was achieved. Results showed that the proposed cross-perception multi-sensor data fusion was clearly superior to traditional data fusion methodologies and achieved a high correlation coefficient (>90%) with the human panel test results. This work demonstrated that instrumental testing based on cross-perception multi-sensor data fusion can actually mimic human testing behavior, and it is therefore of great significance for ensuring product quality and decreasing manufacturers' losses. Copyright © 2014 Elsevier B.V. All rights reserved.
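
    The construction of cross-perception variables can be sketched as follows: one principal component per instrument is extracted, and the three resulting variables feed a multiple linear regression. All sample counts, feature dimensions, and scores below are synthetic placeholders rather than the paper's data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 40                                             # hypothetical rice-wine samples
    X_eye, X_nose, X_tongue = (rng.normal(size=(n, k)) for k in (12, 18, 7))
    panel_score = rng.uniform(60, 95, size=n)          # human panel reference values

    # Build one cross-perception variable per instrument via PCA, then regress.
    features = np.column_stack([PCA(n_components=1).fit_transform(X)
                                for X in (X_eye, X_nose, X_tongue)])
    model = LinearRegression().fit(features, panel_score)
    print(model.score(features, panel_score))          # fit quality on training data
    ```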

  7. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.

    PubMed

    Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning

    2018-03-16

    A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled, and legged robots. To address the issue of a snake robot performing self-localization in its application environment without external orientation aids, an autonomous navigation method based on the snake robot's motion-characteristic constraints is proposed. The method achieves autonomous navigation of the snake robot, without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, the snake robot's motion characteristics are studied, the kinematics model is built, and the motion constraint characteristics and motion error propagation properties are analysed. Second, the snake robot's navigation layout is explored, a constraint criterion and the fixed relationship are proposed, and zero-state constraints are established based on the motion features and control modes of the snake robot. Finally, autonomous navigation and positioning are realized with an Extended-Kalman-Filter (EKF) position estimation method operating under these motion-characteristic constraints. Tests with the self-developed snake robot verify the proposed method, with a position error of less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in typical applications and can be extended to other similar multi-link robots.
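
    Zero-state motion constraints are typically injected into such a filter as pseudo-measurements. The sketch below applies a standard Kalman update with a zero-valued observation of the body-lateral velocity; the state layout and noise values are assumptions for illustration, not the paper's exact constraint set.

    ```python
    import numpy as np

    def constraint_update(x, P, H, R):
        """Kalman update with a zero-valued pseudo-measurement z = 0, used to
        inject motion-constraint information (e.g., no body-lateral velocity
        during a given gait phase) into the navigation filter."""
        z = np.zeros(H.shape[0])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(P.shape[0]) - K @ H) @ P
        return x_new, P_new

    # Hypothetical body-frame velocity state [vx, vy, vz]; constrain vy ~ 0.
    x = np.array([0.20, 0.05, 0.00])
    P = np.diag([0.04, 0.04, 0.04])
    H = np.array([[0.0, 1.0, 0.0]])
    x, P = constraint_update(x, P, H, R=np.array([[1e-4]]))
    ```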

  8. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification

    PubMed Central

    Pan, Jianjun

    2018-01-01

    This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification, and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, while the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four feature types gave the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively. PMID:29382073
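
    Feature-level combination followed by a random forest classifier, as used here, can be sketched with synthetic data as follows; the feature dimensions and class count are placeholders, and real inputs would be the per-pixel texture and coherence features extracted from Sentinel-1A.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_pixels = 500
    texture   = rng.normal(size=(n_pixels, 8))      # e.g., GLCM stats, 9 x 9 window
    coherence = rng.normal(size=(n_pixels, 2))      # repeat-pass coherence features
    labels    = rng.integers(0, 5, size=n_pixels)   # five urban land-cover classes

    X = np.hstack([texture, coherence])             # feature-level combination
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(rf, X, labels, cv=5).mean())
    ```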

  9. Designing speech-based interfaces for telepresence robots for people with disabilities.

    PubMed

    Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David

    2013-06-01

    People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.

  10. Smart-Pixel Array Processors Based on Optimal Cellular Neural Networks for Space Sensor Applications

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Sheu, Bing J.; Venus, Holger; Sandau, Rainer

    1997-01-01

    A smart-pixel cellular neural network (CNN) with hardware annealing capability, digitally programmable synaptic weights, and multisensor parallel interface has been under development for advanced space sensor applications. The smart-pixel CNN architecture is a programmable multi-dimensional array of optoelectronic neurons which are locally connected with their local neurons and associated active-pixel sensors. Integration of the neuroprocessor in each processor node of a scalable multiprocessor system offers orders-of-magnitude computing performance enhancements for on-board real-time intelligent multisensor processing and control tasks of advanced small satellites. The smart-pixel CNN operation theory, architecture, design and implementation, and system applications are investigated in detail. The VLSI (Very Large Scale Integration) implementation feasibility was illustrated by a prototype smart-pixel 5x5 neuroprocessor array chip of active dimensions 1380 micron x 746 micron in a 2-micron CMOS technology.

  11. Older adults’ acceptance of a robot for partner dance-based exercise

    PubMed Central

    Chen, Tiffany L.; Beer, Jenay M.; Ting, Lena H.; Hackney, Madeleine E.; Rogers, Wendy A.; Kemp, Charles C.

    2017-01-01

    Partner dance has been shown to be beneficial for the health of older adults. Robots could potentially facilitate healthy aging by engaging older adults in partner dance-based exercise. However, partner dance involves physical contact between the dancers, and older adults would need to be accepting of partner dancing with a robot. Using methods from the technology acceptance literature, we conducted a study with 16 healthy older adults to investigate their acceptance of robots for partner dance-based exercise. Participants successfully led a human-scale wheeled robot with arms (i.e., a mobile manipulator) in a simple task, which we refer to as the Partnered Stepping Task (PST). Participants led the robot by maintaining physical contact and applying forces to the robot’s end effectors. According to questionnaires, participants were generally accepting of the robot for partner dance-based exercise, tending to perceive it as useful, easy to use, and enjoyable. Participants tended to perceive the robot as easier to use after performing the PST with it. Through a qualitative data analysis of structured interview data, we also identified facilitators and barriers to acceptance of robots for partner dance-based exercise. Throughout the study, our robot used admittance control to successfully dance with older adults, demonstrating the feasibility of this method. Overall, our results suggest that robots could successfully engage older adults in partner dance-based exercise. PMID:29045408

  12. Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.

    PubMed

    Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji

    2018-06-01

    Here, we present a study on exoskeleton robot control based on inter-limb locomotor synergies using a robot control method developed to target hemiparesis. The robot control is based on inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with that for the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.

  13. The Robotic Decathlon: Project-Based Learning Labs and Curriculum Design for an Introductory Robotics Course

    ERIC Educational Resources Information Center

    Cappelleri, D. J.; Vitoroulis, N.

    2013-01-01

    This paper presents a series of novel project-based learning labs for an introductory robotics course that are developed into a semester-long Robotic Decathlon. The last three events of the Robotic Decathlon are used as three final one-week-long project tasks; these replace a previous course project that was a semester-long robotics competition.…

  14. Realtime motion planning for a mobile robot in an unknown environment using a neurofuzzy based approach

    NASA Astrophysics Data System (ADS)

    Zheng, Taixiong

    2005-12-01

    A neuro-fuzzy network based approach for robot motion in an unknown environment was proposed. In order to control the robot's motion in an unknown environment, its behavior was classified into moving toward the goal and avoiding obstacles. Then, according to the robot's dynamics and its behavioral characteristics in an unknown environment, fuzzy control rules were introduced to control the robot's motion. Finally, a six-layer neuro-fuzzy network was designed to map the robot's sensor readings to motion control commands. After being trained, the network may be used for robot motion control. Simulation results show that the proposed approach is effective for robot motion control in unknown environments.

  15. The use of automation and robotic systems to establish and maintain lunar base operations

    NASA Technical Reports Server (NTRS)

    Petrosky, Lyman J.

    1992-01-01

    Robotic systems provide a means of performing many of the operations required to establish and maintain a lunar base. They form a synergistic system when properly used in concert with human activities. This paper discusses the various areas where robotics and automation may be used to enhance lunar base operations. Robots are particularly well suited for surface operations (exterior to the base habitat modules) because they can be designed to operate in the extreme temperatures and vacuum conditions of the Moon (or Mars). In this environment, the capabilities of semi-autonomous robots would surpass that of humans in all but the most complex tasks. Robotic surface operations include such activities as long range geological and mineralogical surveys with sample return, materials movement in and around the base, construction of radiation barriers around habitats, transfer of materials over large distances, and construction of outposts. Most of the above operations could be performed with minor modifications to a single basic robotic rover. Within the lunar base habitats there are a few areas where robotic operations would be preferable to human operations. Such areas include routine inspections for leakage in the habitat and its systems, underground transfer of materials between habitats, and replacement of consumables. In these and many other activities, robotic systems will greatly enhance lunar base operations. The robotic systems described in this paper are based on what is realistically achievable with relatively near term technology. A lunar base can be built and maintained if we are willing.

  16. Diffusion of robotics into clinical practice in the United States: process, patient safety, learning curves, and the public health.

    PubMed

    Mirheydar, Hossein S; Parsons, J Kellogg

    2013-06-01

    Robotic technology disseminated into urological practice without robust comparative effectiveness data. To review the diffusion of robotic surgery into urological practice. We performed a comprehensive literature review focusing on diffusion patterns, patient safety, learning curves, and comparative costs for robotic radical prostatectomy, partial nephrectomy, and radical cystectomy. Robotic urologic surgery diffused in patterns typical of novel technology spreading among practicing surgeons. Robust evidence-based data comparing outcomes of robotic to open surgery were sparse. Although initial Level 3 evidence for robotic prostatectomy observed complication outcomes similar to open prostatectomy, subsequent population-based Level 2 evidence noted an increased prevalence of adverse patient safety events and genitourinary complications among robotic patients during the early years of diffusion. Level 2 evidence indicated comparable to improved patient safety outcomes for robotic compared to open partial nephrectomy and cystectomy. Learning curve recommendations for robotic urologic surgery have drawn exclusively on Level 4 evidence and subjective, non-validated metrics. The minimum number of cases required to achieve competency for robotic prostatectomy has increased to unrealistically high levels. Most comparative cost-analyses have demonstrated that robotic surgery is significantly more expensive than open or laparoscopic surgery. Evidence-based data are limited but suggest an increased prevalence of adverse patient safety events for robotic prostatectomy early in the national diffusion period. Learning curves for robotic urologic surgery are subjective and based on non-validated metrics. The urological community should develop rigorous, evidence-based processes by which future technological innovations may diffuse in an organized and safe manner.

  17. Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration.

    PubMed

    Pycinski, Bartlomiej; Czajkowska, Joanna; Badura, Pawel; Juszczyk, Jan; Pietka, Ewa

    2016-01-01

    A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, including data registration, is crucial for the development of computer-aided diagnosis and therapy. Surface tracking systems based on optical trackers already play an important role in surgical procedure planning. However, new modalities such as time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become part of a computer-aided surgery set-up. Connecting different acquisition systems promises to provide valuable support for operating room procedures, and a detailed analysis of the accuracy of such multi-sensor positioning systems is therefore needed. We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology comprises the optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and the registration technique. The data pre-processing yields a surface in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point (ICP) algorithm. The experiments validate the registration of each pair of modalities/sensors on phantoms of four different human organs in terms of the Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, whereas the worst was obtained in experiments involving the ToF-camera. The obtained accuracies encourage further development of multi-sensor systems. The discussion of the system's limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers.
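
    The ICP-based alignment can be illustrated with a minimal nearest-neighbour plus Kabsch step, shown below on synthetic point clouds standing in for the ToF and CT-derived surfaces.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(source, target):
        """One Iterative Closest Point step: pair each source point with its
        nearest target point, then solve the rigid transform by SVD (Kabsch)."""
        nn = cKDTree(target).query(source)[1]
        matched = target[nn]
        mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
        H = (source - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        return source @ R.T + t, R, t

    # Hypothetical ToF point cloud registered onto CT-derived surface points.
    rng = np.random.default_rng(3)
    ct_surface = rng.uniform(size=(300, 3))
    tof_cloud = ct_surface[:200] + rng.normal(scale=0.01, size=(200, 3))
    for _ in range(20):
        tof_cloud, R, t = icp_step(tof_cloud, ct_surface)
    ```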

  18. Geocoding and stereo display of tropical forest multisensor datasets

    NASA Technical Reports Server (NTRS)

    Welch, R.; Jordan, T. R.; Luvall, J. C.

    1990-01-01

    Concern about the future of tropical forests has led to a demand for geocoded multisensor databases that can be used to assess forest structure, deforestation, thermal response, evapotranspiration, and other parameters linked to climate change. In response to studies being conducted at the Braulino Carrillo National Park, Costa Rica, digital satellite and aircraft images recorded by Landsat TM, SPOT HRV, Thermal Infrared Multispectral Scanner, and Calibrated Airborne Multispectral Scanner sensors were placed in register using the Landsat TM image as the reference map. Despite problems caused by relief, multitemporal datasets, and geometric distortions in the aircraft images, registration was accomplished to within ±20 m (±1 data pixel). A digital elevation model constructed from a multisensor Landsat TM/SPOT stereopair proved useful for generating perspective views of the rugged, forested terrain.

  19. Towards operational multisensor registration

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.

    1991-01-01

    To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.

  20. Laser-based pedestrian tracking in outdoor environments by multiple mobile robots.

    PubMed

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-10-29

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures.
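
    Covariance intersection fuses estimates whose cross-correlation is unknown, which is exactly the situation when several robots exchange tracks of the same pedestrian. A minimal two-dimensional sketch follows; the positions and covariances are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def covariance_intersection(x1, P1, x2, P2):
        """Fuse two track estimates whose cross-correlation is unknown;
        the weight omega is chosen to minimize the fused covariance trace."""
        def fused(omega):
            P_inv = omega * np.linalg.inv(P1) + (1 - omega) * np.linalg.inv(P2)
            P = np.linalg.inv(P_inv)
            x = P @ (omega * np.linalg.inv(P1) @ x1 +
                     (1 - omega) * np.linalg.inv(P2) @ x2)
            return x, P
        omega = minimize_scalar(lambda w: np.trace(fused(w)[1]),
                                bounds=(0.0, 1.0), method="bounded").x
        return fused(omega)

    # Hypothetical pedestrian position estimates from two robots (metres).
    x_a, P_a = np.array([4.8, 2.1]), np.diag([0.20, 0.05])
    x_b, P_b = np.array([5.1, 2.0]), np.diag([0.06, 0.25])
    x_f, P_f = covariance_intersection(x_a, P_a, x_b, P_b)
    ```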

  1. Understanding of Android-Based Robotic and Game Structure

    NASA Astrophysics Data System (ADS)

    Phongtraychack, A.; Syryamkin, V.

    2018-05-01

    The development of an android with an impressive lifelike appearance and behavior has been a long-standing goal in robotics, and smartphone-based robotics offers a new and exciting approach for research and education. Recent years have seen progress in many technologies that allow such androids to be created; examples include the autonomous Erica android system, which is capable of conversational interaction, and speech synthesis technologies. The behavior of an Android-based robot can run on the phone while the robot performs a task outdoors. In this paper, we present an overview and an understanding of the Android-based robotic and game structure platform for research and education.

  2. Launchable and Retrievable Tetherobot

    NASA Technical Reports Server (NTRS)

    Younse, Paulo; Aghazarian, Hrand

    2010-01-01

    A proposed robotic system for scientific exploration of rough terrain would include a stationary or infrequently moving larger base robot, to which would be tethered a smaller hopping robot of the type described in the immediately preceding article. The two-robot design would extend the reach of the base robot, making it possible to explore nearby locations that might otherwise be inaccessible or too hazardous for the base robot. The system would include a launching mechanism and a motor-driven reel on the larger robot. The outer end of the tether would be attached to the smaller robot; the inner end of the tether would be attached to the reel. The figure depicts the launching and retrieval process. The launching mechanism would aim and throw the smaller robot toward a target location, and the tether would be paid out from the reel as the hopping robot flew toward the target. Upon completion of exploratory activity at the target location, the smaller robot would be made to hop and, in a coordinated motion, the tether would be wound onto the reel to pull the smaller robot back to the larger one.

  3. Event-based estimation of water budget components using the network of multi-sensor capacitance probes

    USDA-ARS?s Scientific Manuscript database

    A time-scale-free approach was developed for estimation of water fluxes at boundaries of monitoring soil profile using water content time series. The approach uses the soil water budget to compute soil water budget components, i.e. surface-water excess (Sw), infiltration less evapotranspiration (I-E...

  4. Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory

    PubMed Central

    Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Xu, Jing; Zheng, Kehong

    2015-01-01

    In order to efficiently and accurately identify the cutting condition of a shearer, this paper proposes an intelligent multi-sensor data fusion identification method using a parallel quasi-Newton neural network (PQN-NN) and Dempster-Shafer (DS) theory. The vibration acceleration signals and current signal of six cutting conditions were collected from a self-designed experimental system, and state features were extracted from the intrinsic mode functions (IMFs) obtained by ensemble empirical mode decomposition (EEMD). In the experiment, three classifiers were trained and tested on the selected features of the measured data, and DS theory was used to combine the identification results of the three single classifiers. Comparisons with other methods were also carried out. The experimental results indicate that the proposed method achieves higher detection accuracy and credibility than the competing algorithms. Finally, an industrial application example from a fully mechanized coal mining face was presented to demonstrate the effectiveness of the proposed system. PMID:26580620

  5. Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network

    PubMed Central

    Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan

    2014-01-01

    Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which assume a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multisensor single-target, and single-sensor multi-target problems. For the multisensor multi-target problem, which is NP-complete, the candidate orientations are those to which a sensor can rotate to cover each target falling in its FoV disk, and a genetic algorithm is used to obtain an approximate minimum subset of sensors that covers all the targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
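
    How a directional sensor's expected coverage of a target might be scored from the deflection angle and the distance can be sketched as below; the particular decay law, field-of-view width, and radius are illustrative assumptions, not the paper's exact definition.

    ```python
    import numpy as np

    def expected_coverage(sensor_pos, orientation_deg, fov_deg, radius, target_pos):
        """Toy expected-coverage score for a directional multimedia sensor:
        1 when the target lies on the optical axis at zero range, decaying
        with both deflection angle and distance, 0 outside the FoV disk."""
        d = np.asarray(target_pos, dtype=float) - np.asarray(sensor_pos, dtype=float)
        dist = np.linalg.norm(d)
        if dist > radius:
            return 0.0
        bearing = np.degrees(np.arctan2(d[1], d[0]))
        deflection = abs((bearing - orientation_deg + 180.0) % 360.0 - 180.0)
        if deflection > fov_deg / 2.0:
            return 0.0
        return (1.0 - deflection / (fov_deg / 2.0)) * (1.0 - dist / radius)

    # Hypothetical sensor at the origin facing 45 degrees, 60-degree FoV, 30 m radius.
    print(expected_coverage((0, 0), 45.0, 60.0, 30.0, (10, 12)))
    ```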

  6. A Locomotion Intent Prediction System Based on Multi-Sensor Fusion

    PubMed Central

    Chen, Baojun; Zheng, Enhao; Wang, Qining

    2014-01-01

    Locomotion intent prediction is essential for the control of powered lower-limb prostheses to realize smooth locomotion transitions. In this research, we develop a multi-sensor fusion based locomotion intent prediction system, which can recognize the current locomotion mode and detect locomotion transitions in advance. Seven able-bodied subjects were recruited for this research. Signals from two foot pressure insoles and three inertial measurement units (one on the thigh, one on the shank and the other on the foot) are measured. A two-level recognition strategy with a linear discriminant classifier is used for the recognition. Six kinds of locomotion modes and ten kinds of locomotion transitions are tested in this study. Recognition accuracy during steady locomotion periods (i.e., no locomotion transitions) is 99.71% ± 0.05% for the seven able-bodied subjects. During locomotion transition periods, all the transitions are correctly detected and most of them can be detected before transitioning to new locomotion modes. No significant deterioration in recognition performance is observed in the five hours following system training, and only a small number of experimental trials is required to train reliable classifiers. PMID:25014097

  7. A locomotion intent prediction system based on multi-sensor fusion.

    PubMed

    Chen, Baojun; Zheng, Enhao; Wang, Qining

    2014-07-10

    Locomotion intent prediction is essential for the control of powered lower-limb prostheses to realize smooth locomotion transitions. In this research, we develop a multi-sensor fusion based locomotion intent prediction system, which can recognize the current locomotion mode and detect locomotion transitions in advance. Seven able-bodied subjects were recruited for this research. Signals from two foot pressure insoles and three inertial measurement units (one on the thigh, one on the shank and the other on the foot) are measured. A two-level recognition strategy with a linear discriminant classifier is used for the recognition. Six kinds of locomotion modes and ten kinds of locomotion transitions are tested in this study. Recognition accuracy during steady locomotion periods (i.e., no locomotion transitions) is 99.71% ± 0.05% for the seven able-bodied subjects. During locomotion transition periods, all the transitions are correctly detected and most of them can be detected before transitioning to new locomotion modes. No significant deterioration in recognition performance is observed in the five hours following system training, and only a small number of experimental trials is required to train reliable classifiers.

  8. Cooperative intelligent robotics in space III; Proceedings of the Meeting, Boston, MA, Nov. 16-18, 1992

    NASA Technical Reports Server (NTRS)

    Erickson, Jon D. (Editor)

    1992-01-01

    The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.

  9. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix

    2017-12-01

    Estimating crop biophysical and biochemical parameters with high accuracy at low-cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known on the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites are rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters including chlorophyll content and nitrogen concentration, and biophysical parameters including Leaf Area Index (LAI), above ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was advanced, and a model to extract the vegetation fraction was developed. Then, spectral indices/features were combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) For biochemical variable estimation, multispectral and thermal data fusion provided the best estimate for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively) and RGB color information based indices and multispectral data fusion exhibited the largest RMSE 22.6%; the highest accuracy for Chl a + b content estimation was obtained by fusion of information from all three sensors with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above mentioned plant traits of soybean from multi-sensor data fusion, ELR yields promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multiple sensor data within a machine learning framework can provide relatively accurate estimation of plant traits and provide valuable insight for high spatial precision in agriculture and plant stress assessment.
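
    A cross-validated PLSR baseline of the kind compared in this study can be sketched as follows; the fused predictor matrix and the LAI response below are synthetic stand-ins for the UAS-derived vegetation indices, canopy height, and temperature features.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(4)
    n_plots = 80
    # Hypothetical plot-level predictors fused from RGB, multispectral and
    # thermal imagery (vegetation indices, CSM canopy height, temperature).
    X = rng.normal(size=(n_plots, 15))
    lai = 2.0 + 0.8 * X[:, 0] - 0.3 * X[:, 5] + rng.normal(scale=0.2, size=n_plots)

    pls = PLSRegression(n_components=5)
    lai_pred = cross_val_predict(pls, X, lai, cv=5).ravel()
    rmse = np.sqrt(mean_squared_error(lai, lai_pred))
    print(f"cross-validated RMSE: {rmse:.3f}")
    ```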

  10. Virtual Reality Based Support System for Layout Planning and Programming of an Industrial Robotic Work Cell

    PubMed Central

    Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated in current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface, so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes, which are VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user is able to attempt numerous times to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual reality based solution approach can be implemented in an industrial robotic work cell. PMID:25360663

  11. Virtual reality based support system for layout planning and programming of an industrial robotic work cell.

    PubMed

    Yap, Hwa Jen; Taha, Zahari; Dawal, Siti Zawiah Md; Chang, Siow-Wee

    2014-01-01

    Traditional robotic work cell design and programming are considered inefficient and outdated in current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface, so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes, which are VR-Robotic Work Cell Layout (VR-RoWL) and VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed for inexperienced users to generate robot commands without damaging the robot or interrupting the production line. The user is able to attempt numerous times to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual reality based solution approach can be implemented in an industrial robotic work cell.

  12. Concept and design philosophy of a person-accompanying robot

    NASA Astrophysics Data System (ADS)

    Mizoguchi, Hiroshi; Shigehara, Takaomi; Goto, Yoshiyasu; Hidai, Ken-ichi; Mishima, Taketoshi

    1999-01-01

    This paper proposes a person-accompanying robot as a novel human-collaborative robot. The person-accompanying robot is a legged mobile robot that can follow a person using its vision. Toward the future aging society, human collaboration and human support are required as novel applications of robots. Such human-collaborative robots share the same space with humans, but conventional robots are isolated from humans and lack the capability to observe them. To collaborate with and support humans properly, a human-collaborative robot must be able to observe and recognize humans; studying this human-observing function is crucial to realizing novel robots such as service and pet robots. The authors are currently implementing a prototype of the proposed accompanying robot. As a basis for the human-observing function of the prototype robot, we have realized face tracking utilizing skin color extraction and correlation-based tracking. We have also developed a method for the robot to pick up human voice clearly and remotely by utilizing microphone arrays. Results of these preliminary studies suggest the feasibility of the proposed robot.

  13. Limited Scope Design Study for Multi-Sensor Towbody

    DTIC Science & Technology

    2016-06-01

    FINAL REPORT: Limited Scope Design Study for Multi-Sensor Towbody, SERDP Project MR-2501, June 2016. Dr. Kevin Williams, Tim McGinnis. Prepared under contract to the Department of Defense Strategic Environmental Research and Development Program (SERDP).

  14. Formulating an image matching strategy for terrestrial stem data collection using a multisensor video system

    Treesearch

    Neil A. Clark

    2001-01-01

    A multisensor video system has been developed incorporating a CCD video camera, a 3-axis magnetometer, and a laser-rangefinding device, for the purpose of measuring individual tree stems. While preliminary results show promise, some changes are needed to improve the accuracy and efficiency of the system. Image matching is needed to improve the accuracy of length...

  15. Multisensor fusion for the detection of mines and minelike targets

    NASA Astrophysics Data System (ADS)

    Hanshaw, Terilee

    1995-06-01

    The US Army's Communications and Electronics Command through the auspices of its Night Vision and Electronics Sensors Directorate (CECOM-NVESD) is actively applying multisensor techniques to the detection of mine targets. This multisensor research results from the 'detection activity' with its broad range of operational conditions and targets. Multisensor operation justifies significant attention by yielding high target detection and low false alarm statistics. Furthermore, recent advances in sensor and computing technologies make its practical application realistic and affordable. The mine detection field-of-endeavor has, since its WWI beginnings, investigated the known spectra for applicable mine observation phenomena. Countless sensors, algorithms, processors, networks, and other techniques have been investigated to determine candidacy for mine detection. CECOM-NVESD efforts have addressed a wide range of sensors spanning the spectrum from gravity field perturbations, magnetic field disturbances, seismic sounding, electromagnetic fields, and earth penetrating radar imagery to infrared/visible/ultraviolet surface imaging technologies. Supplementary analysis has considered sensor candidate applicability by testing under field conditions (versus laboratory), in determination of fieldability. As these field conditions directly affect the probability of detection and false alarms, sensor employment and design must be considered. Consequently, as a given sensor's performance is influenced directly by the operational conditions, tradeoffs are necessary. At present, mass produced and fielded mine detection techniques are limited to those incorporating a single sensor/processor methodology, such as pulse induction and magnetometry, as found in hand held detectors. The most sensitive fielded systems can detect minute metal components in small mine targets but result in very high false alarm rates, reducing velocity in operational environments. Furthermore, the actual speed of advance for the entire mission (convoy, movement to engagement, etc.) is determined by the level of difficulty presented in clearance or avoidance activities required in response to the potential 'targets' marked throughout a detection activity. Therefore the application of fielded hand held systems to convoy operations is clearly impractical. CECOM-NVESD efforts are presently seeking to overcome these operational limitations by substantially increasing speed of detection while reducing the false alarm rate through the application of multisensor techniques. The CECOM-NVESD application of multisensor techniques through integration/fusion methods will be defined in this paper.

  16. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.
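
    The outer design search described above can be sketched independently of any particular planner. In the toy sketch below, candidate tube-length sets are sampled at random and scored by a placeholder reachability check; the parameter ranges and the spherical-workspace stand-in are assumptions for illustration only, and a real implementation would call a sampling-based motion planner for each candidate design.

```python
# Hedged sketch of a design-space search scored by reachable goal fraction.
import numpy as np

def reachable_fraction(design, goal_points):
    # Placeholder: treat the workspace as a sphere whose radius grows with
    # total tube length; a real system would run a motion planner instead.
    radius = design.sum()
    return np.mean(np.linalg.norm(goal_points, axis=1) <= radius)

def search_designs(goal_points, n_candidates=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_score = None, -1.0
    for _ in range(n_candidates):
        design = rng.uniform(0.02, 0.10, size=3)   # three tube lengths [m], assumed range
        score = reachable_fraction(design, goal_points)
        if score > best_score:
            best, best_score = design, score
    return best, best_score

goals = np.random.uniform(-0.15, 0.15, size=(1000, 3))   # synthetic goal region samples
design, coverage = search_designs(goals)
print(design, coverage)
```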

  17. Bearing-based localization for leader-follower formation control

    PubMed Central

    Han, Qing; Ren, Shan; Lang, Hao; Zhang, Changliang

    2017-01-01

    The observability of the leader robot system and the leader-follower formation control are studied. First, the nonlinear observability is studied for when the leader robot observes landmarks. Second, the system is shown to be completely observable when the leader robot observes two different landmarks. When the leader robot system is observable, multi-robots can rapidly form and maintain a formation based on the bearing-only information that the follower robots observe from the leader robot. Finally, simulations confirm the effectiveness of the proposed formation control. PMID:28426706

  18. Remote Sensing-Based, 5-m, Vegetation Distributions, Kougarok Study Site, Seward Peninsula, Alaska, ca. 2009 - 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langford, Zachary; Kumar, Jitendra; Hoffman, Forrest

    A multi-sensor remote sensing-based deep learning approach was developed for generating high-resolution (5 m) vegetation maps for the western Alaskan Arctic on the Seward Peninsula, Alaska. This data was developed using the fusion of hyperspectral, multispectral, and terrain datasets. The current data is located in the Kougarok watershed but we plan to expand this over the Seward Peninsula.

  19. Characteristics of Behavior of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Sato, Shigehiko; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has many advantages in comparison with a single-robot system: it is able to adapt to various circumstances and has flexibility for a variety of tasks. However, controlling each robot remains a problem, even though methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world scenes, and the emotion and sensitivity of robots have been widely studied. In this study, a human emotion model based on psychological interaction was adapted to a multi-robot system to achieve methods for organizing multiple robots. The characteristics of the behavior of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even though the system has a rather simple configuration, and it shows flexibility in various circumstances. An additional experiment with actual robots will be conducted based on the emotion model.

  20. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

    A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597

  1. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
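
    The goal-inference step can be illustrated with a few lines of Bayes' rule: given likelihoods of the partially observed motion under each learned action model, a posterior over goals is computed, and a flat posterior triggers a request for human assistance. The goal names, likelihood values, and the 0.6 confidence threshold below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of goal inference by Bayes' rule over learned action models.
import numpy as np

goals = ["stack_blocks", "sort_by_color", "clear_table"]     # hypothetical goals
prior = np.array([1 / 3, 1 / 3, 1 / 3])
# Likelihood of the partially observed human motion under each goal's
# learned forward model (stand-in numbers).
likelihood = np.array([0.02, 0.15, 0.05])

posterior = prior * likelihood
posterior /= posterior.sum()

best = goals[np.argmax(posterior)]
# If the posterior is too flat, ask a human instead of imitating (assumed threshold).
if posterior.max() < 0.6:
    print("uncertain -> request human assistance", dict(zip(goals, posterior.round(3))))
else:
    print("imitate goal:", best)
```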

  2. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

    PubMed Central

    Chung, Michael Jae-Yoon; Friesen, Abram L.; Fox, Dieter; Meltzoff, Andrew N.; Rao, Rajesh P. N.

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366

  3. [Force-based local navigation in robot-assisted implantation bed preparation in the lateral skull base. An experimental study].

    PubMed

    Plinkert, P K; Federspil, P A; Plinkert, B; Henrich, D

    2002-03-01

    Excellent precision, freedom from fatigue, and reproducibility are the main characteristics of robots in the operating theatre. Because of these facts, their use for surgery in the lateral skull base is of great interest. In recent experiments we determined process parameters for robot-assisted reaming of a cochlear implant bed and for a mastoidectomy. These results suggested that the parameters for drilling with the robot needed to be optimized. We therefore implemented a suitable reaming curve, derived from the geometrical data of the implant, and a force-controlled process control for robot-assisted reaming at the lateral skull base. Experiments were performed with an industrial robot on animal and human skull base specimens. Through online force detection and feedback of the sensory data, the reaming with the robot was controlled. When force values rose above a defined limit, the feed rate was automatically regulated. Furthermore, we were able to detect contact of the drill with the dura mater by analyzing the force values. With the new computer program the desired implant bed was prepared exactly. Our examinations showed successful reaming of an implant bed in the lateral skull base with a robot. Because of the force-controlled reaming process, local navigation is possible and enables careful drilling with a robot.

  4. Laser-Based Pedestrian Tracking in Outdoor Environments by Multiple Mobile Robots

    PubMed Central

    Ozaki, Masataka; Kakimuma, Kei; Hashimoto, Masafumi; Takahashi, Kazuhiko

    2012-01-01

    This paper presents an outdoor laser-based pedestrian tracking system using a group of mobile robots located near each other. Each robot detects pedestrians from its own laser scan image using an occupancy-grid-based method, and the robot tracks the detected pedestrians via Kalman filtering and global-nearest-neighbor (GNN)-based data association. The tracking data is broadcast to multiple robots through intercommunication and is combined using the covariance intersection (CI) method. For pedestrian tracking, each robot identifies its own posture using real-time-kinematic GPS (RTK-GPS) and laser scan matching. Using our cooperative tracking method, all the robots share the tracking data with each other; hence, individual robots can always recognize pedestrians that are invisible to any other robot. The simulation and experimental results show that cooperative tracking provides better tracking performance than conventional individual tracking. Our tracking system functions in a decentralized manner without any central server, and therefore, this provides a degree of scalability and robustness that cannot be achieved by conventional centralized architectures. PMID:23202171
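
    The covariance intersection (CI) step mentioned above can be sketched compactly: two track estimates with unknown cross-correlation are fused by a convex combination of their inverse covariances, with the mixing weight chosen to minimize the trace of the fused covariance. The example states, covariances, and the simple grid search over the weight are illustrative assumptions.

```python
# Minimal covariance intersection sketch for fusing two track estimates.
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=50):
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P_inv = w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(Pa) @ xa + (1 - w) * np.linalg.inv(Pb) @ xb)
            best = (x, P)
    return best

xa, Pa = np.array([2.0, 1.0]), np.diag([0.5, 0.8])   # track from robot A (made up)
xb, Pb = np.array([2.3, 0.9]), np.diag([0.9, 0.4])   # track from robot B (made up)
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
print(x_fused, np.trace(P_fused))
```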

  5. A tracked robot with novel bio-inspired passive "legs".

    PubMed

    Sun, Bo; Jing, Xingjian

    2017-01-01

    For track-based robots, an important aspect is the suspension design, which determines the trafficability and comfort of the whole system. The trafficability limits the robot's working capability, and the riding comfort limits the robot's working effectiveness, especially when sensitive instruments are mounted on or operated from the robot. To these ends, a track-based robot equipped with a novel passive bio-inspired suspension is designed and studied systematically in this paper. Animals and insects have very special leg or limb structures which are good for motion control and adaptable to different environments. Inspired by this, a new track-based robot is designed with novel "legs" for connecting the loading wheels to the robot body. Each leg is designed with passive structures and can achieve very high loading capacity but low dynamic stiffness, so that the robot can move on rough ground in a manner similar to a multi-legged animal or insect. Therefore, the trafficability and riding comfort can be significantly improved without losing loading capacity. The new track-based robot can be well applied to various engineering tasks, providing a stable moving platform of high mobility, better trafficability and excellent loading capacity.

  6. System design of a hand-held mobile robot for craniotomy.

    PubMed

    Kane, Gavin; Eggers, Georg; Boesecke, Robert; Raczkowsky, Jörg; Wörn, Heinz; Marmulla, Rüdiger; Mühling, Joachim

    2009-01-01

    This contribution reports the development and initial testing of a Mobile Robot System for Surgical Craniotomy, the Craniostar. A kinematic system based on a unicycle robot is analysed to provide local positioning through two spiked wheels gripping directly onto a patient's skull. A control scheme based on shared control between the surgeon and the robot is employed in a hand-held design that is tested initially on plastic phantom and swine skulls. Results indicate that the system has substantially lower risk than present robotically assisted craniotomies, and despite being a hand-held mobile robot, the Craniostar is still capable of sub-millimetre accuracy in tracking along a trajectory and thus achieving an accurate transfer of the pre-surgical plan to the operating room procedure, without the large impact of current medical robots based on modified industrial robots.

  7. Towards Simpler Custom and OpenSearch Services for Voluminous NEWS Merged A-Train Data (Invited)

    NASA Astrophysics Data System (ADS)

    Hua, H.; Fetzer, E.; Braverman, A. J.; Lewis, S.; Henderson, M. L.; Guillaume, A.; Lee, S.; de La Torre Juarez, M.; Dang, H. T.

    2010-12-01

    To simplify access to large and complex satellite data sets for climate analysis and model verification, we developed web services that are used to study long-term and global-scale trends in climate, the water and energy cycle, and weather variability. A related NASA Energy and Water Cycle Study (NEWS) task has created merged NEWS Level 2 data from multiple instruments in NASA’s A-Train constellation of satellites. We used these data to enable the creation of climatologies that include correlation between observed temperature, water vapor and cloud properties from the A-Train sensors. Instead of imposing on the user an often rigid and limiting web-based analysis environment, we recognize the need for simple and well-designed services so that users can perform analysis in their own familiar computing environments. Custom on-demand services were developed to improve data accessibility of voluminous multi-sensor data. Services enabling geospatial, geographical, and multi-sensor parameter subsets of the data, as well as a custom time-averaged Level 3 service, will be presented. We will also show how a Level 3Q data reduction approach can be used to help “browse” the voluminous multi-sensor Level 2 data. An OpenSearch capability with full text + space + time search of data products will also be presented as an approach to facilitate interoperability with other data systems. We will present our experiences in improving usability as well as strategies for facilitating interoperability with other data systems.

  8. Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving

    PubMed Central

    Elfring, Jos; Appeldoorn, Rein; van den Dries, Sjoerd; Kwakkernaat, Maurice

    2016-01-01

    The number of perception sensors on automated vehicles increases due to the increasing number of advanced driver assistance system functions and their increasing complexity. Furthermore, fail-safe systems require redundancy, thereby increasing the number of sensors even further. A one-size-fits-all multisensor data fusion architecture is not realistic due to the enormous diversity in vehicles, sensors and applications. As an alternative, this work presents a methodology that can be used to effectively come up with an implementation to build a consistent model of a vehicle’s surroundings. The methodology is accompanied by a software architecture. This combination minimizes the effort required to update the multisensor data fusion system whenever sensors or applications are added or replaced. A series of real-world experiments involving different sensors and algorithms demonstrates the methodology and the software architecture. PMID:27727171

  9. Research and implementation of a new 6-DOF light-weight robot

    NASA Astrophysics Data System (ADS)

    Tao, Zihang; Zhang, Tao; Qi, Mingzhong; Ji, Junhui

    2017-06-01

    Traditional industrial robots have some weaknesses such as low payload-to-weight ratio, high power consumption and high cost. These drawbacks limit their application in areas such as special applications, service robots and surgical robots. To address these shortcomings, a new kind of 6-DOF light-weight robot was designed based on modular joints and modular construction. This paper discusses the general requirements of light-weight robots, and based on these requirements the novel robot is designed. The new robot is described from two aspects: mechanical design and control system. A prototype robot was developed and a joint performance test platform was designed. Position and velocity tests were conducted to evaluate the performance of the prototype robot. Test results showed that the prototype worked well.

  10. Implementation of a landslide early warning system based on near-real-time monitoring, multisensor mapping and geophysical measurements

    NASA Astrophysics Data System (ADS)

    Teza, Giordano; Galgaro, Antonio; Francese, Roberto; Ninfo, Andrea; Mariani, Rocco

    2017-04-01

    An early warning system has been implemented to monitor the Perarolo di Cadore landslide (North-Eastern Italian Alps), which is a slump whose induced risk is fairly high because a slope collapse could form a temporary dam on the underlying torrent and, therefore, could directly threaten the nearby village. A robotic total station (RTS) measures, with a 6-hour return interval, the positions of 23 retro-reflectors placed on the landslide upper and middle sectors. The landslide's kinematical behavior derived from these near-real-time (NRT) surface displacements is interpreted on the basis of available geomorphological and geological information, geometrical data provided by some laser scanning and photogrammetric surveys, and a landslide model obtained by means of 3D Electrical Resistivity Tomography (3D ERT) measurements. In this way, an analysis of the time series provided by the RTS and a pluviometer, which cover several years, allows the definition of some pre-alert and alert kinematical and rainfall thresholds. These thresholds, as well as the corresponding operational recommendations, are currently used for early warning purposes by the Authorities involved in risk management for the Perarolo landslide. It should be noted that, as new RTS and pluviometric data become available, the thresholds can be updated and, therefore, a fine tuning of the early warning system can be carried out in order to improve its performance. Although the proposed approach has been implemented in a particular case, it can be used to develop an early warning system based on NRT data in any site where a landslide threatens infrastructures and/or villages that cannot be relocated.

  11. Hypothesis Testing Using Spatially Dependent Heavy Tailed Multisensor Data

    DTIC Science & Technology

    2014-12-01

    HYPOTHESIS TESTING USING SPATIALLY DEPENDENT HEAVY-TAILED MULTISENSOR DATA. Office of Research, 113 Bowne Hall, Syracuse, NY 13244-1200. Recoverable fragments: surrogate data consistent with the null hypothesis of linearity can be used to estimate the distribution of a test statistic that can discriminate between the null and alternative hypotheses; a figure caption describes a test for nonlinearity in which a histogram is generated using the surrogate data and the statistic of the original time series is represented by the solid line.

  12. Geometric Factors in Target Positioning and Tracking

    DTIC Science & Technology

    2009-07-01

    Fragments of the report's reference list: Y. Bar-Shalom and X.R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing, Storrs, CT, 1995; S. Blackman and R. Popoli, Design...; Multitarget-Multisensor Tracking: Applications and Advances, Vol. 2, Y. Bar-Shalom (Ed.), 325-392, Artech House, Norwood, MA, 1999; B. Ristic...; R. Yarlagadda, I. Ali, N. Al-Dhahir, and J. Hershey, "GPS GDOP Metric," IEE Proc. Radar, Sonar Navig., 147(5), Oct. 2000; A. Kelly.

  13. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be demonstrated. This paper describes a multisensor benchmark which exactly serves this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  14. Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots

    NASA Astrophysics Data System (ADS)

    WANG, Wei; WANG, Lei; YUN, Chao

    2017-03-01

    Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified to meet the minimality principle, but the base frame and the kinematic parameters are usually calibrated together in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics described with respect to the measuring coordinate frame are established based on the product-of-exponentials (POE) formula. In the first step the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed to the zero-position errors of the robot's joints. The simplified model of the robot's positioning error is established in second-order explicit expressions. The identification model is then solved by the least-squares method, requiring measured position coordinates only. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that, with the proposed two-step calibration method, the average absolute accuracy of the industrial robot is improved to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade the robot's absolute positioning accuracy.
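
    The identification step can be illustrated generically as a linear least-squares problem: position residuals measured at many robot configurations are stacked against a parameter-error Jacobian and solved for the corrections. The sketch below uses a synthetic Jacobian and synthetic residuals rather than the paper's POE-based 39-parameter model.

```python
# Hedged sketch of least-squares identification of kinematic parameter errors.
import numpy as np

rng = np.random.default_rng(1)
n_poses, n_params = 60, 10                                 # assumed, not the paper's counts
J = rng.normal(size=(3 * n_poses, n_params))               # stacked position Jacobians
delta_true = 1e-3 * rng.normal(size=n_params)              # "true" parameter errors
e = J @ delta_true + 1e-5 * rng.normal(size=3 * n_poses)   # measured position residuals

delta_hat, *_ = np.linalg.lstsq(J, e, rcond=None)          # identified corrections
print(np.max(np.abs(delta_hat - delta_true)))              # recovery error
```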

  15. Development of Conductive Polymer Analysis for the Rapid Detection and Identification of Phytopathogenic Microbes

    Treesearch

    A. Dan Wilson; D.G. Lester; C.S. Oberle

    2004-01-01

    Conductive polymer analysis, a type of electronic aroma detection technology, was evaluated for its efficacy in the detection, identification, and discrimination of plant-pathogenic microorganisms on standardized media and in diseased plant tissues. The method is based on the acquisition of a diagnostic electronic fingerprint derived from multisensor responses to...

  16. New Approaches to the Use and Integration of Multi-Sensor Remote Sensing for Historic Resource Identification and Evaluation

    DTIC Science & Technology

    2006-11-10

    Features based on shape are easy to come by: the Great Pyramids at Giza are instantly identified from space, even at very coarse spatial resolution. A figure caption notes that the Pyramids at Giza, Egypt, are recognized by their triangular faces in a 1 m resolution Ikonos image, as are nearby rectangular tombs.

  17. Improving long-term global precipitation dataset using multi-sensor surface soil moisture retrievals and the soil moisture analysis rainfall tool (SMART)

    USDA-ARS?s Scientific Manuscript database

    Using multiple historical satellite surface soil moisture products, the Kalman Filtering-based Soil Moisture Analysis Rainfall Tool (SMART) is applied to improve the accuracy of a multi-decadal global daily rainfall product that has been bias-corrected to match the monthly totals of available rain g...

  18. Multi-Sensor Triangulation of Multi-Source Spatial Data

    NASA Technical Reports Server (NTRS)

    Habib, Ayman; Kim, Chang-Jae; Bang, Ki-In

    2007-01-01

    The introduced methodologies are successful in: a) using LIDAR features for photogrammetric geo-referencing; b) delivering geo-referenced imagery of the same quality as point-based geo-referencing procedures; c) taking advantage of the synergistic characteristics of spatial data acquisition systems. The triangulation output can be used for the generation of 3-D perspective views.

  19. Human Perceptual Performance With Nonliteral Imagery: Region Recognition and Texture-Based Segmentation

    ERIC Educational Resources Information Center

    Essock, Edward A.; Sinai, Michael J.; DeFord, Kevin; Hansen, Bruce C.; Srinivasan, Narayanan

    2004-01-01

    In this study the authors address the issue of how the perceptual usefulness of nonliteral imagery should be evaluated. Perceptual performance with nonliteral imagery of natural scenes obtained at night from infrared and image-intensified sensors and from multisensor fusion methods was assessed to relate performance on 2 basic perceptual tasks to…

  20. Design and implementation of self-balancing coaxial two wheel robot based on HSIC

    NASA Astrophysics Data System (ADS)

    Hu, Tianlian; Zhang, Hua; Dai, Xin; Xia, Xianfeng; Liu, Ran; Qiu, Bo

    2007-12-01

    This paper studies the position and orientation control of a self-balancing coaxial two-wheel robot based on human simulated intelligent control (HSIC) theory. Adopting the Lagrange equation, the dynamic model of the self-balancing coaxial two-wheel robot is built, and the Sensory-motor Intelligent Schemas (SMIS) of the HSIC controller for the robot are designed by analyzing its movement and simulating the human controller. During the robot's motion, by perceiving the robot's position and orientation and using a multi-mode control strategy based on characteristic identification, the HSIC controller enables the robot to control its posture. Utilizing Matlab/Simulink, a simulation platform is established, and a motion controller is designed and realized based on the RT-Linux real-time operating system, employing a high-speed ARM9 processor (S3C2440) as the kernel of the motion controller. The effectiveness of the new design is verified by experiment.

  1. Development of a soft untethered robot using artificial muscle actuators

    NASA Astrophysics Data System (ADS)

    Cao, Jiawei; Qin, Lei; Lee, Heow Pueh; Zhu, Jian

    2017-04-01

    Soft robots have attracted much interest recently, due to their potential capability to work effectively in unstructured environments. Soft actuators are key components in soft robots. Dielectric elastomer actuators are one class of soft actuators, which can deform in response to voltage. Dielectric elastomer actuators exhibit interesting attributes including large voltage-induced deformation and high energy density. These attributes make dielectric elastomer actuators capable of functioning as artificial muscles for soft robots. It is significant to develop untethered robots, since connecting cables to external power sources greatly limits the robots' functionalities, especially autonomous movements. In this paper we develop a soft untethered robot based on dielectric elastomer actuators. This robot mainly consists of a deformable robotic body and two paper-based feet. The robotic body is essentially a dielectric elastomer actuator, which can expand or shrink as the voltage is switched on or off. In addition, the two feet can achieve adhesion or detachment based on the mechanism of electroadhesion. In general, the entire robotic system can be controlled by electricity or voltage. By optimizing the mechanical design of the robot (the size and weight of the electric circuits), we put all these components (such as batteries, voltage amplifiers, control circuits, etc.) onto the robotic feet, and the robot is capable of realizing autonomous movements. Experiments are conducted to study the robot's locomotion. The finite element method is employed to interpret the deformation of the dielectric elastomer actuators, and the simulations are qualitatively consistent with the experimental observations.

  2. Localization of Mobile Robots Using Odometry and an External Vision Sensor

    PubMed Central

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318

  3. Localization of mobile robots using odometry and an external vision sensor.

    PubMed

    Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina

    2010-01-01

    This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields.

  4. Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU

    PubMed Central

    Dou, Lihua; Su, Zhong; Liu, Ning

    2018-01-01

    A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in the application environment without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot, without external nodes or assistance, using only its own Micro-Electromechanical-Systems (MEMS) Inertial-Measurement-Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and imposes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended-Kalman-Filter (EKF) position estimation method under the constraints of its motion characteristics. With the self-developed snake robot, the test verifies the proposed method, and the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements of a snake robot for autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots. PMID:29547515
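
    One way to picture how motion-characteristic constraints enter the EKF is as a pseudo-measurement update: the body-frame lateral and vertical velocities are "measured" to be zero with some assumed noise. The state layout, rotation handling, and noise values in the sketch below are assumptions for illustration, not the paper's filter design.

```python
# Minimal EKF pseudo-measurement update enforcing a zero lateral/vertical
# body-frame velocity constraint on a position/velocity state.
import numpy as np

def constraint_update(x, P, R_bn, sigma=0.05):
    """x: [px, py, pz, vx, vy, vz] in the nav frame; R_bn: nav->body rotation."""
    H = np.zeros((2, 6))
    H[:, 3:6] = R_bn[1:3, :]          # rows giving body-frame lateral & vertical velocity
    z = np.zeros(2)                   # pseudo-measurement: both components are zero
    R = (sigma ** 2) * np.eye(2)      # assumed pseudo-measurement noise
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(6) - K @ H) @ P

x = np.array([0, 0, 0, 0.4, 0.05, -0.02], float)   # made-up state
P = np.diag([1, 1, 1, 0.1, 0.1, 0.1])
x_new, P_new = constraint_update(x, P, np.eye(3))
print(x_new)
```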

  5. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    ERIC Educational Resources Information Center

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks that require synchronized and collaborated services of multiple robot agents to robot agents, and an auditing mechanism, which ensures proper behaviors of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…

  6. [Optimization of end-tool parameters based on robot hand-eye calibration].

    PubMed

    Zhang, Lilong; Cao, Tong; Liu, Da

    2017-04-01

    A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, to simplify the operation process and reduce the preparation time. A new and practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on analysis of the error sources in this registration method. In the process with the one-time registration method, firstly a marker on the end-tool of the robot was recognized by a fixed binocular camera, and then the orientation and position of the marker were calculated based on the joint parameters of the robot. Secondly, the relationship between the camera coordinate system and the robot base coordinate system could be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable. Numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods. The parameter optimization method could significantly improve the absolute positioning accuracy of the one-time registration method. The absolute positioning accuracy of the one-time registration method can meet the requirements of clinical surgery.
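
    The core of a one-shot registration can be written as a chain of homogeneous transforms: the camera-to-robot-base transform follows from a single simultaneous observation of the marker by the camera and the robot's joint encoders. The matrices below are placeholders, the marker-mount transform is assumed known from CAD, and this is a hedged sketch rather than the paper's implementation.

```python
# Hedged sketch: T_base_cam = T_base_ee @ T_ee_marker @ inv(T_cam_marker).
import numpy as np

def inv_h(T):
    """Invert a 4x4 homogeneous transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

T_base_ee = np.eye(4);    T_base_ee[:3, 3] = [0.4, 0.0, 0.6]     # from joint encoders (placeholder)
T_ee_marker = np.eye(4);  T_ee_marker[:3, 3] = [0.0, 0.0, 0.1]   # marker mount, assumed from CAD
T_cam_marker = np.eye(4); T_cam_marker[:3, 3] = [0.1, 0.2, 1.0]  # stereo detection (placeholder)

T_base_cam = T_base_ee @ T_ee_marker @ inv_h(T_cam_marker)
print(T_base_cam)
```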

  7. Analysis on the workspace of palletizing robot based on AutoCAD

    NASA Astrophysics Data System (ADS)

    Li, Jin-quan; Zhang, Rui; Guan, Qi; Cui, Fang; Chen, Kuan

    2017-10-01

    In this paper, a four-degree-of-freedom articulated palletizing robot is taken as the object of research. Based on an analysis of the overall configuration of the robot, the kinematic mathematical model is established by the D-H method to work out the workspace of the robot. To meet the needs of design and analysis, AutoCAD secondary development technology and the AutoLisp language are used to develop an AutoCAD-based 2D and 3D workspace simulation interface program for the palletizing robot. Finally, using the AutoCAD plugin, the influence of the structural parameters on the shape and position of the workspace is analyzed as the structural parameters of the robot are changed one at a time. This study lays the foundation for the design, control and planning of palletizing robots.
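
    The workspace computation itself (independent of the AutoCAD/AutoLisp interface) can be approximated by Monte Carlo sampling of the joint space through a D-H forward kinematics model, as sketched below; the D-H table and joint limits are arbitrary illustrative values, not the studied robot's parameters.

```python
# Monte Carlo workspace approximation from a D-H forward kinematics model.
import numpy as np

def dh(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0,        sa,       ca,      d],
                     [0,         0,        0,      1]])

# Illustrative (d, a, alpha) parameters for four revolute joints.
DH = [(0.4, 0.1, np.pi / 2), (0.0, 0.6, 0.0), (0.0, 0.5, 0.0), (0.1, 0.0, 0.0)]
limits = [(-np.pi, np.pi), (-np.pi / 2, np.pi / 2), (-2.0, 2.0), (-np.pi, np.pi)]

rng = np.random.default_rng(0)
points = []
for _ in range(5000):
    T = np.eye(4)
    for (lo, hi), (d, a, al) in zip(limits, DH):
        T = T @ dh(rng.uniform(lo, hi), d, a, al)
    points.append(T[:3, 3])
points = np.array(points)                      # point cloud approximating the workspace
print(points.min(axis=0), points.max(axis=0))  # bounding box of reachable positions
```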

  8. Autonomous Mobile Platform for Research in Cooperative Robotics

    NASA Technical Reports Server (NTRS)

    Daemi, Ali; Pena, Edward; Ferguson, Paul

    1998-01-01

    This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.

  9. A Low Cost Mobile Robot Based on Proportional Integral Derivative (PID) Control System and Odometer for Education

    NASA Astrophysics Data System (ADS)

    Haq, R.; Prayitno, H.; Dzulkiflih; Sucahyo, I.; Rahmawati, E.

    2018-03-01

    In this article, the development of a low-cost mobile robot based on a Proportional Integral Derivative (PID) controller and odometry for education is presented. The PID controller and odometry are applied to control the mobile robot's position. Two-dimensional position vectors in the Cartesian coordinate system are given to the robot controller as the initial and final positions. The mobile robot is built on a differential drive with magnetic rotary encoder sensors, which measure the robot's position from the number of wheel rotations. The odometry method uses data from the actuator movements to predict the change of position over time. The mobile robot is tested on reaching the final position with three different heading angles (30°, 45° and 60°) by applying various values of the KP, KI and KD constants.
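
    The two ingredients named above, encoder-based differential-drive odometry and a PID loop, can be sketched in a few lines; the gains, wheel geometry, and the go-to-goal heading strategy below are assumptions for illustration, not the article's values.

```python
# Minimal PID controller and differential-drive odometry sketch.
import math

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev = 0.0, 0.0
    def step(self, error, dt):
        self.integral += error * dt
        deriv = (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def odometry_update(x, y, th, d_left, d_right, track_width):
    d = 0.5 * (d_left + d_right)            # distance travelled by the robot centre
    dth = (d_right - d_left) / track_width  # change in heading
    return x + d * math.cos(th + dth / 2), y + d * math.sin(th + dth / 2), th + dth

# One control step: steer the heading toward a goal point with PID.
x, y, th = 0.0, 0.0, math.radians(30)
goal = (1.0, 1.0)
heading_pid = PID(kp=2.0, ki=0.0, kd=0.1)                 # assumed gains
desired = math.atan2(goal[1] - y, goal[0] - x)
omega = heading_pid.step(desired - th, dt=0.02)           # angular velocity command
# One odometry step from hypothetical encoder increments (metres).
x, y, th = odometry_update(x, y, th, d_left=0.010, d_right=0.012, track_width=0.15)
print(f"omega: {omega:.3f} rad/s, pose: ({x:.3f}, {y:.3f}, {math.degrees(th):.1f} deg)")
```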

  10. Research on wheelchair robot control system based on EOG

    NASA Astrophysics Data System (ADS)

    Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo

    2018-04-01

    The paper describes an intelligent wheelchair control system based on EOG, which can help disabled people live more independently. The system can acquire the EOG signal from the user, detect the number of blinks and the direction of gaze, and then send commands to the wheelchair robot via RS-232 to achieve control of the wheelchair robot. The EOG-based wheelchair robot control system combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movements to control the wheelchair robot.

  11. Mobile robot knowledge base

    NASA Astrophysics Data System (ADS)

    Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory

    2005-05-01

    Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.

  12. A Recipe for Soft Fluidic Elastomer Robots

    PubMed Central

    Marchese, Andrew D.; Katzschmann, Robert K.

    2015-01-01

    This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes. PMID:27625913

  13. A Recipe for Soft Fluidic Elastomer Robots.

    PubMed

    Marchese, Andrew D; Katzschmann, Robert K; Rus, Daniela

    2015-03-01

    This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.

  14. Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)

    2014-01-01

    A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring a characteristic of the joints, and for transmitting the characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.

  15. Intelligent multi-sensor integrations

    NASA Technical Reports Server (NTRS)

    Volz, Richard A.; Jain, Ramesh; Weymouth, Terry

    1989-01-01

    Growth in the intelligence of space systems requires the use and integration of data from multiple sensors. Generic tools are being developed for extracting and integrating information obtained from multiple sources. The full spectrum is addressed for issues ranging from data acquisition, to characterization of sensor data, to adaptive systems for utilizing the data. In particular, there are three major aspects to the project, multisensor processing, an adaptive approach to object recognition, and distributed sensor system integration.

  16. Stochastic model for threat assessment in multi-sensor defense system

    NASA Astrophysics Data System (ADS)

    Wang, Yongcheng; Wang, Hongfei; Jiang, Changsheng

    2007-11-01

    This paper puts forward a stochastic model for target detection and tracking in multi-sensor defense systems and applies the Lanchester differential equations to threat assessment in combat. The two different modes of target tracking and their respective Lanchester differential equations are analyzed and established. By using these equations, once the situation analysis is accomplished, we can briefly estimate the losses of each combat side and accordingly obtain the threat estimation results.
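
    The loss estimation with Lanchester equations can be illustrated by numerically integrating the classic square-law form dx/dt = -a*y, dy/dt = -b*x; the attrition coefficients and initial strengths below are purely illustrative, not values from the paper.

```python
# Numerical integration of the Lanchester square-law equations for loss estimation.
import numpy as np

def lanchester(x0, y0, a, b, dt=0.01, t_end=10.0):
    """dx/dt = -a*y, dy/dt = -b*x; returns strength histories for both sides."""
    x, y, xs, ys = x0, y0, [x0], [y0]
    for _ in range(int(t_end / dt)):
        x, y = max(x - a * y * dt, 0.0), max(y - b * x * dt, 0.0)
        xs.append(x)
        ys.append(y)
        if x == 0.0 or y == 0.0:           # one side annihilated
            break
    return np.array(xs), np.array(ys)

blue, red = lanchester(x0=100, y0=80, a=0.05, b=0.08)   # illustrative strengths/coefficients
print(f"blue losses: {100 - blue[-1]:.1f}, red losses: {80 - red[-1]:.1f}")
```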

  17. Advances in Multi-Sensor Information Fusion: Theory and Applications 2017.

    PubMed

    Jin, Xue-Bo; Sun, Shuli; Wei, Hong; Yang, Feng-Bao

    2018-04-11

    The information fusion technique can integrate a large amount of data and knowledge representing the same real-world object and obtain a consistent, accurate, and useful representation of that object. The data may be independent or redundant, and can be obtained by different sensors at the same time or at different times. A suitable combination of investigative methods can substantially increase the profit of information in comparison with that from a single sensor. Multi-sensor information fusion has been a key issue in sensor research since the 1970s, and it has been applied in many fields. For example, manufacturing and process control industries can generate a lot of data, which have real, actionable business value. The fusion of these data can greatly improve productivity through digitization. The goal of this special issue is to report innovative ideas and solutions for multi-sensor information fusion in the emerging applications era, focusing on development, adoption, and applications.

  18. A Vision for an International Multi-Sensor Snow Observing Mission

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2015-01-01

    Discussions within the international snow remote sensing community over the past two years have led to encouraging consensus regarding the broad outlines of a dedicated snow observing mission. The primary consensus - that since no single sensor type is satisfactory across all snow types and across all confounding factors, a multi-sensor approach is required - naturally leads to questions about the exact mix of sensors, required accuracies, and so on. In short, the natural next step is to collect such multi-sensor snow observations (with detailed ground truth) to enable trade studies of various possible mission concepts. Such trade studies must assess the strengths and limitations of heritage as well as newer measurement techniques, with an eye toward natural sensitivity to desired parameters such as snow depth and/or snow water equivalent (SWE) in spite of confounding factors like clouds, lack of solar illumination, forest cover, and topography, as well as measurement accuracy, temporal and spatial coverage, technological maturity, and cost.

  19. Multi-Sensor Characterization of the Boreal Forest: Initial Findings

    NASA Technical Reports Server (NTRS)

    Reith, Ernest; Roberts, Dar A.; Prentiss, Dylan

    2001-01-01

    Results are presented from an initial a priori knowledge approach toward using complementary multi-sensor multi-temporal imagery in characterizing vegetated landscapes over a site in the Boreal Ecosystem-Atmosphere Study (BOREAS). Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Airborne Synthetic Aperture Radar (AIRSAR) data were segmented using multiple endmember spectral mixture analysis and binary decision tree approaches. Individual date/sensor land cover maps had overall accuracies between 55.0% and 69.8%. The best eight land cover layers from all dates and sensors correctly characterized 79.3% of the cover types. An overlay approach was used to create a final land cover map. An overall accuracy of 71.3% was achieved in this multi-sensor approach, a 1.5% improvement over our most accurate single scene technique, but 8% less than the original input. Black spruce was evaluated to be particularly undermapped in the final map, possibly because it was also contained within the jack pine and muskeg land cover classes.

  20. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-07-04

    This paper addresses the problem of mapping odor distribution derived from a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors and merges sensor readings taken at different positions. Initially, a multi-sensor integration method, together with the path of airflow, is used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  1. Joint FACET: the Canada-Netherlands initiative to study multisensor data fusion systems

    NASA Astrophysics Data System (ADS)

    Bosse, Eloi; Theil, Arne; Roy, Jean; Huizing, Albert G.; van Aartsen, Simon

    1998-09-01

    This paper presents the progress of a collaborative effort between Canada and The Netherlands in analyzing multi-sensor data fusion systems, e.g. for potential application to their respective frigates. In view of their overlapping interest in studying and comparing the applicability and performance of advanced state-of-the-art Multi-Sensor Data Fusion (MSDF) techniques, the two research establishments involved have decided to join their efforts in the development of MSDF testbeds. This resulted in the so-called Joint-FACET, a highly modular and flexible series of applications that is capable of processing both real and synthetic input data. Joint-FACET allows the user to create and edit test scenarios with multiple ships, sensors and targets, generate realistic sensor outputs, and process these outputs with a variety of MSDF algorithms. These MSDF algorithms can also be tested using typical experimental data collected during live military exercises.

  2. Learning gait of quadruped robot without prior knowledge of the environment

    NASA Astrophysics Data System (ADS)

    Xu, Tao; Chen, Qijun

    2012-09-01

    Walking is the basic skill of a legged robot, and one of the promising ways to improve walking performance and its adaptation to environment changes is to let the robot learn to walk by itself. Currently, most walking learning methods rely on a robot vision system or external sensing equipment to estimate the walking performance of certain walking parameters, and are therefore usually only applicable under laboratory conditions, where the environment can be pre-defined. Inspired by the rhythmic swing movement of legged animals during walking, and by the way they adjust their gait on different walking surfaces, a concept of walking rhythmic pattern (WRP) is proposed to evaluate the walking quality of a legged robot based only on the robot's walking dynamics. Based on onboard acceleration sensor data, a method to calculate the WRP using the power spectrum in the frequency domain and diverse smoothing filters is also presented. Since the evaluation of the WRP is based only on the walking dynamics of the robot's body, the proposed method does not require prior knowledge of the environment and can thus be applied in unknown environments. A gait learning approach for legged robots based on the WRP and an evolutionary algorithm (EA) is introduced. By using the proposed approach, a quadruped robot can learn its locomotion through its onboard sensing in an unknown environment about which it has no prior knowledge. The experimental results show that a proportional relationship exists between the WRP match score and the walking performance of the legged robot, which can be used to evaluate walking performance during gait optimization in unknown environments.
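
    As a rough illustration of the spectrum-based WRP idea, the sketch below computes a power spectrum of onboard acceleration data, smooths it with a moving average, and compares two patterns with a normalized correlation. The exact filters, windowing, and match score used in the paper are not specified here, so those choices are assumptions.

    ```python
    import numpy as np

    def walking_rhythmic_pattern(accel, fs, smooth_win=5):
        """Estimate a walking rhythmic pattern from body acceleration:
        power spectrum of the detrended signal, smoothed with a moving average."""
        accel = np.asarray(accel, dtype=float)
        accel = accel - accel.mean()                     # remove gravity/offset
        spectrum = np.abs(np.fft.rfft(accel)) ** 2       # power spectrum
        kernel = np.ones(smooth_win) / smooth_win
        spectrum = np.convolve(spectrum, kernel, mode="same")  # smoothing filter
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        return freqs, spectrum

    def wrp_match_score(spec_a, spec_b):
        """Normalized correlation between two spectra as a simple match score."""
        a = spec_a / np.linalg.norm(spec_a)
        b = spec_b / np.linalg.norm(spec_b)
        return float(np.dot(a, b))
    ```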

  3. Coordinated Control Of Mobile Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  4. Optimal Control Method of Robot End Position and Orientation Based on Dynamic Tracking Measurement

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    In order to improve the accuracy of robot pose positioning and control, this paper proposes a dynamic tracking measurement pose optimization control method based on the actually measured D-H parameters of the robot. The measured parameters are used for feedback compensation of the robot: according to the geometrical parameters obtained by pose tracking measurement, an improved multi-sensor information fusion extended Kalman filter method with continuous self-optimizing regression is applied, and the geometric relationships between the joint axes serve as the kinematic parameters of the model. The identified link model parameters can then be fed back to the robot in a timely manner to implement parameter correction and compensation, yielding the optimal attitude angle and realizing optimized pose control. Experiments were performed on an independently developed 6R joint robot with dynamic tracking control. The simulation results show that the control method improves robot positioning accuracy and has the advantages of versatility, simplicity, and ease of operation.

  5. Research on Robot Pose Control Technology Based on Kinematics Analysis Model

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Xu, Lijuan

    2018-01-01

    In order to improve the attitude stability of the robot, this paper proposes an attitude control method based on a kinematics analysis model, addressing the motion planning problems of walking posture transformation, grasping, and motion control. In a Cartesian-space analytical model, a three-axis accelerometer, a magnetometer, and a three-axis gyroscope are combined for attitude measurement; the gyroscope data are processed with a Kalman filter, and the quaternion (four-element) method is used to obtain the robot attitude angles. Stable inertia parameters are obtained from the centroids of the robot's moving parts, and a random-sampling RRT motion planning method is used to accurately drive the space robot to any commanded position, ensuring that the end effector follows a prescribed trajectory under attitude control. Accurate positioning experiments were carried out using the MT-R robot as the research object. The simulation results show that the proposed method has better robustness and higher positioning accuracy, improving the reliability and safety of robot operation.
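
    The paper fuses accelerometer, magnetometer, and gyroscope data with a Kalman filter and a quaternion attitude representation. As a much simpler stand-in, the sketch below estimates roll and pitch with a standard complementary filter (gravity direction from the accelerometer, angular-rate integration from the gyroscope); the blending factor and the omission of the magnetometer/yaw channel are assumptions made to keep the example short.

    ```python
    import numpy as np

    def accel_to_roll_pitch(ax, ay, az):
        """Roll/pitch (rad) from a quasi-static accelerometer reading (gravity direction)."""
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
        return roll, pitch

    def complementary_update(roll, pitch, gyro, accel, dt, alpha=0.98):
        """Blend gyro integration (short-term) with accelerometer tilt (long-term)."""
        gx, gy, _ = gyro                                  # angular rates, rad/s
        roll_acc, pitch_acc = accel_to_roll_pitch(*accel)
        roll = alpha * (roll + gx * dt) + (1 - alpha) * roll_acc
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * pitch_acc
        return roll, pitch

    # One hypothetical update at 100 Hz
    roll, pitch = complementary_update(0.0, 0.0,
                                       gyro=(0.01, -0.02, 0.0),
                                       accel=(0.1, 0.0, 9.8), dt=0.01)
    ```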

  6. Full autonomous microline trace robot

    NASA Astrophysics Data System (ADS)

    Yi, Deer; Lu, Si; Yan, Yingbai; Jin, Guofan

    2000-10-01

    Optoelectric inspection may find applications in robotic systems. In a micro robotic system, a smaller optoelectric inspection system is preferred. However, as the size of the robot is miniaturized, the number of optoelectric detectors becomes limited, and this lack of information makes it difficult for the micro robot to determine its status. In our lab, a micro line trace robot has been designed, which acts autonomously based on its optoelectric detection. It has been programmed to follow a black line printed on white-colored ground. Besides the optoelectric inspection, the logical algorithm in the microprocessor is also important. In this paper, we propose a simple logical algorithm to realize the robot's intelligence. The robot's intelligence is based on an AT89C2051 microcontroller which controls its movement. The technical details of the micro robot are as follows: dimensions: 30 mm × 25 mm × 35 mm; velocity: 60 mm/s.

  7. Control of free-flying space robot manipulator systems

    NASA Technical Reports Server (NTRS)

    Cannon, Robert H., Jr.

    1989-01-01

    Control techniques for self-contained, autonomous free-flying space robots are being tested and developed. Free-flying space robots are envisioned as a key element of any successful long term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require astronaut extra-vehicular activity (EVA). Use of robots will provide economic savings as well as improved astronaut safety by reducing, and in many cases eliminating, the need for human EVA. The focus of the work is to develop and carry out a set of research projects using laboratory models of satellite robots. These devices use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. Current work is divided into six major projects or research areas. Fixed-base cooperative manipulation work represents our initial entry into multiple arm cooperation and high-level control with a sophisticated user interface. The floating-base cooperative manipulation project strives to transfer some of the technologies developed in the fixed-base work onto a floating base. The global control and navigation experiment seeks to demonstrate simultaneous control of the robot manipulators and the robot base position so that tasks can be accomplished while the base is undergoing a controlled motion. The multiple-vehicle cooperation project's goal is to demonstrate multiple free-floating robots working in teams to carry out tasks too difficult or complex for a single robot to perform. The Location Enhancement Arm Push-off (LEAP) activity's goal is to provide a viable alternative to expendable gas thrusters for vehicle propulsion wherein the robot uses its manipulators to throw itself from place to place. Because the successful execution of the LEAP technique requires an accurate model of the robot and payload mass properties, it was deemed an attractive testbed for adaptive control technology.

  8. A Strapdown Inertial Navigation System/Beidou/Doppler Velocity Log Integrated Navigation Algorithm Based on a Cubature Kalman Filter

    PubMed Central

    Gao, Wei; Zhang, Ya; Wang, Jianguo

    2014-01-01

    The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
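
    The core of the CKF time update can be written compactly with the spherical-radial cubature rule: 2n equally weighted points located at ±√n times the columns of the Cholesky factor of the covariance. The sketch below shows only this prediction step for a generic state-transition function f; the measurement update and the specific SINS/BD/DVL models are omitted, and the interfaces are assumptions.

    ```python
    import numpy as np

    def ckf_predict(x, P, f, Q):
        """One CKF time-update step with the spherical-radial cubature rule:
        2n points at +/- sqrt(n) * columns of chol(P), equal weights 1/(2n)."""
        n = x.size
        S = np.linalg.cholesky(P)
        xi = np.sqrt(n) * np.hstack((np.eye(n), -np.eye(n)))   # unit cubature points
        pts = x[:, None] + S @ xi                               # shape (n, 2n)
        prop = np.column_stack([f(pts[:, i]) for i in range(2 * n)])
        x_pred = prop.mean(axis=1)
        dev = prop - x_pred[:, None]
        P_pred = dev @ dev.T / (2 * n) + Q
        return x_pred, P_pred

    # Hypothetical 2-state constant-velocity example
    f = lambda s: np.array([s[0] + 0.1 * s[1], s[1]])
    x_pred, P_pred = ckf_predict(np.array([0.0, 1.0]), np.eye(2), f, 0.01 * np.eye(2))
    ```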

  9. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  10. A multi-sensor remote sensing approach for measuring primary production from space

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine

    1989-01-01

    It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.

  11. Deep learning decision fusion for the classification of urban remote sensing data

    NASA Astrophysics Data System (ADS)

    Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter

    2018-01-01

    Multisensor data fusion is one of the most common and popular remote sensing data classification topics, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing research community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest by the joint use of sensors. Finally, a context-aware object-based postprocessing step is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.

  12. Game Design to Measure Reflexes and Attention Based on Biofeedback Multi-Sensor Interaction

    PubMed Central

    Ortiz-Vigon Uriarte, Inigo de Loyola; Garcia-Zapirain, Begonya; Garcia-Chimeno, Yolanda

    2015-01-01

    This paper presents a multi-sensor system for implementing biofeedback as a human-computer interaction technique in a game involving driving cars in risky situations. The sensors used are: Eye Tracker, Kinect, pulsometer, respirometer, electromyography (EMG) and galvanic skin resistance (GSR). An algorithm has been designed which gives rise to an interaction logic with the game according to the set of physiological constants obtained from the sensors. The results reflect a score of 72.333 on the System Usability Scale (SUS), a significant difference of p = 0.026 in GSR values in terms of the difference between the start and end of the game, and an r = 0.659 and p = 0.008 correlation while playing with the Kinect between the breathing level and the energy and joy factor. All the sensors used had an impact on the end results, whereby none of them should be disregarded in future lines of research, even though it would be interesting to obtain breathing values separately from the cardiac ones. PMID:25789493

  13. Merging climate and multi-sensor time-series data in real-time drought monitoring across the U.S.A.

    USGS Publications Warehouse

    Brown, Jesslyn F.; Miura, T.; Wardlow, B.; Gu, Yingxin

    2011-01-01

    Droughts occur repeatedly in the United States, resulting in billions of dollars of damage. Monitoring and reporting on drought conditions is a necessary function of government agencies at multiple levels. A team of Federal and university partners developed a drought decision-support tool with higher spatial resolution relative to traditional climate-based drought maps. The Vegetation Drought Response Index (VegDRI) indicates general vegetation canopy condition through the assimilation of climate, satellite, and biophysical data via geospatial modeling. In VegDRI, complementary drought-related data are merged to provide a comprehensive, detailed representation of drought stress on vegetation. Time-series data from daily polar-orbiting earth observing systems [Advanced Very High Resolution Radiometer (AVHRR) and Moderate Resolution Imaging Spectroradiometer (MODIS)] providing global measurements of land surface conditions are ingested into VegDRI. Inter-sensor compatibility is required to extend multi-sensor data records; thus, translations were developed using overlapping observations to create consistent, long-term time series.

  14. Simulation of olive grove gross primary production by the combination of ground and multi-sensor satellite data

    NASA Astrophysics Data System (ADS)

    Brilli, L.; Chiesi, M.; Maselli, F.; Moriondo, M.; Gioli, B.; Toscano, P.; Zaldei, A.; Bindi, M.

    2013-08-01

    We developed and tested a methodology to estimate olive (Olea europaea L.) gross primary production (GPP) combining ground and multi-sensor satellite data. An eddy-covariance station placed in an olive grove in central Italy provided carbon and water fluxes over two years (2010-2011), which were used as reference to evaluate the performance of a GPP estimation methodology based on a Monteith type model (modified C-Fix) and driven by meteorological and satellite (NDVI) data. A major issue was related to the consideration of the two main olive grove components, i.e. olive trees and inter-tree ground vegetation: this issue was addressed by the separate simulation of carbon fluxes within the two ecosystem layers, followed by their recombination. In this way the eddy covariance GPP measurements were successfully reproduced, with the exception of two periods that followed tillage operations. For these periods measured GPP could be approximated by considering synthetic NDVI values which simulated the expected response of inter-tree ground vegetation to tillages.

  15. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a Human-Robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue which must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm based on random sampling has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, good expansion properties, and faster exploration than other planning methods. However, the existing RRT algorithm has limitations for path planning of a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method uses goal-directed node extension and can dramatically increase the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, continuously curved executable path for the robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation and a Robot Operating System (ROS) dynamic simulation environment, as well as a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only provides great practical engineering significance for a robotic manipulator's obstacle avoidance in an intelligent factory, but also theoretical reference value for the path planning of other types of robots.
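
    For reference, a bare-bones 2-D RRT looks like the sketch below: sample a point, extend the nearest tree node a fixed step toward it, keep the new node if it is collision-free, and stop when the goal is reached. The goal bias, step size, and collision checker are placeholder assumptions; the paper's S-RRT additionally uses goal-directed node extension and curvature-constrained path smoothing, which are not shown.

    ```python
    import math
    import random

    def rrt(start, goal, is_free, bounds, step=0.5, max_iters=5000, goal_tol=0.5):
        """Minimal 2-D RRT: grow a tree from start by extending toward random samples."""
        nodes = [start]
        parent = {0: None}
        (xmin, xmax), (ymin, ymax) = bounds
        for _ in range(max_iters):
            # Random sample with a small goal bias
            q = goal if random.random() < 0.05 else (random.uniform(xmin, xmax),
                                                     random.uniform(ymin, ymax))
            # Nearest existing node
            i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
            near = nodes[i]
            d = math.dist(near, q)
            if d == 0.0:
                continue
            new = (near[0] + step * (q[0] - near[0]) / d,
                   near[1] + step * (q[1] - near[1]) / d)
            if not is_free(new):
                continue
            nodes.append(new)
            parent[len(nodes) - 1] = i
            if math.dist(new, goal) < goal_tol:
                # Walk back up the tree to recover the path
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k])
                    k = parent[k]
                return path[::-1]
        return None

    # Hypothetical usage: obstacle-free 10 x 10 workspace
    path = rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda p: True,
               bounds=((0, 10), (0, 10)))
    ```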

  16. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve for two purposes. First, it serves as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have needed hand functionality for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.

  17. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

    Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Dynamics and control of robot for capturing objects in space

    NASA Astrophysics Data System (ADS)

    Huang, Panfeng

    Space robots are expected to perform intricate tasks in future space services, such as satellite maintenance, refueling, and replacing the orbital replacement unit (ORU). To realize these missions, the capturing operation cannot be avoided. Such operations encounter challenges because space robots have unique characteristics not found on ground-based robots, such as dynamic singularities, dynamic coupling between the manipulator and the space base, limited energy supply, and working without a fixed base. In addition, contacts and impacts may not be avoided during the capturing operation. Therefore, the dynamics and control problems of space robots capturing objects are significant research topics if the robots are to be deployed for space services. A typical servicing operation mainly includes three phases: capturing the object, berthing and docking the object, then repairing the target. This thesis therefore focuses on resolving some challenging problems during capturing of the object, berthing and docking, and so on. In this thesis, I study and analyze the dynamics and control problems of space robots for capturing objects. This work has potential impact in space robotic applications. I first study the contact and impact dynamics of the space robot and objects, focusing specifically on analyzing the impact dynamics and mapping the relationship between influence and speed. Then, I develop the fundamental theory for planning the minimum-collision trajectory of the space robot and designing the configuration of the space robot at the moment of capture. To compensate for the attitude of the space base during the capturing approach operation, a new balance control concept is developed which can effectively balance the attitude of the space base using the dynamic couplings. The developed balance control concept helps in understanding the nature of space dynamic coupling, and can be readily applied to compensate for or minimize the disturbance to the space base. After capturing the object, the space robot must complete two tasks: one is to berth the object, and the other is to re-orientate the attitude of the whole robot system for communication and power supply. I therefore propose a method to accomplish these two tasks simultaneously using manipulator motion only. The ultimate goal of space services is to realize capture and manipulation autonomously. I therefore propose an effective approach based on learning human skills to track and capture objects automatically in space. With human-teaching demonstration, the space robot is able to learn and abstract the human tracking and capturing skill using an efficient neural-network learning architecture that combines flexible Cascade Neural Networks with Node Decoupled Extended Kalman Filtering (CNN-NDEKF). The simulation results attest that this approach is useful and feasible for tracking trajectory planning and capturing by the space robot. Finally, I propose a novel approach based on Genetic Algorithms (GAs) to optimize the approach trajectory of space robots in order to realize effective and stable operations. I complete the minimum-torque path planning in order to save the limited energy available in space, and design the minimum-jerk trajectory for the stabilization of the space manipulator and its space base. These optimal algorithms are very important and useful for space robot applications.

  19. Space-time modeling using environmental constraints in a mobile robot system

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1990-01-01

    Grid-based models of a robot's local environment have been used by many researchers building mobile robot control systems. The attraction of grid-based models is their clear parallel between the internal model and the external world. However, the discrete nature of such representations does not match well with the continuous nature of actions and usually serves to limit the abilities of the robot. This work describes a spatial modeling system that extracts information from a grid-based representation to form a symbolic representation of the robot's local environment. The approach makes a separation between the representation provided by the sensing system and the representation used by the action system. Separation allows asynchronous operation between sensing and action in a mobile robot, as well as the generation of a more continuous representation upon which to base actions.

  20. Open Issues in Evolutionary Robotics.

    PubMed

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.

  1. Detection of Iberian ham aroma by a semiconductor multisensorial system.

    PubMed

    Otero, Laura; Horrillo, M A Carmen; García, María; Sayago, Isabel; Aleixandre, Manuel; Fernández, M A Jesús; Arés, Luis; Gutiérrez, Javier

    2003-11-01

    A semiconductor multisensorial system, based on tin oxide, to control the quality of dry-cured Iberian hams is described. Two types of ham (submitted to different drying temperatures) were selected. Good responses were obtained from the 12 elements forming the multisensor for different operating temperatures. Discrimination between the two types of ham was successfully realised through principal component analysis (PCA).

  2. Multi-Sensor Data Fusion Project

    DTIC Science & Technology

    2000-02-28

    seismic network by detecting T phases generated by underground events (generally earthquakes) and associating these phases to seismic events. The...between underwater explosions (H), underground sources, mostly earthquake-generated (7), and noise detections (N). The phases classified as H are the only...processing for infrasound sensors is most similar to seismic array processing with the exception that the detections are based on a more sophisticated

  3. Multi-sensor Array for High Altitude Balloon Missions to the Stratosphere

    NASA Astrophysics Data System (ADS)

    Davis, Tim; McClurg, Bryce; Sohl, John

    2008-10-01

    We have designed and built a microprocessor-controlled and expandable multi-sensor array for data collection on near space missions. Weber State University has started a high altitude research balloon program called HARBOR. This array has been designed to log a base set of measurements for every flight and has room for six guest instruments. The base measurements are absolute pressure, on-board temperature, a 3-axis accelerometer for attitude measurement, and a 2-axis compensated magnetic compass. The system also contains a real time clock and circuitry for logging data directly to a USB memory stick. In typical operation the measurements will be cycled through in sequence and saved to the memory stick along with the clock's time stamp. The microprocessor can be reprogrammed to adapt to guest experiments with either analog or digital interfacing. This system will fly with every mission and will provide backup data collection for other instrumentation for which the primary task is measuring atmospheric pressure and temperature. The attitude data will be used to determine the orientation of the onboard camera systems to aid in identifying features in the images. This will make these images easier to use for any future GIS (geographic information system) remote sensing missions.
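
    In spirit, the logging cycle described above can be sketched as follows: poll each sensor in sequence and append a time-stamped row per cycle. The read-out functions, file format, and sampling period here are purely hypothetical stand-ins for the real onboard microprocessor firmware and USB memory-stick interface.

    ```python
    import csv
    import random
    import time

    # Hypothetical read-out functions; on the real payload these would query the
    # pressure, temperature, accelerometer, and compass hardware.
    SENSORS = {
        "pressure_hPa": lambda: 1013.25 + random.uniform(-5, 5),
        "temp_C":       lambda: 21.0 + random.uniform(-1, 1),
        "accel_z_g":    lambda: 1.0 + random.uniform(-0.05, 0.05),
        "heading_deg":  lambda: random.uniform(0, 360),
    }

    def log_cycle(path="flight_log.csv", period_s=1.0, cycles=10):
        """Cycle through the sensors in sequence and append time-stamped rows."""
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            for _ in range(cycles):
                row = [time.strftime("%Y-%m-%dT%H:%M:%S")]
                for name, read in SENSORS.items():
                    row.append(read())
                writer.writerow(row)
                time.sleep(period_s)

    log_cycle(cycles=3)
    ```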

  4. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and mapping of land cover/land use of forest areas, such as the Central African rainforest, are a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by one single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. Similarly to previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
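
    A minimal wavelet-based fusion rule, in the spirit of the approach described (though not the authors' exact scheme), is sketched below using the PyWavelets package: average the approximation coefficients of the two co-registered images and keep the larger-magnitude detail coefficients, so that texture from one sensor and homogeneous regions from the other are both preserved. The wavelet family and decomposition level are assumptions.

    ```python
    import numpy as np
    import pywt  # assumption: PyWavelets is available

    def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
        """Fuse two co-registered single-band images in the wavelet domain:
        average the approximation band, keep the detail coefficient with the
        larger magnitude at each position."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in ((ha, hb), (va, vb), (da, db))))
        return pywt.waverec2(fused, wavelet)

    # Hypothetical usage with two random 128 x 128 "images"
    rng = np.random.default_rng(0)
    fused = wavelet_fuse(rng.random((128, 128)), rng.random((128, 128)))
    ```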

  5. Solid-contact potentiometric sensors and multisensors based on polyaniline and thiacalixarene receptors for the analysis of some beverages and alcoholic drinks

    NASA Astrophysics Data System (ADS)

    Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady

    2018-04-01

    Electronic tongue is a sensor array that aims to discriminate and analyze complex media like food and beverages on the basis of chemometric approaches for data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background of solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied for the assessment of fruit juices, green tea, beer and alcoholic drinks. They were classified in accordance with their origin, brands and styles. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by variation of the ionophores, Fe(III) concentration and sample dilution. The results obtained were compared with other electronic tongue systems reported for the same subjects.

  6. Proposed evaluation framework for assessing operator performance with multisensor displays

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows a determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or 4) super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.

  7. Solid-Contact Potentiometric Sensors and Multisensors Based on Polyaniline and Thiacalixarene Receptors for the Analysis of Some Beverages and Alcoholic Drinks.

    PubMed

    Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady

    2018-01-01

    Electronic tongue is a sensor array that aims to discriminate and analyze complex media like food and beverages on the basis of chemometric approaches for data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background of solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied for the assessment of fruit juices, green tea, beer, and alcoholic drinks. They were classified in accordance with their origin, brands and styles. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by variation of the ionophores, Fe(III) concentration, and sample dilution. The results obtained were compared with other electronic tongue systems reported for the same subjects.

  8. IPS - a vision aided navigation system

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Baumbach, Dirk; Buder, Maximilian; Choinowski, Andre; Ernst, Ines; Funk, Eugen; Grießbach, Denis; Schischmanow, Adrian; Wohlfeil, Jürgen; Zuev, Sergey

    2017-04-01

    Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system based on the human head and its navigation sensors - the eyes and the vestibular system. This system is called the integrated positioning system (IPS) and contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS is able to operate in real time and can be applied in indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues in the development of such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The development team started from scratch and is currently transferring this technology into a commercial product. The paper finishes with an outlook.

  9. Solid-Contact Potentiometric Sensors and Multisensors Based on Polyaniline and Thiacalixarene Receptors for the Analysis of Some Beverages and Alcoholic Drinks

    PubMed Central

    Sorvin, Michail; Belyakova, Svetlana; Stoikov, Ivan; Shamagsumova, Rezeda; Evtugyn, Gennady

    2018-01-01

    Electronic tongue is a sensor array that aims to discriminate and analyze complex media like food and beverages on the basis of chemometric approaches for data mining and pattern recognition. In this review, the concept of an electronic tongue comprising solid-contact potentiometric sensors with polyaniline and thiacalix[4]arene derivatives is described. The electrochemical reactions of polyaniline as a background of solid-contact sensors and the characteristics of thiacalixarenes and pillararenes as neutral ionophores are briefly considered. The electronic tongue systems described were successfully applied for the assessment of fruit juices, green tea, beer, and alcoholic drinks. They were classified in accordance with their origin, brands and styles. Variation of the sensor response resulted from the reactions between added Fe(III) ions and sample components, i.e., antioxidants and complexing agents. The use of principal component analysis and discriminant analysis is shown for multisensor signal treatment and visualization. The discrimination conditions can be optimized by variation of the ionophores, Fe(III) concentration, and sample dilution. The results obtained were compared with other electronic tongue systems reported for the same subjects. PMID:29740577

  10. Embodied cognition for autonomous interactive robots.

    PubMed

    Hoffman, Guy

    2012-10-01

    In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.

  11. MR 201104: Evaluation of Discrimination Technologies and Classification Results and MR 201157: Demonstration of MetalMapper Static Data Acquisition and Data Analysis

    DTIC Science & Technology

    2016-09-23

    Acquisition and Data Analysis). EMI sensors, MetalMapper, man-portable Time-domain Electromagnetic Multi-sensor Towed Array Detection System (TEMTADS...California Department of Toxic Substances Control EM61 EM61-MK2 EMI electromagnetic induction ESTCP Environmental Security Technology Certification...SOP Standard Operating Procedure v TEMTADS Time-domain Electromagnetic Multi-sensor Towed Array Detection System man-portable 2x2 TOI target(s

  12. Design of a multisensor data fusion system for target detection

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C.; Okello, Nickens N.; Kadar, Ivan; Lovas, Louis A.

    1993-09-01

    The objective of this paper is to discuss the issues that are involved in the design of a multisensor fusion system and provide a systematic analysis and synthesis methodology for the design of the fusion system. The system under consideration consists of multifrequency (similar) radar sensors. However, the fusion design must be flexible to accommodate additional dissimilar sensors such as IR, EO, ESM, and Ladar. The motivation for the system design is the proof of the fusion concept for enhancing the detectability of small targets in clutter. In the context of down-selecting the proper configuration for multisensor (similar and dissimilar, and centralized vs. distributed) data fusion, the issues of data modeling, fusion approaches, and fusion architectures need to be addressed for the particular application being considered. Although the study of different approaches may proceed in parallel, the interplay among them is crucial in selecting a fusion configuration for a given application. The natural sequence for addressing the three different issues is to begin from the data modeling, in order to determine the information content of the data. This information will dictate the appropriate fusion approach. This, in turn, will lead to a global fusion architecture. Both distributed and centralized fusion architectures are used to illustrate the design issues along with Monte-Carlo simulation performance comparison of a single sensor versus a multisensor centrally fused system.

  13. Determination of urine ionic composition with potentiometric multisensor system.

    PubMed

    Yaroshenko, Irina; Kirsanov, Dmitry; Kartsova, Lyudmila; Sidorova, Alla; Borisova, Irina; Legin, Andrey

    2015-01-01

    The ionic composition of urine is a good indicator of a patient's general condition and allows for diagnostics of certain medical problems such as urolithiasis. Due to environmental factors and malnutrition, the number of registered urinary tract cases continuously increases. Most of the methods currently used for urine analysis are expensive, quite laborious and require skilled personnel. The present work deals with a feasibility study of a potentiometric multisensor system of 18 ion-selective and cross-sensitive sensors as an analytical tool for determination of urine ionic composition. In total, 136 samples from patients of the Urolithiasis Laboratory and from healthy people were analyzed by the multisensor system as well as by capillary electrophoresis as a reference method. Various chemometric approaches were implemented to relate the data from electrochemical measurements with the reference data. Logistic regression (LR) was applied for classification of samples into healthy and unhealthy, producing reasonable misclassification rates. Projection on Latent Structures (PLS) regression was applied for quantitative analysis of ionic composition from potentiometric data. Mean relative errors of simultaneous prediction of sodium, potassium, ammonium, calcium, magnesium, chloride, sulfate, phosphate, urate and creatinine from the multisensor system response were in the range 3-13% for independent test sets. This shows good promise for the development of a fast and inexpensive alternative method for urine analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
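
    The PLS step can be sketched with scikit-learn as below. The arrays here are random placeholders standing in for the 18-sensor potentiometric responses and the reference ion concentrations, and the number of latent variables is an assumed tuning choice; only the overall workflow (fit PLS on a training split, predict on a test split, compute relative errors) mirrors the description above.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Placeholder data: rows = urine samples, columns = potentials of 18 sensors;
    # targets stand in for reference ion concentrations (e.g. Na+, K+, Ca2+, Cl-).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(136, 18))
    Y = rng.normal(loc=5.0, scale=1.0, size=(136, 4))

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=6)      # latent variables chosen e.g. by CV
    pls.fit(X_tr, Y_tr)
    Y_hat = pls.predict(X_te)

    # Mean relative prediction error per target ion
    rel_err = np.mean(np.abs(Y_hat - Y_te) / np.abs(Y_te), axis=0)
    print("mean relative error per ion:", rel_err)
    ```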

  14. Embry-Riddle Aeronautical University multispectral sensor and data fusion laboratory: a model for distributed research and education

    NASA Astrophysics Data System (ADS)

    McMullen, Sonya A. H.; Henderson, Troy; Ison, David

    2017-05-01

    The miniaturization of unmanned systems and spacecraft, as well as computing and sensor technologies, has opened new opportunities in the areas of remote sensing and multi-sensor data fusion for a variety of applications. Remote sensing and data fusion historically have been the purview of large government organizations, such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and National Geospatial-Intelligence Agency (NGA) due to the high cost and complexity of developing, fielding, and operating such systems. However, miniaturized computers with high capacity processing capabilities, small and affordable sensors, and emerging, commercially available platforms such as UAS and CubeSats to carry such sensors, have allowed for a vast range of novel applications. In order to leverage these developments, Embry-Riddle Aeronautical University (ERAU) has developed an advanced sensor and data fusion laboratory to research component capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. This lab is unique in several ways; for example, it provides a traditional campus laboratory for students and faculty to model and test sensors in a range of scenarios, process multi-sensor data sets (both simulated and experimental), and analyze results. Moreover, this allows for "virtual" modeling, testing, and teaching capability reaching beyond the physical confines of the facility for use among ERAU Worldwide students and faculty located around the globe. Although other institutions such as Georgia Institute of Technology, Lockheed Martin, University of Dayton, and University of Central Florida have optical sensor laboratories, the ERAU virtual concept is the first such lab to expand to multispectral sensors and data fusion, while focusing on the data collection and data products and not on the manufacturing aspect. Further, the initiative is a unique effort among Embry-Riddle faculty to develop multi-disciplinary, cross-campus research to facilitate faculty- and student-driven research. Specifically, the ERAU Worldwide Campus, with locations across the globe and delivering curricula online, will be leveraged to provide novel approaches to remote sensor experimentation and simulation. The purpose of this paper and presentation is to present this new laboratory, research, education, and collaboration process.

  15. Design-Oriented Enhanced Robotics Curriculum

    ERIC Educational Resources Information Center

    Yilmaz, M.; Ozcelik, S.; Yilmazer, N.; Nekovei, R.

    2013-01-01

    This paper presents an innovative two-course, laboratory-based, and design-oriented robotics educational model. The robotics curriculum exposed senior-level undergraduate students to major robotics concepts, and enhanced the student learning experience in hybrid learning environments by incorporating the IEEE Region-5 annual robotics competition…

  16. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    PubMed Central

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-01-01

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments. PMID:28216555
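
    The underlying monocular pose problem (recover the part's 6-DOF pose from a handful of known 3-D feature points and their image projections) can be sketched with OpenCV's PnP solver, as below. This is a generic stand-in, not the authors' multi-pose collinearity-error optimization; the feature coordinates, camera intrinsics, and zero-distortion assumption are all hypothetical.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV is available; the paper uses its own optimization

    # 3-D coordinates of featured points on the part, in the part frame (metres);
    # the values are hypothetical.
    object_pts = np.array([[0.00, 0.00, 0.0],
                           [0.50, 0.00, 0.0],
                           [0.50, 0.30, 0.0],
                           [0.00, 0.30, 0.0],
                           [0.25, 0.15, 0.1]], dtype=np.float64)
    # Corresponding pixel coordinates measured by the end-flange camera.
    image_pts = np.array([[320.0, 240.0], [480.0, 238.0], [482.0, 330.0],
                          [318.0, 332.0], [400.0, 280.0]], dtype=np.float64)
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])   # intrinsic matrix from calibration
    dist = np.zeros(5)                      # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)              # rotation part of the 6-DOF pose
    print("part position in camera frame:", tvec.ravel())
    ```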

  17. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems.

    PubMed

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-02-14

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments.

  18. Summary of astronaut inputs on automation and robotics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1990-01-01

    Astronauts and payload specialists present specific recommendations in the form of an overview that relate to the use of automation and robotics on the Space Station Freedom. The inputs are based on on-orbit operations experience, time requirements for crews, and similar crew-specific knowledge that address the impacts of automation and robotics on productivity. Interview techniques and specific questionnaire results are listed, and the majority of the responses indicate that incorporating automation and robotics to some extent and with human backup can improve productivity. Specific support is found for the use of advanced automation and EVA robotics on the Space Station Freedom and for the use of advanced automation on ground-based stations. Ground-based control of in-flight robotics is required, and Space Station activities and crew tasks should be analyzed to assess the systems engineering approach for incorporating automation and robotics.

  19. State Estimation for Tensegrity Robots

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.
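    The paper's estimator is an unscented Kalman filter over the full tensegrity state; that filter is not reproduced here. Purely as an illustration of the underlying fusion idea, below is a minimal linear Kalman-filter stand-in that fuses a constant-velocity motion model with noisy position fixes such as might be derived from ultra wideband ranging; all models, noise values and variable names are assumptions.

      # Hedged sketch: constant-velocity Kalman filter fusing noisy position
      # fixes -- a linear stand-in for the UKF-based estimator described in
      # the abstract; all values are illustrative.
      import numpy as np

      dt = 0.05
      F = np.array([[1, 0, dt, 0],
                    [0, 1, 0, dt],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]])            # state: [x, y, vx, vy]
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]])            # we only observe position
      Q = 0.01 * np.eye(4)                    # assumed process noise
      R = 0.05 * np.eye(2)                    # assumed measurement noise

      x, P = np.zeros(4), np.eye(4)

      def step(x, P, z):
          # Predict with the motion model, then correct with the position fix.
          x = F @ x
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(4) - K @ H) @ P
          return x, P

      for z in np.random.normal([1.0, 2.0], 0.2, size=(50, 2)):   # fake UWB-derived fixes
          x, P = step(x, P, z)
      print("estimated position:", x[:2])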

  20. A switching formation strategy for obstacle avoidance of a multi-robot system based on robot priority model.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2015-05-01

    This paper describes a switching formation strategy for multiple robots with velocity constraints to avoid and cross obstacles. In the strategy, a leader robot plans a safe path using the geometric obstacle avoidance control method (GOACM). By calculating new desired distances and bearing angles with respect to the leader robot, the follower robots switch into a safe formation. To ensure collision avoidance, a novel robot priority model, based on the desired distance and bearing angle between the leader and follower robots, is designed for the obstacle avoidance process. The adaptive tracking control algorithm guarantees that the trajectory and velocity tracking errors converge to zero. To demonstrate the validity of the proposed methods, simulation and experimental results show that the robots effectively form and switch formation while avoiding obstacles without collisions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
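    The abstract does not give the priority model itself. Purely as an illustration of the idea, here is one plausible (hypothetical) formulation in which a follower's priority grows with its desired distance and bearing deviation from the leader; the weights and the formula are assumptions, not the paper's model.

      # Hedged sketch: a hypothetical priority score for follower robots based on
      # desired distance and bearing angle to the leader, used to order obstacle
      # avoidance. Weights are made up.
      import math

      def priority(desired_distance, bearing_angle_rad, w_d=1.0, w_b=0.5):
          """Larger score -> robot acts earlier when the formation switches."""
          return w_d * desired_distance + w_b * abs(bearing_angle_rad)

      followers = {"F1": (1.0, math.radians(30)),
                   "F2": (1.5, math.radians(-10)),
                   "F3": (0.8, math.radians(60))}
      order = sorted(followers, key=lambda r: priority(*followers[r]), reverse=True)
      print("avoidance order:", order)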

  1. Direct Aerosol Radiative Forcing Based on Combined A-Train Observations: Towards All-sky Estimates and Attribution to Aerosol Type

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; Burton, S.; Livingston, J.; hide

    2014-01-01

    We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) measurements for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). We discuss some of the challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed. We also discuss a methodology for using the multi-sensor aerosol retrievals for aerosol type classification based on advanced clustering techniques. The combination of research results permits conclusions regarding the attribution of aerosol radiative forcing to aerosol type.

  2. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
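    As an illustration of the coefficient-selection idea only, below is a single-level wavelet fusion sketch with PyWavelets that combines a multispectral band and a higher-resolution panchromatic image using a point-wise maximum-absolute rule on detail coefficients. It uses the ordinary DWT for brevity, whereas the paper argues for a shift-invariant transform (SIDWT) to reduce artifacts, and it omits the HSV color-space step; the inputs are synthetic and assumed co-registered and resampled to the same grid.

      # Hedged sketch: single-level DWT fusion with a max-absolute selection rule
      # on detail coefficients; a simplified stand-in, not the SIDWT pipeline.
      import numpy as np
      import pywt

      ms_band = np.random.rand(256, 256)     # stand-in multispectral band (upsampled)
      pan = np.random.rand(256, 256)         # stand-in panchromatic image

      cA_ms, (cH_ms, cV_ms, cD_ms) = pywt.dwt2(ms_band, "db2")
      cA_p,  (cH_p,  cV_p,  cD_p)  = pywt.dwt2(pan, "db2")

      def pick_max(a, b):
          # Keep the coefficient with the larger magnitude at each location.
          return np.where(np.abs(a) >= np.abs(b), a, b)

      fused = pywt.idwt2(
          (cA_ms, (pick_max(cH_ms, cH_p), pick_max(cV_ms, cV_p), pick_max(cD_ms, cD_p))),
          "db2")
      print(fused.shape)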

  3. A real-time automated quality control of rain gauge data based on multiple sensors

    NASA Astrophysics Data System (ADS)

    qi, Y.; Zhang, J.

    2013-12-01

    Precipitation is one of the most important meteorological and hydrological variables. Automated rain gauge networks provide direct measurements of precipitation and have been used for numerous applications such as generating regional and national precipitation maps, calibrating remote sensing data, and validating hydrological and meteorological model predictions. Automated gauge observations are prone to a variety of error sources (instrument malfunction, transmission errors, format changes) and require careful quality control (QC). Many previous gauge QC techniques were based on neighborhood checks within the gauge network itself, and their effectiveness depends on gauge density and precipitation regime. The current study takes advantage of the multi-sensor data sources in the National Mosaic and Multi-Sensor QPE (NMQ/Q2) system and develops an automated gauge QC scheme based on the consistency of radar hourly QPEs and gauge observations. Error characteristics of radar and gauge as a function of the radar sampling geometry, precipitation regime, and freezing level height are considered. The new scheme was evaluated by comparing an NMQ national gauge-based precipitation product with independent manual gauge observations. Twelve heavy rainfall events from different seasons and areas of the United States were selected for the evaluation, and the results show that the new NMQ product with QC'ed gauges has a more physically realistic spatial distribution than the old product and agrees much better statistically with the independent gauges.
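    As a toy illustration of a radar-gauge consistency check, the sketch below flags hourly gauge totals that disagree with the collocated radar QPE beyond absolute and relative tolerances. The operational NMQ/Q2 scheme additionally conditions the check on radar sampling geometry, precipitation regime and freezing-level height, which is not reproduced here; the thresholds are made up.

      # Hedged sketch: a simplistic radar-gauge consistency check; thresholds
      # and the form of the test are assumptions, not the NMQ/Q2 scheme.
      import numpy as np

      def qc_gauges(gauge_mm, radar_mm, abs_tol=2.0, rel_tol=0.5):
          """Return a boolean mask: True = gauge observation passes QC."""
          gauge_mm = np.asarray(gauge_mm, float)
          radar_mm = np.asarray(radar_mm, float)
          diff = np.abs(gauge_mm - radar_mm)
          scale = np.maximum(radar_mm, 1.0)           # avoid dividing by near-zero rain
          return (diff <= abs_tol) | (diff / scale <= rel_tol)

      print(qc_gauges([0.0, 5.1, 30.0], [0.2, 4.8, 6.0]))   # -> [ True  True False]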

  4. Controlling Tensegrity Robots through Evolution using Friction based Actuation

    NASA Technical Reports Server (NTRS)

    Kothapalli, Tejasvi; Agogino, Adrian K.

    2017-01-01

    Traditional robotic structures have limitations in planetary exploration as their rigid structural joints are prone to damage in new and rough terrains. In contrast, robots based on tensegrity structures, composed of rods and tensile cables, offer a highly robust, lightweight, and energy efficient alternative to traditional robots. In addition, tensegrity robots can be highly configurable by rearranging their topology of rods, cables and motors. However, these highly configurable tensegrity robots pose a significant challenge for locomotion due to their complexity. This study investigates a control pattern for successful locomotion in tensegrity robots through an evolutionary algorithm. A twelve-rod hardware model is rapidly prototyped to utilize a new actuation method based on friction. A web-based physics simulation is created to model the twelve-rod tensegrity ball structure. Square waves are used as control policies for the actuators of the tensegrity structure. Monte Carlo trials are run to find the most successful number of amplitudes for the square-wave control policy. From the results, an evolutionary algorithm is implemented to find the most optimized solution for locomotion of the twelve-rod tensegrity structure. The software pattern, coupled with the new friction-based actuation method, can serve as the basis for highly efficient tensegrity robots in space exploration.
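    As an illustration of the evolutionary-search pattern only, the sketch below evolves square-wave control parameters (one amplitude and phase per actuator) with mutation and truncation selection. The real work evaluates fitness as locomotion distance in the physics simulation of the twelve-rod structure; here the fitness function is a stand-in so the loop runs, and all sizes and rates are assumptions.

      # Hedged sketch: toy evolutionary search over square-wave control
      # parameters; the fitness function is a placeholder, not the simulator.
      import numpy as np

      rng = np.random.default_rng(0)
      N_ACT, POP, GENS = 6, 20, 30

      def fitness(policy):
          # Placeholder for "distance travelled by the tensegrity under this policy".
          amps, phases = policy[:N_ACT], policy[N_ACT:]
          return float(np.sum(amps * np.cos(phases)))     # hypothetical surrogate

      pop = rng.uniform(-1, 1, size=(POP, 2 * N_ACT))
      for _ in range(GENS):
          scores = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(scores)[-POP // 2:]]            # keep the best half
          children = parents + rng.normal(0, 0.1, parents.shape)   # mutate
          pop = np.vstack([parents, children])
      best = pop[np.argmax([fitness(p) for p in pop])]
      print("best fitness:", fitness(best))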

  5. micROS: a morphable, intelligent and collective robot operating system.

    PubMed

    Yang, Xuejun; Dai, Huadong; Yi, Xiaodong; Wang, Yanzhen; Yang, Shaowu; Zhang, Bo; Wang, Zhiyuan; Zhou, Yun; Peng, Xuefeng

    2016-01-01

    Robots are developing in much the same way that personal computers did 40 years ago, and the robot operating system is the critical foundation. Current robot software is mainly designed for individual robots. We present in this paper the design of micROS, a morphable, intelligent and collective robot operating system for future collective and collaborative robots. We first present the architecture of micROS, including the distributed architecture for the collective robot system as a whole and the layered architecture for every single node. We then present the design of autonomous behavior management based on the observe-orient-decide-act cognitive behavior model, and the design of collective intelligence, including collective perception, collective cognition, collective game and collective dynamics. We also give the design of morphable resource management, which first categorizes robot resources into physical, information, cognitive and social domains, and then achieves morphability based on self-adaptive software technology. We finally deploy micROS on NuBot football robots and achieve significant improvement in real-time performance.

  6. Improvement of the insertion axis for cochlear implantation with a robot-based system.

    PubMed

    Torres, Renato; Kazmitcheff, Guillaume; De Seta, Daniele; Ferrary, Evelyne; Sterkers, Olivier; Nguyen, Yann

    2017-02-01

    It has previously been reported that alignment of the insertion axis along the basal turn of the cochlea depends on the surgeon's experience. In this experimental study, we assessed technological assistance, such as navigation or a robot-based system, to improve the insertion axis during cochlear implantation. A preoperative cone beam CT and a mastoidectomy with a posterior tympanotomy were performed on four temporal bones. The optimal insertion axis was defined as the axis closest to the scala tympani centerline that avoids the facial nerve. A neuronavigation system, a robot assistance prototype, and software allowing semi-automated alignment of the robot were used to align an insertion tool with the optimal insertion axis. Four procedures were performed and repeated three times in each temporal bone: manual, manual navigation-assisted, robot-based navigation-assisted, and robot-based semi-automated. The angle between the optimal axis and the insertion tool axis was measured for the four procedures. The error was 8.3° ± 2.82° for the manual procedure (n = 24), 8.6° ± 2.83° for the manual navigation-assisted procedure (n = 24), 5.4° ± 3.91° for the robot-based navigation-assisted procedure (n = 24), and 3.4° ± 1.56° for the robot-based semi-automated procedure (n = 12). Higher accuracy was observed with the semi-automated robot-based technique than with the manual and manual navigation-assisted techniques (p < 0.01). Combining a navigation system with manual insertion does not improve alignment accuracy, owing to the lack of a user-friendly interface. In contrast, a semi-automated robot-based system reduces both the error and the variability of the alignment with respect to a defined optimal axis.

  7. Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment

    DTIC Science & Technology

    2017-06-01

    This Master's thesis presents a path planning coverage algorithm for a multi-robot system in a two-dimensional, grid-based environment and assesses the applicability of a topology-based approach. Related work cited in the thesis describes robots equipped with two planar laser range finders with a 180-degree field of view, a color camera, vision beacons, and a wireless communicator.

  8. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI.

    PubMed

    Chen, Xiaogang; Zhao, Bing; Wang, Yijun; Xu, Shengpu; Gao, Xiaorong

    2018-04-12

    Although robot technology has been successfully used to empower people who suffer from motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment, who do not have the motor control ability to move robots or prosthetic devices by manual control. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented using a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to directly control the robotic arm without system calibration. The online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e., 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, resulting in a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) were able to successfully complete the entire move-grasp-lift task without user training. These results demonstrated that an SSVEP-based BCI could provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
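    As an illustration of the detection principle only, below is a simplified sketch of SSVEP target detection by canonical correlation analysis against sinusoidal references, with a tiny two-band "filter bank" whose correlations are combined with fixed weights. The paper's 15-target FBCCA, its sub-band weighting, and the online protocol are not reproduced; all frequencies, bands, weights and the synthetic EEG are assumptions.

      # Hedged sketch: CCA-based SSVEP frequency detection with a minimal
      # filter bank; parameters are illustrative, not the paper's settings.
      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.cross_decomposition import CCA

      FS, T = 250, 2.0                      # sampling rate (Hz), window (s)
      t = np.arange(int(FS * T)) / FS
      stim_freqs = [8.0, 10.0, 12.0]        # candidate target frequencies

      def references(f, harmonics=2):
          refs = []
          for h in range(1, harmonics + 1):
              refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
          return np.column_stack(refs)

      def cca_corr(X, Y):
          u, v = CCA(n_components=1).fit_transform(X, Y)
          return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

      def detect(eeg):                       # eeg: samples x channels
          bands = [(6, 40), (14, 40)]        # hypothetical sub-bands
          weights = [1.0, 0.5]
          scores = []
          for f in stim_freqs:
              s = 0.0
              for (lo, hi), w in zip(bands, weights):
                  b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
                  s += w * cca_corr(filtfilt(b, a, eeg, axis=0), references(f)) ** 2
              scores.append(s)
          return stim_freqs[int(np.argmax(scores))]

      eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * np.random.randn(len(t), 8)
      print("detected target frequency:", detect(eeg))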

  9. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides new inspiration for mobile robot navigation since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in searching for a target. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of a microphone array. Furthermore, this paper presents a heading-direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
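    Purely as an illustration of heading-direction based steering, the sketch below turns the deviation between the measured and the desired heading into differential-drive velocity commands with a proportional gain; the gains, saturation value and slow-down rule are assumptions, not the paper's algorithm.

      # Hedged sketch: proportional heading correction for a differential robot;
      # values are illustrative only.
      import math

      def steer(current_heading, desired_heading, k_w=1.2, v_max=0.3):
          """Return (linear velocity, angular velocity)."""
          err = math.atan2(math.sin(desired_heading - current_heading),
                           math.cos(desired_heading - current_heading))  # wrap to [-pi, pi]
          w = k_w * err
          v = v_max * max(0.0, math.cos(err))   # slow down while turning sharply
          return v, w

      print(steer(math.radians(20), math.radians(75)))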

  10. Adaptive model-based assistive control for pneumatic direct driven soft rehabilitation robots.

    PubMed

    Wilkening, Andre; Ivlev, Oleg

    2013-06-01

    Assistive behavior and inherent compliance are assumed to be essential properties for effective robot-assisted therapy in neurological as well as orthopedic rehabilitation. This paper presents two adaptive model-based assistive controllers for pneumatic direct driven soft rehabilitation robots that are based on separate models of the soft-robot and the patient's extremity, in order to take into account the individual patient's behavior, effort and ability during control, which is assumed to be essential for relearning lost motor functions in neurological rehabilitation and facilitating muscle reconstruction in orthopedic rehabilitation. The high inherent compliance of soft actuators allows for general human-robot interaction and provides the basis for effective and dependable assistive control. An inverse model of the soft-robot with estimated parameters is used to achieve robot transparency during treatment, and inverse adaptive models of the individual patient's extremity allow the controllers to learn the individual patient's behavior and effort on-line and react in a way that assists the patient only as much as needed. The effectiveness of the controllers is evaluated with unimpaired subjects using a first prototype of a soft-robot for elbow training. Advantages and disadvantages of both controllers are analyzed and discussed.

  11. A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.

    PubMed

    Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu

    2015-09-01

    Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.

  12. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role in ensuring appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors (air velocity, temperature and differential pressure) was designed and built. The system consisted of a laptop, a data acquisition card, a multiplexer module and a set of 24 air temperature sensors, 24 air velocity sensors and two differential pressure sensors. The system was able to acquire up to a maximum of 128 signals simultaneously at 5 second intervals. The multisensor system was calibrated under laboratory conditions and was then evaluated in field tests. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections within the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among temperature, air velocity and pressure sensors and a high goodness of fit (R2 = 0.99) with the reference. Under field test conditions, the multisensor system handled a high number of input signals from different locations with minimal internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system was able to integrate calibrated sensors of temperature, air velocity and differential pressure and operated successfully under different conditions in a mechanically-ventilated broiler farm. This system can be used to obtain quasi-instantaneous fields of air velocity and temperature, as well as differential pressure maps, to assess the design and functioning of the ventilation system, and as a verification and validation (V&V) system for Computational Fluid Dynamics (CFD) simulations in poultry farms. PMID:22778611

  13. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    DTIC Science & Technology

    2015-08-14

    This project conducted theoretical research on the mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems, pursuing independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices and their operational constraints.

  14. Micro-aerial vehicle type wall-climbing robot mechanism for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Shin, Jae-Uk; Kim, Donghoon; Kim, Jong-Heon; Myung, Hyun

    2013-04-01

    Currently, the maintenance and inspection of large structures are labor-intensive, incurring high costs for professional staff and considerable risk in hard-to-reach areas. To address this problem, wall-climbing robots have emerged. Infrastructure-based wall-climbing robots for maintaining building exteriors offer high payload and safety. However, the required infrastructure must be installed on the target structure, and architects tend to reject it because it can damage the structure's exterior; for these reasons, infrastructure-based wall-climbing robots are often avoided. Non-infrastructure-based wall-climbing robots have been researched to overcome these problems, but most remain at the laboratory level because their payload, safety and maneuverability are not yet satisfactory. For this reason, an aerial-vehicle-type wall-climbing robot is investigated here: a wall-climbing robot capable of flight, based on a quadrotor, a well-known aerial vehicle that uses four rotors to generate thrust. This robot can stick to a vertical wall using the thrust and, after attaching to the wall, can move with four wheels installed on the robot. As a result, it has high maneuverability and safety, since it can restore its position on the wall even if it is detached by an unexpected disturbance while climbing. The feasibility of the main concept was verified through simulations and experiments using a prototype.

  15. Event-Based Control Strategy for Mobile Robots in Wireless Environments.

    PubMed

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-12-02

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked over classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution has a higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy.

  16. Event-Based Control Strategy for Mobile Robots in Wireless Environments

    PubMed Central

    Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto

    2015-01-01

    In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to interchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked over classical navigation algorithms, like wall following and obstacle avoidance, using scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution has a higher efficiency in communication resource usage than the classical discrete-time strategy with the same accuracy. PMID:26633412
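    As an illustration of the generic event-based pattern only, the sketch below implements a send-on-delta trigger: a robot transmits its state over the wireless link only when it has drifted more than a threshold from the last transmitted value. The paper's specific triggering condition and thresholds are not given here; the values are assumptions.

      # Hedged sketch: send-on-delta event trigger for robot-to-controller
      # communication; threshold and norm are illustrative choices.
      import numpy as np

      class EventTrigger:
          def __init__(self, threshold=0.05):
              self.threshold = threshold
              self.last_sent = None

          def should_send(self, state):
              state = np.asarray(state, float)
              if self.last_sent is None or np.linalg.norm(state - self.last_sent) > self.threshold:
                  self.last_sent = state
                  return True
              return False

      trig = EventTrigger(threshold=0.05)
      sent = sum(trig.should_send([0.001 * k, 0.0]) for k in range(1000))
      print("transmissions instead of 1000 periodic samples:", sent)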

  17. Navigating the pathway to robotic competency in general thoracic surgery.

    PubMed

    Seder, Christopher W; Cassivi, Stephen D; Wigle, Dennis A

    2013-01-01

    Although robotic technology has addressed many of the limitations of traditional videoscopic surgery, robotic surgery has not gained widespread acceptance in the general thoracic community. We report our initial robotic surgery experience and propose a structured, competency-based pathway for the development of robotic skills. Between December 2008 and February 2012, a total of 79 robot-assisted pulmonary, mediastinal, benign esophageal, or diaphragmatic procedures were performed. Data on patient characteristics and perioperative outcomes were retrospectively collected and analyzed. During the study period, one surgeon and three residents participated in a triphasic, competency-based pathway designed to teach robotic skills. The pathway consisted of individual preclinical learning followed by mentored preclinical exercises and progressive clinical responsibility. The robot-assisted procedures performed included lung resection (n = 38), mediastinal mass resection (n = 19), hiatal or paraesophageal hernia repair (n = 12), and Heller myotomy (n = 7), among others (n = 3). There were no perioperative mortalities, with a 20% complication rate and a 3% readmission rate. Conversion to a thoracoscopic or open approach was required in eight pulmonary resections to facilitate dissection (six) or to control hemorrhage (two). Fewer major perioperative complications were observed in the later half of the experience. All residents who participated in the thoracic surgery robotic pathway perform robot-assisted procedures as part of their clinical practice. Robot-assisted thoracic surgery can be safely learned when skill acquisition is guided by a structured, competency-based pathway.

  18. Space robotics in Japan

    NASA Technical Reports Server (NTRS)

    Whittaker, William; Lowrie, James W.; Mccain, Harry; Bejczy, Antal; Sheridan, Tom; Kanade, Takeo; Allen, Peter

    1994-01-01

    Japan has been one of the most successful countries in the world in the realm of terrestrial robot applications. The panel found that Japan has in place a broad base of robotics research and development, ranging from components to working systems for manufacturing, construction, and human service industries. From this base, Japan looks to the use of robotics in space applications and has funded work in space robotics since the mid-1980's. The Japanese are focusing on a clear image of what they hope to achieve through three objectives for the 1990's: developing long-reach manipulation for tending experiments on Space Station Freedom, capturing satellites using a free-flying manipulator, and surveying part of the moon with a mobile robot. This focus and a sound robotics infrastructure is enabling the young Japanese space program to develop relevant systems for extraterrestrial robotics applications.

  19. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, waypoint navigation and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.

  20. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  1. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.

  2. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints

    PubMed Central

    Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.

    2015-01-01

    Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575

  3. Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter.

    PubMed

    Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei

    2016-11-02

    Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two state propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternatively reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault tolerant fusion framework provides superior performance over its traditional counterparts.

  4. Multi-Sensor Fusion with Interaction Multiple Model and Chi-Square Test Tolerant Filter

    PubMed Central

    Yang, Chun; Mohammadi, Arash; Chen, Qing-Wei

    2016-01-01

    Motivated by the key importance of multi-sensor information fusion algorithms in state-of-the-art integrated navigation systems due to recent advancements in sensor technologies, telecommunication, and navigation systems, the paper proposes an improved and innovative fault-tolerant fusion framework. An integrated navigation system is considered consisting of four sensory sub-systems, i.e., Strap-down Inertial Navigation System (SINS), Global Positioning System (GPS), Bei-Dou2 (BD2) and Celestial Navigation System (CNS) navigation sensors. In such multi-sensor applications, on the one hand, the design of an efficient fusion methodology is extremely constrained, especially when no information regarding the system's error characteristics is available. On the other hand, the development of an accurate fault detection and integrity monitoring solution is both challenging and critical. The paper addresses the sensitivity issues of conventional fault detection solutions and the unavailability of a precisely known system model by jointly designing fault detection and information fusion algorithms. In particular, by using ideas from Interacting Multiple Model (IMM) filters, the uncertainty of the system is adjusted adaptively by model probabilities and using the proposed fuzzy-based fusion framework. The paper also addresses the problem of using corrupted measurements for fault detection purposes by designing a two state propagator chi-square test jointly with the fusion algorithm. Two IMM predictors, running in parallel, are used and alternatively reactivated based on the information received from the fusion filter to increase the reliability and accuracy of the proposed detection solution. With the combination of the IMM and the proposed fusion method, we increase the failure sensitivity of the detection system and, thereby, significantly increase the overall reliability and accuracy of the integrated navigation system. Simulation results indicate that the proposed fault tolerant fusion framework provides superior performance over its traditional counterparts. PMID:27827832
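    As an illustration of the chi-square residual test commonly used for measurement fault detection in integrated navigation, the sketch below compares the normalized innovation squared with a chi-square quantile. The predicted measurement and its covariance would come from the filter or IMM predictor; the values below are synthetic and the 0.99 confidence level is an assumption, not the paper's setting.

      # Hedged sketch: chi-square innovation test for flagging faulty measurements.
      import numpy as np
      from scipy.stats import chi2

      def chi_square_test(z, z_pred, S, alpha=0.99):
          """Return (is_fault, statistic): flag measurement z if the normalized
          innovation squared exceeds the chi-square threshold."""
          nu = np.asarray(z, float) - np.asarray(z_pred, float)
          d2 = float(nu @ np.linalg.solve(S, nu))
          return d2 > chi2.ppf(alpha, df=len(nu)), d2

      S = np.diag([4.0, 4.0, 9.0])                                   # innovation covariance
      print(chi_square_test([1.0, -2.0, 3.0], [0.0, 0.0, 0.0], S))   # consistent
      print(chi_square_test([15.0, 0.0, 0.0], [0.0, 0.0, 0.0], S))   # likely flagged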

  5. Measuring short-term post-fire forest recovery across a burn severity gradient in a mixed pine-oak forest using multi-sensor remote sensing techniques

    DOE PAGES

    Meng, Ran; Wu, Jin; Zhao, Feng; ...

    2018-06-01

    Understanding post-fire forest recovery is pivotal to the study of forest dynamics and the global carbon cycle. Field-based studies have indicated a convex response of forest recovery rate to burn severity at the individual tree level, related to fire-induced tree mortality; however, these findings were constrained in spatial and temporal extent and were not detectable by traditional optical remote sensing studies, largely because of contamination from understory recovery. In this work, we examined whether the combined use of multi-sensor remote sensing techniques (i.e., 1 m simultaneous airborne imaging spectroscopy and LiDAR, and 2 m satellite multi-spectral imagery) to separate canopy recovery from understory recovery would enable quantification of post-fire forest recovery rates spanning a large gradient in burn severity over large scales. Our study was conducted in a mixed pine-oak forest in Long Island, NY, three years after a top-killing fire. We remotely detected an initial increase and then a decline of forest recovery rate with burn severity across the burned area, with a maximum canopy area-based recovery rate of 10% per year at the moderate forest burn severity class. More intriguingly, such remotely detected convex relationships also held at the species level, with pine trees being more resilient to high burn severity and having a higher maximum recovery rate (12% per year) than oak trees (4% per year). These results provide some of the first quantitative evidence of the effects of fire-adaptive strategies on post-fire forest recovery derived from relatively large spatial-temporal domains. Our study thus provides a methodological advance, linking multi-sensor remote sensing techniques to monitor forest dynamics in a spatially explicit manner over large scales, with important implications for fire-related forest management and for constraining/benchmarking fire effect schemes in ecological process models.

  6. Measuring short-term post-fire forest recovery across a burn severity gradient in a mixed pine-oak forest using multi-sensor remote sensing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ran; Wu, Jin; Zhao, Feng

    Understanding post-fire forest recovery is pivotal to the study of forest dynamics and the global carbon cycle. Field-based studies have indicated a convex response of forest recovery rate to burn severity at the individual tree level, related to fire-induced tree mortality; however, these findings were constrained in spatial and temporal extent and were not detectable by traditional optical remote sensing studies, largely because of contamination from understory recovery. In this work, we examined whether the combined use of multi-sensor remote sensing techniques (i.e., 1 m simultaneous airborne imaging spectroscopy and LiDAR, and 2 m satellite multi-spectral imagery) to separate canopy recovery from understory recovery would enable quantification of post-fire forest recovery rates spanning a large gradient in burn severity over large scales. Our study was conducted in a mixed pine-oak forest in Long Island, NY, three years after a top-killing fire. We remotely detected an initial increase and then a decline of forest recovery rate with burn severity across the burned area, with a maximum canopy area-based recovery rate of 10% per year at the moderate forest burn severity class. More intriguingly, such remotely detected convex relationships also held at the species level, with pine trees being more resilient to high burn severity and having a higher maximum recovery rate (12% per year) than oak trees (4% per year). These results provide some of the first quantitative evidence of the effects of fire-adaptive strategies on post-fire forest recovery derived from relatively large spatial-temporal domains. Our study thus provides a methodological advance, linking multi-sensor remote sensing techniques to monitor forest dynamics in a spatially explicit manner over large scales, with important implications for fire-related forest management and for constraining/benchmarking fire effect schemes in ecological process models.

  7. Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration

    PubMed Central

    Badura, Pawel; Juszczyk, Jan; Pietka, Ewa

    2016-01-01

    Purpose A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, seems to be crucial for the development of computer aided diagnosis and therapy. Surface tracking systems based on optical trackers already play an important role in surgical procedure planning. However, new modalities, like time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become a part of computer aided surgery set-ups. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed. Methods We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology comprises: optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and a registration technique. The data pre-processing yields a surface, in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point algorithm. Results The experiments validate the registration of each pair of modalities/sensors on phantoms of four different human organs in terms of the Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, whereas the worst was obtained for experiments involving the ToF-camera. Conclusion The obtained accuracies encourage further development of multi-sensor systems. The presented discussion of the system limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer aided surgery developers. PMID:27434396
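    Since the registration relies on the Iterative Closest Point algorithm, here is a bare-bones point-to-point ICP sketch for orientation: closest-point matching with a k-d tree and a least-squares rigid alignment per iteration. Real pipelines add calibration, outlier rejection and multi-sensor weighting, none of which is shown; the data below are synthetic.

      # Hedged sketch: minimal point-to-point ICP (closest-point matching + Kabsch).
      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation/translation mapping src onto dst (Kabsch)."""
          cs, cd = src.mean(0), dst.mean(0)
          U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:            # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(source, target, iters=30):
          tree = cKDTree(target)
          src = source.copy()
          R_total, t_total = np.eye(3), np.zeros(3)
          for _ in range(iters):
              _, idx = tree.query(src)                    # closest-point matches
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
              R_total, t_total = R @ R_total, R @ t_total + t
          return R_total, t_total

      target = np.random.rand(500, 3)
      true_R = np.array([[0.995, -0.0998, 0], [0.0998, 0.995, 0], [0, 0, 1]])  # ~5.7 deg yaw
      source = (target - 0.02) @ true_R.T
      R_est, t_est = icp(source, target)
      print("recovered rotation:\n", R_est)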

  8. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi-body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its compensation method.

  9. Robotic Transnasal Endoscopic Skull Base Surgery: Systematic Review of the Literature and Report of a Novel Prototype for a Hybrid System (Brescia Endoscope Assistant Robotic Holder).

    PubMed

    Bolzoni Villaret, Andrea; Doglietto, Francesco; Carobbio, Andrea; Schreiber, Alberto; Panni, Camilla; Piantoni, Enrico; Guida, Giovanni; Fontanella, Marco Maria; Nicolai, Piero; Cassinis, Riccardo

    2017-09-01

    Although robotics has already been applied to several surgical fields, available systems are not designed for endoscopic skull base surgery (ESBS). Newly conceived prototypes have recently been described for ESBS. The aim of this study was to provide a systematic literature review of robotics for ESBS and to describe a novel prototype developed at the University of Brescia. PubMed and Scopus databases were searched using a combination of terms, including Robotics OR Robot and Surgery OR Otolaryngology OR Skull Base OR Holder. The retrieved papers were analyzed, recording the following features: interface, tools under robotic control, force feedback, safety systems, setup time, and operative time. A novel hybrid robotic system has been developed and tested in a preclinical setting at the University of Brescia, using an industrial manipulator and readily available off-the-shelf components. A total of 11 robotic prototypes for ESBS were identified. Almost all prototypes present difficult emergency management as one of their main limitations. The Brescia Endoscope Assistant Robotic holder has proven the feasibility of intuitive robotic movement driven by the surgeon's head position: a 6-degree-of-freedom sensor was used, and two light sources were added to glasses so that they could be recognized by a commercially available sensor. Robotic system prototypes designed for ESBS and reported in the literature still present significant technical limitations. Hybrid robot assistance has huge potential and might soon be feasible in ESBS. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. A Motion Planning Approach to Automatic Obstacle Avoidance during Concentric Tube Robot Teleoperation.

    PubMed

    Torres, Luis G; Kuntz, Alan; Gilbert, Hunter B; Swaney, Philip J; Hendrick, Richard J; Webster, Robert J; Alterovitz, Ron

    2015-05-01

    Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot's shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot's tip. However, the robot's unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot's shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles.
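    The collision avoidance described above rests on a precomputed roadmap of collision-free configurations. As an illustration of that general pattern only, the sketch below builds a probabilistic roadmap (PRM) in a toy 2D configuration space with a single circular obstacle and queries it for a shortest path; the collision checker, sampling counts and obstacle are assumptions and bear no relation to the concentric-tube kinematics or anatomical models used in the paper.

      # Hedged sketch: probabilistic roadmap (sample, connect k-nearest with
      # collision-free edges, query shortest path); toy 2D example only.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      OBSTACLES = [((0.5, 0.5), 0.2)]                      # (center, radius), assumed

      def collision_free(q):
          return all(np.linalg.norm(q - np.array(c)) > r for c, r in OBSTACLES)

      def edge_free(a, b, steps=20):
          return all(collision_free(a + s * (b - a)) for s in np.linspace(0, 1, steps))

      def build_prm(n_samples=300, k=8):
          nodes = [q for q in rng.random((n_samples, 2)) if collision_free(q)]
          G = nx.Graph()
          for i, q in enumerate(nodes):
              G.add_node(i, q=q)
          for i, q in enumerate(nodes):
              dists = [np.linalg.norm(q - p) for p in nodes]
              for j in np.argsort(dists)[1:k + 1]:
                  if edge_free(q, nodes[j]):
                      G.add_edge(i, int(j), weight=dists[j])
          return G, nodes

      G, nodes = build_prm()
      start = min(G.nodes, key=lambda i: np.linalg.norm(nodes[i] - [0.05, 0.05]))
      goal = min(G.nodes, key=lambda i: np.linalg.norm(nodes[i] - [0.95, 0.95]))
      path = nx.shortest_path(G, start, goal, weight="weight")
      print("roadmap path through", len(path), "configurations")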

  11. A robotic orbital emulator with lidar-based SLAM and AMCL for multiple entity pose estimation

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Xiang, Xingyu; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2018-05-01

    This paper revises and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can produce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic arm motions. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The planar movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball that travels along a rod (robotic arm) attached to the robot. Lidar-only measurements are used to estimate the pose information of the multiple robots. SLAM (simultaneous localization and mapping) runs on one robot to generate the map and compute that robot's pose. Based on the SLAM map maintained by this robot, the other robots run the adaptive Monte Carlo localization (AMCL) method to estimate their poses. A controller is designed to guide the robot to follow a given orbit, and the controllability is analyzed using a feedback linearization method. Experiments are conducted to show the convergence of AMCL and the orbit tracking performance.

  12. Design of the arm-wrestling robot's force acquisition system based on Qt

    NASA Astrophysics Data System (ADS)

    Huo, Zhixiang; Chen, Feng; Wang, Yongtao

    2017-03-01

    As a robot that combines entertainment and medical rehabilitation, the arm-wrestling robot is of great research significance. In order to collect the arm-wrestling robot's force signals, the design and implementation of the robot's force acquisition system are introduced in this paper. The system is based on an MP4221 data acquisition card and is programmed with Qt. It successfully collects the analog signals on a PC. The interface of the system is simple and its real-time performance is good. Test results show the feasibility of the system for the arm-wrestling robot.

  13. Unified Approach To Control Of Motions Of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Improved computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Present scheme similar to one described in "Coordinated Control of Mobile Robotic Manipulators" (NPO-19109). Both schemes based on configuration-control formalism. Present one incorporates explicit distinction between holonomic and nonholonomic constraints. Several other prior articles in NASA Tech Briefs discussed aspects of configuration-control formalism. These include "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes with Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  14. A mobile robots experimental environment with event-based wireless communication.

    PubMed

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-07-22

    An experimental platform to communicate between a set of mobile robots through a wireless network has been developed. The mobile robots obtain their positions through a camera that acts as a sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.

  15. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots

    PubMed Central

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-01-01

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height of obstacles, obtained from the RGB-D depth data, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed. PMID:29186843
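
    A minimal sketch of the 2.5D mapping idea described above (assumed details, not the authors' implementation): obstacle points recovered from the RGB-D depth data are projected onto a 2D grid, and each cell stores the maximum observed height.

      import numpy as np

      def build_elevation_map(points_xyz, resolution=0.05, size=200):
          """points_xyz: (N, 3) obstacle points in the map frame (metres).
          Returns a size x size grid of cell heights (0 where nothing was seen)."""
          grid = np.zeros((size, size))
          cells = np.floor(points_xyz[:, :2] / resolution).astype(int) + size // 2
          valid = (cells >= 0).all(axis=1) & (cells < size).all(axis=1)
          for (i, j), z in zip(cells[valid], points_xyz[valid, 2]):
              grid[i, j] = max(grid[i, j], z)   # keep the tallest obstacle per cell
          return grid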

  16. The Relationship between Robot's Nonverbal Behaviour and Human's Likability Based on Human's Personality.

    PubMed

    Thepsoonthorn, Chidchanok; Ogawa, Ken-Ichiro; Miyake, Yoshihiro

    2018-05-30

    Although robotics technology has developed immensely, people's uncertainty about engaging fully in human-robot interaction is still growing. Many recent studies have therefore considered human factors that might influence likability, such as personality, and found that compatibility between the human's and the robot's personality (expressions of personality characteristics) can enhance likability. However, it is still unclear whether specific means and strategies of robot nonverbal behaviour enhance likability for humans with different personality traits, and whether there is a relationship between a robot's nonverbal behaviour and a human's likability based on the human's personality. In this study, we investigated the interaction via gaze and head nodding behaviours (mutual gaze convergence and head nodding synchrony) between introvert/extravert participants and a robot under two communication strategies (backchanneling and turn-taking). Our findings reveal that introvert participants are positively affected by backchanneling in the robot's head nodding behaviour, which results in substantial head nodding synchrony, whereas extravert participants are positively influenced by turn-taking in gaze behaviour, which leads to significant mutual gaze convergence. This study demonstrates that there is a relationship between a robot's nonverbal behaviour and a human's likability based on the human's personality.

  17. Model learning for robot control: a survey.

    PubMed

    Nguyen-Tuong, Duy; Peters, Jan

    2011-11-01

    Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and of controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control at both the kinematic and the dynamic level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions for real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
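
    To make the surveyed idea concrete, the hedged sketch below learns a simple forward dynamics model by ridge regression, mapping the current state and applied torque to the next state; the linear-in-features form and the variable names are illustrative assumptions, not a method from the paper.

      import numpy as np

      def fit_forward_model(states, torques, next_states, reg=1e-3):
          """states: (N, d), torques: (N, m), next_states: (N, d).
          Returns W so that next_state ~= [state, torque, 1] @ W."""
          X = np.hstack([states, torques, np.ones((len(states), 1))])
          W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ next_states)
          return W

      def predict_next(W, state, torque):
          return np.concatenate([state, torque, [1.0]]) @ W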

  18. A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots.

    PubMed

    Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im

    2017-11-25

    Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red Green and Blue-Depth) sensor and a 2D laser sensor attached to an aerial robot. The 2.5D elevation map is formed by projecting the height of obstacles, obtained from the RGB-D depth data, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed.

  19. The Academic Differences between Students Involved in School-Based Robotics Programs and Students Not Involved in School-Based Robotics Programs

    ERIC Educational Resources Information Center

    Koumoullos, Michael

    2013-01-01

    This research study aimed to identify any correlation between participation in afterschool robotics at the high school level and academic performance. Through a sample of N = 121 students, the researcher examined the grades and attendance of students who participated in a robotics program in the 2011-2012 school year. The academic record of these…

  20. IMU-based online kinematic calibration of robot manipulator.

    PubMed

    Du, Guanglong; Zhang, Ping

    2013-01-01

    Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
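
    A hedged sketch of the parameter-error estimation step (illustrative only; the Jacobian H, noise covariance R, and covariance P are assumed inputs, not the paper's actual models): treating the orientation residual as a linear function of the kinematic parameter errors, one Kalman-style measurement update refines the error estimate.

      import numpy as np

      def ekf_parameter_update(delta_params, P, residual, H, R):
          """delta_params: (n,) kinematic parameter error estimate; P: (n, n) covariance.
          residual: (m,) orientation error; H: (m, n) Jacobian of the predicted
          orientation with respect to the parameters; R: (m, m) measurement noise."""
          S = H @ P @ H.T + R                     # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
          delta_params = delta_params + K @ (residual - H @ delta_params)
          P = (np.eye(len(delta_params)) - K @ H) @ P
          return delta_params, P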

  1. A learning-based semi-autonomous controller for robotic exploration of unknown disaster scenes while searching for victims.

    PubMed

    Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie

    2014-12-01

    Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share such tasks as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.

  2. Construction typification as the tool for optimizing the functioning of a robotized manufacturing system

    NASA Astrophysics Data System (ADS)

    Gwiazda, A.; Banas, W.; Sekala, A.; Foit, K.; Hryniewicz, P.; Kost, G.

    2015-11-01

    The process of workcell design is limited by different constructional requirements. They are related to the technological parameters of the manufactured element, to the specifications of purchased workcell elements, and to the technical characteristics of the workcell scene. This shows the complexity of the design-construction process itself. The result of such an approach is an individually designed workcell, suited to a specific location and a specific production cycle; changing these parameters means rebuilding the whole configuration of the workcell. Taking this into consideration, it is important to elaborate a base of typical elements of a robot kinematic chain that can be used as a tool for building workcells. Virtual modelling of the kinematic chains of industrial robots requires several preparatory phases. Firstly, it is important to create a database of elements that model industrial robot arms. These models can be described as functional primitives that represent the components of kinematic pairs and the structural members of industrial robots. A database with the following elements is created: a base of kinematic pairs, a base of robot structural elements, and a base of robot work scenes. The first of these databases includes kinematic pairs, the key components of the manipulator actuator modules. As mentioned previously, it includes rotary kinematic pairs of the fifth class, chosen because this type of kinematic pair occurs most frequently in the structures of industrial robots. The second base consists of robot structural elements and therefore allows schematic structures of kinematic chains to be converted into the structural elements of industrial robot arms. It contains, inter alia, structural elements such as the base and stiff members (simple or angular units), which allow the recorded schematics to be converted into three-dimensional elements. The last database is a database of scenes. It includes both simple and complex elements: models of technological equipment, conveyor models, obstacle models, and the like. Using these elements, various production spaces (robotized workcells) can be formed, in which it is possible to virtually track the operation of an industrial robot arm modelled in the system.

  3. The academic differences between students involved in school-based robotics programs and students not involved in school-based robotics programs

    NASA Astrophysics Data System (ADS)

    Koumoullos, Michael

    This research study aimed to identify any correlation between participation in afterschool robotics at the high school level and academic performance. Through a sample of N=121 students, the researcher examined the grades and attendance of students who participated in a robotics program in the 2011-2012 school year. The academic record of these students was compared to a group of students who were members of school-based sports teams and to a group of students who were not part of either of the first two groups. Academic record was defined as overall GPA, English grade, mathematics grade, mathematics-based standardized state exam scores, and attendance rates. All of the participants of this study were students in a large, urban career and technical education high school. As STEM (Science, Technology, Engineering, and Mathematics) has come to the forefront of educational focus, robotics programs have grown in quantity. Starting robotics programs requires a serious commitment of time, money, and other resources, and the benefits of such programs have not been well analyzed. This research study had three major goals: to identify the academic characteristics of students who are drawn to robotics programs, to identify the academic impact of the robotics program during the robotics season, and to identify the academic impact of the robotics program at the end of the school year. The study was non-experimental. The researcher ran MANOVAs, repeated-measures analyses, an ANOVA, and descriptive statistics to analyze the data. The data showed that students drawn to robotics were academically stronger than students who did not participate in robotics. The data also showed that grades and attendance did not significantly improve or degrade either during the robotics season or at year-end. These findings are significant because they show that robotics programs attract students who are academically strong. This information can be very useful in high school articulation programs. These findings also show that robotics programs can be an educational activity for academically strong students. Further, they show that participation in such programs does not distract students from their academic focus.

  4. Memristive device based learning for navigation in robots.

    PubMed

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities are difficult to achieve in robots, especially if they must run in real time with ultra-low energy consumption. Here, we present a novel memristive-device-based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities to a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement learning based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  5. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  6. Advanced Integrated Multi-sensor Surveillance (AIMS). Mission, Function, Task Analysis

    DTIC Science & Technology

    2007-06-01

    flaps, elevators and rudder control surfaces are based on conventional mechanical systems, using dual hydraulic boosters. Trim tabs are provided for... dumping the solid waste overboard it is difficult to determine its source. When an oil slick has been detected, the crew attempts to discover the...NAVCOM advises helicopter of on-scene weather, elevation, flight conditions and salient terrain features which may impact hoisting requirements

  7. Study on Parameter Identification of Assembly Robot based on Screw Theory

    NASA Astrophysics Data System (ADS)

    Yun, Shi; Xiaodong, Zhang

    2017-11-01

    The kinematic model of an assembly robot is one of the most important factors affecting repetitive precision. In order to improve positioning accuracy, this paper first establishes the exponential product model of the ER16-1600 assembly robot on the basis of screw theory, and then identifies the robot's kinematic parameters using an iterative least-squares method. Comparison of experiments before and after calibration shows that the method clearly improves the positioning accuracy of the assembly robot.
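
    The sketch below is an illustrative Gauss-Newton style iterative least-squares identification loop under assumed interfaces (fk is a user-supplied forward-kinematics function, e.g. built from the exponential product model); it is not the authors' code.

      import numpy as np

      def identify_parameters(params, joint_angles, measured_poses, fk, iters=20, eps=1e-6):
          """Refine kinematic parameters so fk(params, q) matches the measured poses."""
          for _ in range(iters):
              residuals, jac_blocks = [], []
              for q, y in zip(joint_angles, measured_poses):
                  r = y - fk(params, q)
                  # Numerical Jacobian of the forward kinematics w.r.t. the parameters.
                  J = np.array([(fk(params + eps * e, q) - fk(params, q)) / eps
                                for e in np.eye(len(params))]).T
                  residuals.append(r)
                  jac_blocks.append(J)
              r_all, J_all = np.concatenate(residuals), np.vstack(jac_blocks)
              params = params + np.linalg.lstsq(J_all, r_all, rcond=None)[0]  # Gauss-Newton step
          return params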

  8. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  9. i-SAIRAS '90; Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation in Space, Kobe, Japan, Nov. 18-20, 1990

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The present conference on artificial intelligence (AI), robotics, and automation in space encompasses robot systems, lunar and planetary robots, advanced processing, expert systems, knowledge bases, issues of operation and management, manipulator control, and on-orbit service. Specific issues addressed include fundamental research in AI at NASA, the FTS dexterous telerobot, a target-capture experiment by a free-flying robot, the NASA Planetary Rover Program, the Katydid system for compiling KEE applications to Ada, and speech recognition for robots. Also addressed are a knowledge base for real-time diagnosis, a pilot-in-the-loop simulation of an orbital docking maneuver, intelligent perturbation algorithms for space scheduling optimization, a fuzzy control method for a space manipulator system, hyperredundant manipulator applications, robotic servicing of EOS instruments, and a summary of astronaut inputs on automation and robotics for the Space Station Freedom.

  10. Cooperative Environment Scans Based on a Multi-Robot System

    PubMed Central

    Kwon, Ji-Wook

    2015-01-01

    This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots with low performance range finders provide obstacle information. Even though each child robot provides approximated and limited information of the obstacles, CESS replaces the single LRF, which has a high cost, because much of the information is acquired and accumulated by a number of the child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. To show the performance of the proposed system and compare this with the numerical models of the commercialized 2D and 3D laser scanners, simulation results are included. PMID:25789491

  11. Knowledge based systems for intelligent robotics

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1982-01-01

    It is pointed out that the construction of large space platforms, such as space stations, has to be carried out in the outer space environment. As it is extremely expensive to support human workers in space for large periods, the only feasible solution appears to be related to the development and deployment of highly capable robots for most of the tasks. Robots for space applications will have to possess characteristics which are very different from those needed by robots in industry. The present investigation is concerned with the needs of space robotics and the technologies which can be of assistance to meet these needs, giving particular attention to knowledge bases. 'Intelligent' robots are required for the solution of arising problems. The collection of facts and rules needed for accomplishing such solutions form the 'knowledge base' of the system.

  12. Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, W.J.; Chun, W.H.

    1990-01-01

    The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.

  13. Autonomous Motion Learning for Intra-Vehicular Activity Space Robot

    NASA Astrophysics Data System (ADS)

    Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo

    Space robots will be needed in future space missions. So far, many types of space robots have been developed; in particular, Intra-Vehicular Activity (IVA) space robots that support human activities should be developed to reduce human risks in space. In this paper, we study a motion learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of the multi-link mechanism and contact forces from the walls, as in an astronaut's space walk, rather than using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach in simulation using a 5-link space robot model. First, we simulate the robot learning motion control, including a contact phase, in the two-dimensional case. Next, we simulate the robot learning motion control while changing its base attitude in the three-dimensional case.
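
    For illustration only (table sizes, learning rates, and the softmax policy are assumptions, not the paper's controller), the sketch below shows the basic actor-critic update the abstract refers to: a temporal-difference error from the critic drives both the value update and the policy-preference update.

      import numpy as np

      def actor_critic_step(V, prefs, s, a, r, s_next, alpha_v=0.1, alpha_p=0.05, gamma=0.95):
          """V: (n_states,) critic values; prefs: (n_states, n_actions) actor preferences."""
          td_error = r + gamma * V[s_next] - V[s]   # critic's temporal-difference error
          V[s] += alpha_v * td_error                # critic update
          prefs[s, a] += alpha_p * td_error         # actor update
          return td_error

      def softmax_policy(prefs, s):
          z = np.exp(prefs[s] - np.max(prefs[s]))
          return z / z.sum()                        # action probabilities for state s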

  14. A CLIPS-based expert system for the evaluation and selection of robots

    NASA Technical Reports Server (NTRS)

    Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.

    1994-01-01

    This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.

  15. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    PubMed

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the animal's own characteristics in navigation. This paper proposes a new method for bio-robots' automatic navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal. Given graded electrical rewards, the animal (e.g., a rat) seeks to maximize the reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route through co-learning. This work provides significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
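
    A minimal sketch of using a graded (continuous-valued) reward inside a standard tabular Q-learning update; all names and values are illustrative assumptions rather than the paper's algorithm, which co-learns with a live animal.

      import numpy as np

      def q_update(Q, state, action, graded_reward, next_state, alpha=0.1, gamma=0.9):
          """Q: (n_states, n_actions) table; graded_reward: a real-valued reward level."""
          td_target = graded_reward + gamma * np.max(Q[next_state])
          Q[state, action] += alpha * (td_target - Q[state, action])
          return Q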

  16. Method and apparatus for automatic control of a humanoid robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.
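
    As a hedged illustration of an object-level impedance law of the general kind described (gains and dimensions are assumptions, not the patented framework), a pose error and a velocity error are mapped to a commanded wrench.

      import numpy as np

      def impedance_wrench(x_des, x, v_des, v, K=None, D=None):
          """x_des, x: desired and actual 6-DOF pose coordinates; v_des, v: twists.
          Returns the commanded 6-DOF wrench F = K (x_des - x) + D (v_des - v)."""
          K = np.diag([300.0] * 3 + [30.0] * 3) if K is None else K   # stiffness (assumed values)
          D = np.diag([40.0] * 3 + [4.0] * 3) if D is None else D     # damping (assumed values)
          return K @ (x_des - x) + D @ (v_des - v)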

  17. A Demonstrator Intelligent Scheduler For Sensor-Based Robots

    NASA Astrophysics Data System (ADS)

    Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.

    1987-10-01

    The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language that allows the assembly task to be defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts-presentation environment, making full use of the sensory detail available on the robot.

  18. The Three Laws of Neurorobotics: A Review on What Neurorehabilitation Robots Should Do for Patients and Clinicians.

    PubMed

    Iosa, Marco; Morone, Giovanni; Cherubini, Andrea; Paolucci, Stefano

    Most studies and reviews on robots for neurorehabilitation focus on their effectiveness. These studies often report inconsistent results. This and many other reasons limit the credit given to these robots by therapists and patients. Further, neurorehabilitation is often still based on therapists' expertise, with competition among different schools of thought, generating substantial uncertainty about what exactly a neurorehabilitation robot should do. Little attention has been given to ethics. This review adopts a new approach, inspired by Asimov's three laws of robotics and based on the most recent studies in neurorobotics, for proposing new guidelines for designing and using robots for neurorehabilitation. We propose three laws of neurorobotics based on the ethical need for safe and effective robots, the redefinition of their role as therapist helpers, and the need for clear and transparent human-machine interfaces. These laws may allow engineers and clinicians to work closely together on a new generation of neurorobots.

  19. Progress in EEG-Based Brain Robot Interaction Systems

    PubMed Central

    Li, Mengfan; Niu, Linwei; Xian, Bin; Zeng, Ming; Chen, Genshe

    2017-01-01

    The most popular noninvasive Brain Robot Interaction (BRI) technology uses the electroencephalogram- (EEG-) based Brain Computer Interface (BCI), to serve as an additional communication channel, for robot control via brainwaves. This technology is promising for elderly or disabled patient assistance with daily life. The key issue of a BRI system is to identify human mental activities, by decoding brainwaves, acquired with an EEG device. Compared with other BCI applications, such as word speller, the development of these applications may be more challenging since control of robot systems via brainwaves must consider surrounding environment feedback in real-time, robot mechanical kinematics, and dynamics, as well as robot control architecture and behavior. This article reviews the major techniques needed for developing BRI systems. In this review article, we first briefly introduce the background and development of mind-controlled robot technologies. Second, we discuss the EEG-based brain signal models with respect to generating principles, evoking mechanisms, and experimental paradigms. Subsequently, we review in detail commonly used methods for decoding brain signals, namely, preprocessing, feature extraction, and feature classification, and summarize several typical application examples. Next, we describe a few BRI applications, including wheelchairs, manipulators, drones, and humanoid robots with respect to synchronous and asynchronous BCI-based techniques. Finally, we address some existing problems and challenges with future BRI techniques. PMID:28484488

  20. Differences in self-reported outcomes of open prostatectomy patients and robotic prostatectomy patients in an international web-based survey.

    PubMed

    O'Shaughnessy, Peter Kevin; Laws, Thomas A; Pinnock, Carol; Moul, Judd W; Esterman, Adrian

    2013-12-01

    To compare patient-reported outcomes between robot-assisted and non-robot-assisted surgery, an international web-based survey was conducted. Based on qualitative research and a literature review, an internet-based questionnaire with approximately 70 items was developed. The questionnaire included both closed and open-ended questions. Responses were received from 193 men, of whom 86 had received either open (OP) or robotic (RALP) surgery. A ranked analysis of covariance found statistically significantly (p=0.027) higher recent distress in the robotic (RALP) surgery group. Although not statistically significant, there was a pattern of men having robotic (RALP) surgery reporting fewer urinary and bowel problems, but a greater rate of sexual dysfunction. Men who opt for robotic surgery may have higher expectations of robotic (RALP) surgery; when these expectations are not fully met, they may be less likely to accept the consequences of this major cancer surgery. Information regarding surgical choice needs to be tailored to ensure that men diagnosed with prostate cancer are fully informed not only of short-term surgical and physical outcomes such as erectile dysfunction and incontinence, but also of potential issues regarding masculinity, lifestyle and sexual health. Copyright © 2013. Published by Elsevier Ltd.

  1. Turning and Radius Deviation Correction for a Hexapod Walking Robot Based on an Ant-Inspired Sensory Strategy

    PubMed Central

    Guo, Tong; Liu, Qiong; Zhu, Qianwei; Zhao, Xiangmo; Jin, Bo

    2017-01-01

    In order to find a common approach to planning the turning of a bio-inspired hexapod robot, a locomotion strategy for turning and deviation correction of a hexapod walking robot, based on the biological behavior and sensory strategy of ants, is proposed. A series of experiments using ants was carried out in which the gait and movement form of the ants were studied. Taking the results of the ant experiments as inspiration and imitating the behavior of ants during turning, an extended turning algorithm based on arbitrary gait was proposed. Furthermore, after observing the radius adjustment of ants during turning, a radius correction algorithm based on the arbitrary gait of the hexapod robot was developed. The radius correction surface function was generated by fitting the correction data, which made it possible for the robot to move in an outdoor environment without a positioning system or environment model. The proposed algorithm was verified on the hexapod robot experimental platform. Turning and radius correction experiments with several gaits were carried out. The results indicated that the robot could follow the ideal radius and maintain stability, and that the proposed ant-inspired turning strategy can easily produce free turns with an arbitrary gait. PMID:29168742
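
    As a hedged illustration of the surface-fitting step mentioned above (the quadratic surface form and the choice of inputs are assumptions, not the authors' code), a radius-correction surface can be fitted by least squares and then evaluated for any commanded radius and gait parameter.

      import numpy as np

      def fit_correction_surface(radius, gait_param, correction):
          """radius, gait_param, correction: 1D arrays of measurements.
          Returns coefficients of c(r, g) = a0 + a1*r + a2*g + a3*r*g + a4*r**2 + a5*g**2."""
          A = np.column_stack([np.ones_like(radius), radius, gait_param,
                               radius * gait_param, radius ** 2, gait_param ** 2])
          coeffs, *_ = np.linalg.lstsq(A, correction, rcond=None)
          return coeffs

      def correction_at(coeffs, r, g):
          return coeffs @ np.array([1.0, r, g, r * g, r ** 2, g ** 2])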

  2. The magic glove: a gesture-based remote controller for intelligent mobile robots

    NASA Astrophysics Data System (ADS)

    Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark

    2012-01-01

    This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors, is worn by the operator as a glove, and is capable of recognizing hand signals. These are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer detects hand orientation and passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first demonstrated successfully in a Player/Stage simulation environment. The gesture-based functionality was then also verified on an actual robot and demonstrated to judges at the 2010 IGVC.
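
    A minimal sketch of how a tilt reading can be mapped to a coarse drive command (threshold values and command names are assumptions, not the Magic Glove firmware); the resulting command string would then be sent over the Bluetooth link to the vehicle.

      def gesture_to_command(ax, ay, tilt_threshold=0.5):
          """ax, ay: accelerations in g from the triple-axis accelerometer."""
          if ay > tilt_threshold:
              return "FORWARD"      # hand pitched forward
          if ay < -tilt_threshold:
              return "REVERSE"      # hand pitched back
          if ax > tilt_threshold:
              return "TURN_RIGHT"   # hand rolled right
          if ax < -tilt_threshold:
              return "TURN_LEFT"    # hand rolled left
          return "STOP"             # roughly level: stop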

  3. Turning and Radius Deviation Correction for a Hexapod Walking Robot Based on an Ant-Inspired Sensory Strategy.

    PubMed

    Zhu, Yaguang; Guo, Tong; Liu, Qiong; Zhu, Qianwei; Zhao, Xiangmo; Jin, Bo

    2017-11-23

    In order to find a common approach to planning the turning of a bio-inspired hexapod robot, a locomotion strategy for turning and deviation correction of a hexapod walking robot, based on the biological behavior and sensory strategy of ants, is proposed. A series of experiments using ants was carried out in which the gait and movement form of the ants were studied. Taking the results of the ant experiments as inspiration and imitating the behavior of ants during turning, an extended turning algorithm based on arbitrary gait was proposed. Furthermore, after observing the radius adjustment of ants during turning, a radius correction algorithm based on the arbitrary gait of the hexapod robot was developed. The radius correction surface function was generated by fitting the correction data, which made it possible for the robot to move in an outdoor environment without a positioning system or environment model. The proposed algorithm was verified on the hexapod robot experimental platform. Turning and radius correction experiments with several gaits were carried out. The results indicated that the robot could follow the ideal radius and maintain stability, and that the proposed ant-inspired turning strategy can easily produce free turns with an arbitrary gait.

  4. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of data bases accessible to the program and displayable to the user.

  5. Soft Biomimetic Fish Robot Made of Dielectric Elastomer Actuators.

    PubMed

    Shintake, Jun; Cacucciolo, Vito; Shea, Herbert; Floreano, Dario

    2018-06-29

    This article presents the design, fabrication, and characterization of a soft biomimetic robotic fish based on dielectric elastomer actuators (DEAs) that swims by body and/or caudal fin (BCF) propulsion. BCF is a promising locomotion mechanism that potentially offers swimming at higher speeds and acceleration rates, and efficient locomotion. The robot consists of laminated silicone layers wherein two DEAs are used in an antagonistic configuration, generating undulating fish-like motion. The design of the robot is guided by a mathematical model based on the Euler-Bernoulli beam theory and takes account of the nonuniform geometry of the robot and of the hydrodynamic effect of water. The modeling results were compared with the experimental results obtained from the fish robot with a total length of 150 mm, a thickness of 0.75 mm, and weight of 4.4 g. We observed that the frequency peaks in the measured thrust force produced by the robot are similar to the natural frequencies computed by the model. The peak swimming speed of the robot was 37.2 mm/s (0.25 body length/s) at 0.75 Hz. We also observed that the modal shape of the robot at this frequency corresponds to the first natural mode. The swimming of the robot resembles real fish and displays a Strouhal number very close to those of living fish. These results suggest the high potential of DEA-based underwater robots relying on BCF propulsion, and applicability of our design and fabrication methods.

  6. Improving multisensor estimation of heavy-to-extreme precipitation via conditional bias-penalized optimal estimation

    NASA Astrophysics Data System (ADS)

    Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.

    2018-01-01

    A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.

  7. Novel Multisensor Probe for Monitoring Bladder Temperature During Locoregional Chemohyperthermia for Nonmuscle-Invasive Bladder Cancer: Technical Feasibility Study

    PubMed Central

    Geijsen, Debby E.; Zum Vörde Sive Vörding, Paul J.; Schooneveldt, Gerben; Sijbrands, Jan; Hulshof, Maarten C.; de la Rosette, Jean; de Reijke, Theo M.; Crezee, Hans

    2013-01-01

    Abstract Background and Purpose: The effectiveness of locoregional hyperthermia combined with intravesical instillation of mitomycin C to reduce the risk of recurrence and progression of intermediate- and high-risk nonmuscle-invasive bladder cancer is currently investigated in clinical trials. Clinically effective locoregional hyperthermia delivery necessitates adequate thermal dosimetry; thus, optimal thermometry methods are needed to monitor accurately the temperature distribution throughout the bladder wall. The aim of the study was to evaluate the technical feasibility of a novel intravesical device (multi-sensor probe) developed to monitor the local bladder wall temperatures during loco-regional C-HT. Materials and Methods: A multisensor thermocouple probe was designed for deployment in the human bladder, using special sensors to cover the bladder wall in different directions. The deployment of the thermocouples against the bladder wall was evaluated with visual, endoscopic, and CT imaging in bladder phantoms, porcine models, and human bladders obtained from obduction for bladder volumes and different deployment sizes of the probe. Finally, porcine bladders were embedded in a phantom and subjected to locoregional heating to compare probe temperatures with additional thermometry inside and outside the bladder wall. Results: The 7.5 cm thermocouple probe yielded optimal bladder wall contact, adapting to different bladder volumes. Temperature monitoring was shown to be accurate and representative for the actual bladder wall temperature. Conclusions: Use of this novel multisensor probe could yield a more accurate monitoring of the bladder wall temperature during locoregional chemohyperthermia. PMID:24112045

  8. LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval

    NASA Astrophysics Data System (ADS)

    Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan

    2013-01-01

    As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosives ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
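
    As a hedged sketch of a colour-histogram detector of the kind mentioned (bin count and hue representation are assumptions; the real system also combines a shape-based TLD filter), a hue histogram learned from example object pixels is back-projected onto new images to highlight candidate object regions.

      import numpy as np

      def learn_histogram(template_hue, bins=16):
          """template_hue: array of hue values in [0, 1) sampled from the object."""
          hist, _ = np.histogram(template_hue, bins=bins, range=(0.0, 1.0), density=True)
          return hist

      def backproject(image_hue, hist, bins=16):
          """Return a per-pixel likelihood map of how object-like each pixel's hue is."""
          idx = np.clip((image_hue * bins).astype(int), 0, bins - 1)
          return hist[idx]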

  9. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics and validated on a virtual robot. Robot kinematic parameters were obtained from simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, so achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  10. A Mobile Robots Experimental Environment with Event-Based Wireless Communication

    PubMed Central

    Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián

    2013-01-01

    An experimental platform to communicate between a set of mobile robots through a wireless network has been developed. The mobile robots get their position through a camera which acts as a sensor. The video images are processed in a PC and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented. PMID:23881139

  11. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
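
    As an illustrative sketch (assumed interfaces, not the paper's algorithm), once the flow vectors are back-projected onto the ground plane, the instantaneous centre of rotation and angular speed follow from a linear least-squares fit to the planar rigid-motion model v = omega x (p - c).

      import numpy as np

      def estimate_icr(points, flows):
          """points: (N, 2) ground-plane coordinates; flows: (N, 2) velocities there.
          Returns (icr, omega) for the planar rigid motion best explaining the flow."""
          A, b = [], []
          for (px, py), (vx, vy) in zip(points, flows):
              A.append([-py, 0.0, 1.0]); b.append(vx)   # vx = -omega*py + omega*cy
              A.append([ px, -1.0, 0.0]); b.append(vy)  # vy =  omega*px - omega*cx
          omega, bx, by = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
          return np.array([bx, by]) / omega, omega      # (cx, cy) = (bx, by) / omega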

  12. Towards Optimal Platform-Based Robot Design for Ankle Rehabilitation: The State of the Art and Future Prospects.

    PubMed

    Miao, Qing; Zhang, Mingming; Wang, Congzhe; Li, Hongsheng

    2018-01-01

    This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers in two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus using the keywords "ankle*", "robot*", and ("rehabilitat*" or "treat*"). The search is limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have a rotation center aligned with the ankle joint, an appropriate workspace, and adequate actuation torque, no matter how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors including posture adjustability and sensing functions should also be considered to promote related clinical applications. An ankle rehabilitation robot with reconfigurability to maximize its functions will be a new research direction towards optimal design, especially for parallel mechanisms.

  13. Extensibility in local sensor based planning for hyper-redundant manipulators (robot snakes)

    NASA Technical Reports Server (NTRS)

    Choset, Howie; Burdick, Joel

    1994-01-01

    Partial Shape Modification (PSM) is a local sensor feedback method used for hyper-redundant robot manipulators, in which the redundancy is very large or infinite, as in a robot snake. This redundancy enables local obstacle avoidance and end-effector placement in real time. Due to the large number of joints or actuators in a hyper-redundant manipulator, small displacement errors easily accumulate into large errors in the position of the tip relative to the base. The accuracy can be improved by a local sensor-based planning method in which sensors are distributed along the length of the hyper-redundant robot. This paper extends the local sensor-based planning strategy beyond the limitation of fixed manipulator length when joint limits are met. This is achieved with an algorithm in which the length of the deforming part of the robot is variable. Thus, the robot's local avoidance of obstacles is improved through the enhancement of its extensibility.

  14. Regulation and Entrainment in Human-Robot Interaction

    DTIC Science & Technology

    2000-01-01

    applications for domestic, health care related, or entertainment based robots motivate the development of robots that can socially interact with, learn...picture shows WE-3RII, an expressive face robot developed at Waseda University. The middle right picture shows Robita, an upper-torso robot also... developed at Waseda University to track speaking turns. The far right picture shows our expressive robot, Kismet, developed at MIT. The two leftmost photos

  15. Development of inspection robots for bridge cables.

    PubMed

    Yun, Hae-Bum; Kim, Se-Hoon; Wu, Liuliu; Lee, Jong-Jae

    2013-01-01

    This paper presents bridge cable inspection robots developed in Korea. Two types of cable inspection robots were developed, for suspension bridges and cable-stayed bridges. The design of the robot system and the performance of the NDT techniques associated with the cable inspection robot are discussed. A review of recent advances in emerging robot-based inspection technologies for bridge cables and of current bridge cable inspection methods is also presented.

  16. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    PubMed

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

    Reinforcement learning (RL) enables robots to learn their optimal behavioral strategy in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.

  17. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.

    PubMed

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-12-26

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation system based on a fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
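    A much-reduced illustration of the idea of fusing distance readings into left/right wheel velocities. The sensor layout, the triangular-style membership function, and the two rules below are assumptions for the sketch, not the paper's nine-input, 24-rule base.

    ```python
    # Illustrative sketch only: fuse distance sensors on each side of the robot
    # into wheel speeds by steering away from the side with the nearer obstacle.

    def near(d, full=0.2, zero=1.0):
        """Membership degree of 'obstacle is near': 1.0 when very close, 0.0 when far."""
        if d <= full:
            return 1.0
        if d >= zero:
            return 0.0
        return (zero - d) / (zero - full)

    def fuzzy_velocities(left_dists, right_dists, cruise=0.5):
        """Fuse distance readings (metres) on each side into wheel speeds (m/s)."""
        obstacle_left = max(near(d) for d in left_dists)
        obstacle_right = max(near(d) for d in right_dists)
        # Rule 1: obstacle on the left  -> slow the right wheel (robot turns right).
        # Rule 2: obstacle on the right -> slow the left wheel (robot turns left).
        v_left = cruise * (1.0 - obstacle_right)
        v_right = cruise * (1.0 - obstacle_left)
        return v_left, v_right

    print(fuzzy_velocities([0.9, 0.3, 1.2], [1.5, 1.4, 2.0]))
    ```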

  18. A robotically constructed production and supply base on Phobos

    NASA Astrophysics Data System (ADS)

    1989-05-01

    PHOBIA Corporation is involved with the design of a man-tenable robotically constructed, bootstrap base on Mars' moon, Phobos. This base will be a pit-stop for future manned missions to Mars and beyond and will be a control facility during the robotic construction of a Martian base. An introduction is given to the concepts and the ground rules followed during the design process. Details of a base design and its location are given along with information about some of the subsystems. Since a major purpose of the base is to supply fuel to spacecraft so they can limit their fuel mass, mining and production systems are discussed. Surface support activities such as docks, anchors, and surface transportation systems are detailed. Several power supplies for the base are investigated and include fuel cells and a nuclear reactor. Tasks for the robots are defined along with descriptions of the robots capable of completing the tasks. Finally, failure modes for the entire PHOBIA Corporation design are presented along with an effects analysis and preventative recommendations.

  19. A robotically constructed production and supply base on Phobos

    NASA Technical Reports Server (NTRS)

    1989-01-01

    PHOBIA Corporation is involved with the design of a man-tenable robotically constructed, bootstrap base on Mars' moon, Phobos. This base will be a pit-stop for future manned missions to Mars and beyond and will be a control facility during the robotic construction of a Martian base. An introduction is given to the concepts and the ground rules followed during the design process. Details of a base design and its location are given along with information about some of the subsystems. Since a major purpose of the base is to supply fuel to spacecraft so they can limit their fuel mass, mining and production systems are discussed. Surface support activities such as docks, anchors, and surface transportation systems are detailed. Several power supplies for the base are investigated and include fuel cells and a nuclear reactor. Tasks for the robots are defined along with descriptions of the robots capable of completing the tasks. Finally, failure modes for the entire PHOBIA Corporation design are presented along with an effects analysis and preventative recommendations.

  20. KC-135 materials handling robotics

    NASA Technical Reports Server (NTRS)

    Workman, Gary L.

    1991-01-01

    Robot dynamics and control will become an important issue for implementing productive platforms in space. Robotic operations will become necessary for man-tended stations and for efficient performance of routine operations in a manned platform. The current constraints on the use of robotic devices in a microgravity environment appear to stem from anticipated increases in acceleration levels due to manipulator motion and from safety concerns. The objective of this study is to provide baseline data to meet that need. Most texts and papers dealing with the kinematics and dynamics of robots assume that the manipulator is composed of joints separated by rigid links. However, in recent years several groups have begun to study the dynamics of flexible manipulators, primarily for applying robots in space and for improving the efficiency and precision of robotic systems. Robotic systems which are being planned for implementation in space have a number of constraints to overcome. Additional concepts which have to be worked out in any robotic implementation for a space platform include teleoperation and the degree of autonomous control. Significant results were generated in developing a robotic workcell for performing robotics research on the KC-135 aircraft in preparation for future space-based robotics applications. In addition, it was shown that TREETOPS can be used to simulate the dynamics of robot manipulators for both space and ground-based applications.

  1. Effect of Robotics-Enhanced Inquiry-Based Learning in Elementary Science Education in South Korea

    ERIC Educational Resources Information Center

    Park, Jungho

    2015-01-01

    Much research has been conducted in educational robotics, a new instructional technology, for K-12 education. However, there are arguments on the effect of robotics and limited empirical evidence to investigate the impact of robotics in science learning. Also, most robotics studies were carried out in informal educational settings. This study…

  2. Case Studies of a Robot-Based Game to Shape Interests and Hone Proportional Reasoning Skills

    ERIC Educational Resources Information Center

    Alfieri, Louis; Higashi, Ross; Shoop, Robin; Schunn, Christian D.

    2015-01-01

    Background: Robot-math is a term used to describe mathematics instruction centered on engineering, particularly robotics. This type of instruction seeks first to make the mathematics skills useful for robotics-centered challenges, and then to help students extend (transfer) those skills. A robot-math intervention was designed to target the…

  3. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP and ERD-based BCIs to navigate and explore with the robot, and a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  4. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP and ERD-based BCIs to navigate and explore with the robot, and a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.

  5. Usability Study and Heuristic Evaluation of the Applied Robotics for Installations and Base Operations (ARIBO) Driverless Vehicle Reservation Application ARIBO Mobile

    DTIC Science & Technology

    2017-03-01

    ARL-TN-0814 ● MAR 2017 ● US Army Research Laboratory. Usability Study and Heuristic Evaluation of the Applied Robotics for Installations and Base Operations (ARIBO) Driverless Vehicle Reservation Application ARIBO Mobile.

  6. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  7. Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors.

    PubMed

    Deng, Fucheng; Zhu, Xiaorui; He, Chao

    2017-09-13

    Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is their high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting the traversable region outdoors. In the proposed method, an appearance model based on a multivariate Gaussian is quickly constructed from a sample region in the left image, adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as on a real mobile robot. Implementation on the mobile robot has demonstrated its suitability for real-time navigation applications.
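    A sketch of the appearance-model idea under assumptions: a multivariate Gaussian colour model is fit to a seed region (here simply a patch near the image bottom presumed to be ground rather than one chosen from the vanishing point and borders), and pixels are labelled traversable when their Mahalanobis distance is small. The threshold and data are illustrative only.

    ```python
    import numpy as np

    def fit_appearance_model(sample_pixels):
        """sample_pixels: (N, 3) float array of colour values from the seed region."""
        mean = sample_pixels.mean(axis=0)
        cov = np.cov(sample_pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularised covariance
        return mean, np.linalg.inv(cov)

    def traversable_mask(image, mean, cov_inv, thresh=9.0):
        """image: (H, W, 3) float array; returns a boolean traversability mask."""
        diff = image.reshape(-1, 3) - mean
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis distance
        return (d2 < thresh).reshape(image.shape[:2])

    # Toy usage with random data standing in for a rectified camera image.
    rng = np.random.default_rng(0)
    img = rng.random((120, 160, 3))
    seed = img[100:, 60:100].reshape(-1, 3)        # assumed ground patch near the bottom
    mu, prec = fit_appearance_model(seed)
    mask = traversable_mask(img, mu, prec)
    print(mask.mean())
    ```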

  8. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines

    PubMed Central

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-01-01

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm. PMID:26343660
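    A toy sketch of PSO-based multiuser detection in the same spirit (not the paper's D-PSO): particles search over bipolar symbol vectors that best explain a received chip vector through an assumed spreading matrix. Signal model, sizes, and PSO constants are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    K, N = 4, 16                                              # users, chips per symbol
    S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)     # assumed spreading matrix
    b_true = rng.choice([-1.0, 1.0], size=K)
    r = S @ b_true + 0.1 * rng.standard_normal(N)             # received chip vector

    def cost(b):
        """Least-squares mismatch between received signal and a symbol hypothesis."""
        return np.sum((r - S @ b) ** 2)

    P, iters = 20, 60
    pos = rng.standard_normal((P, K))                         # continuous positions; sign() gives symbols
    vel = np.zeros((P, K))
    pbest = pos.copy()
    pbest_cost = np.array([cost(np.sign(p)) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((P, K)), rng.random((P, K))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        for i in range(P):
            c = cost(np.sign(pos[i]))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i].copy(), c
        gbest = pbest[pbest_cost.argmin()].copy()

    print("detected:", np.sign(gbest), "true:", b_true)
    ```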

  9. Multi-Sensor Detection with Particle Swarm Optimization for Time-Frequency Coded Cooperative WSNs Based on MC-CDMA for Underground Coal Mines.

    PubMed

    Xu, Jingjing; Yang, Wei; Zhang, Linyuan; Han, Ruisong; Shao, Xiaotao

    2015-08-27

    In this paper, a wireless sensor network (WSN) technology adapted to underground channel conditions is developed, which has important theoretical and practical value for safety monitoring in underground coal mines. Because the space, time and frequency resources of an underground tunnel are open, it is proposed to build wireless sensor nodes based on multicarrier code division multiple access (MC-CDMA) to make full use of these resources. To improve the wireless transmission performance of source sensor nodes, it is also proposed to utilize cooperative sensors with good channel conditions from the sink node to assist source sensors with poor channel conditions. Moreover, the total power of the source sensor and its cooperative sensors is allocated on the basis of their channel conditions to increase the energy efficiency of the WSN. To solve the problem that multiple access interference (MAI) arises when multiple source sensors transmit monitoring information simultaneously, a multi-sensor detection (MSD) algorithm with particle swarm optimization (PSO), namely D-PSO, is proposed for the time-frequency coded cooperative MC-CDMA WSN. Simulation results show that the average bit error rate (BER) performance of the proposed WSN in an underground coal mine is improved significantly by using wireless sensor nodes based on MC-CDMA, adopting time-frequency coded cooperative transmission, and applying the D-PSO algorithm.

  10. An integrated multi-sensor fusion-based deep feature learning approach for rotating machinery diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu

    2018-05-01

    The diagnosis of complicated fault severity problems in rotating machinery systems is an important issue that affects the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) A considerable degree of prior knowledge and expertise is required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion of sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn the complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is primarily verified by a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than the traditional approaches.
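    A rough sketch of the pipeline structure described above, under assumptions: simple statistical features from several sensor channels are concatenated, two autoencoder layers are pretrained greedily, and a softmax layer classifies fault severity. The feature set, dimensions, synthetic data, and training schedule are all illustrative and not the paper's IMSFDFL configuration.

    ```python
    import torch
    import torch.nn as nn

    def channel_features(x):
        """x: (batch, channels, samples); crude time-domain statistics per channel."""
        return torch.cat([x.mean(-1), x.std(-1), x.abs().max(-1).values], dim=1)

    def pretrain_autoencoder(data, in_dim, hid_dim, epochs=50):
        """Greedy layer-wise pretraining of one encoder/decoder pair; returns the encoder."""
        enc, dec = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(data))), data)
            loss.backward()
            opt.step()
        return enc

    # Toy data: 200 samples, 4 accelerometer channels, 256 points each, 3 severity classes.
    raw = torch.randn(200, 4, 256)
    labels = torch.randint(0, 3, (200,))
    feats = channel_features(raw)                       # fused feature vector, shape (200, 12)

    enc1 = pretrain_autoencoder(feats, feats.shape[1], 8)
    h1 = torch.sigmoid(enc1(feats)).detach()
    enc2 = pretrain_autoencoder(h1, 8, 4)
    h2 = torch.sigmoid(enc2(h1)).detach()               # deep features

    softmax_head = nn.Linear(4, 3)                      # softmax classifier on deep features
    opt = torch.optim.Adam(softmax_head.parameters(), lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(softmax_head(h2), labels)
        loss.backward()
        opt.step()
    print("train accuracy:", (softmax_head(h2).argmax(1) == labels).float().mean().item())
    ```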

  11. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    NASA Astrophysics Data System (ADS)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve rainfall estimation from Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-Satellite Retrievals for the Global Precipitation Measurement (GPM) mission) to create a precipitation product at 0.1x0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has high spatial and temporal resolution, it overestimates or underestimates rainfall due to some limitations. PERSIANN-CCS estimates rainfall from the information extracted from IR channels at three temperature threshold levels (220, 235, and 253 K). Because the algorithm relies only on infrared data to estimate rainfall indirectly, it misses rainfall from warm clouds and produces false estimates for non-precipitating cold clouds. In this research, the effectiveness of using other channels of the GOES satellites, such as visible and water vapor, has been investigated. By using multiple sensors, precipitation can be estimated from the information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method has been used. Using probability distributions of precipitation rates instead of deterministic values has improved the rainfall estimation for different types of clouds.

  12. A motion sensing-based framework for robotic manipulation.

    PubMed

    Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing

    2016-01-01

    To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, has inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results have shown that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.

  13. Research on modeling and motion simulation of a spherical space robot with telescopic manipulator based on virtual prototype technology

    NASA Astrophysics Data System (ADS)

    Shi, Chengkun; Sun, Hanxu; Jia, Qingxuan; Zhao, Kailiang

    2009-05-01

    To realize omni-directional movement and operating tasks for a spherical space robot system, this paper describes an innovative prototype and analyzes the dynamic characteristics of a spherical rolling robot with a telescopic manipulator. Based on the Newton-Euler equations, the kinematic and dynamic equations of the spherical robot's motion are derived in detail. Motion simulations of the robot in different environments are then developed with ADAMS. The simulation results validate the mathematical model of the system, and the dynamic model establishes a theoretical basis for subsequent work.

  14. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. To address the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
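    A sketch of combining a standard attractive/repulsive potential field with a chaotic perturbation of the heading, assuming a logistic map as the chaos source; the gains, obstacle layout, and perturbation scale are illustrative and not the paper's exact formulation.

    ```python
    import numpy as np

    def apf_step(pos, goal, obstacles, chaos, k_att=1.0, k_rep=0.6, rho0=1.5, step=0.05):
        force = k_att * (goal - pos)                          # attractive term toward the goal
        for obs in obstacles:
            d = np.linalg.norm(pos - obs)
            if 1e-6 < d < rho0:                               # repulsive term inside influence radius
                force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - obs) / d
        theta = np.arctan2(force[1], force[0]) + 0.3 * (chaos - 0.5)  # chaotic heading jitter
        return pos + step * np.array([np.cos(theta), np.sin(theta)])

    pos = np.array([0.0, 0.0])
    goal = np.array([5.0, 5.0])
    obstacles = [np.array([2.5, 2.5]), np.array([3.5, 2.0])]
    x = 0.7                                                   # logistic-map state
    for _ in range(400):
        x = 4.0 * x * (1.0 - x)                               # chaotic sequence in (0, 1)
        pos = apf_step(pos, goal, obstacles, x)
        if np.linalg.norm(goal - pos) < 0.1:
            break
    print("final position:", pos)
    ```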

  15. IMU-Based Online Kinematic Calibration of Robot Manipulator

    PubMed Central

    2013-01-01

    Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach that incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate the kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854

  16. Endonasal Skull Base Tumor Removal Using Concentric Tube Continuum Robots: A Phantom Study.

    PubMed

    Swaney, Philip J; Gilbert, Hunter B; Webster, Robert J; Russell, Paul T; Weaver, Kyle D

    2015-03-01

    Objectives: The purpose of this study is to experimentally evaluate the use of concentric tube continuum robots in endonasal skull base tumor removal. This new type of surgical robot offers many advantages over existing straight and rigid surgical tools, including added dexterity, the ability to scale movements, and the ability to rotate the end effector while leaving the robot fixed in space. In this study, a concentric tube continuum robot was used to remove simulated pituitary tumors from a skull phantom. Design: The robot was teleoperated by experienced skull base surgeons to remove a phantom pituitary tumor within a skull. Percentage resection was measured by weight. Resection duration was timed. Setting: Academic research laboratory. Main Outcome Measures: Percentage removal of tumor material and procedure duration. Results: Average removal percentage of 79.8 ± 5.9% and average time to complete the procedure of 12.5 ± 4.1 minutes (n = 20). Conclusions: The robotic system presented here for use in endonasal skull base surgery shows promise in improving the dexterity, tool motion, and end effector capabilities currently available with straight and rigid tools while remaining an effective tool for resecting the tumor.

  17. Review on design and control aspects of ankle rehabilitation robots.

    PubMed

    Jamwal, Prashant K; Hussain, Shahid; Xie, Sheng Q

    2015-03-01

    Ankle rehabilitation robots can play an important role in improving outcomes of rehabilitation treatment by assisting therapists and patients in a number of ways. Consequently, a few robot designs have been proposed by researchers, which fall under one of two categories, namely, wearable robots or platform-based robots. This paper presents a review of both kinds of ankle robots along with a brief analysis of their design, actuation and control approaches. While reviewing these designs, it was observed that most of them are undesirably inspired by industrial robot designs. Taking note of the design concerns of current ankle robots, a few improvements to ankle robot designs have also been suggested. Conventional position control or force control approaches used in the existing ankle robots have been reviewed. Apparently, opportunities for improvement also exist in the actuation as well as the control of ankle robots. Subsequently, a discussion of the most recent research in the development of novel actuators and advanced controllers based on appropriate physical and cognitive human-robot interaction has also been included in this review. Implications for Rehabilitation: Ankle joint functions are restricted or impaired as a consequence of stroke or injury, during sports or otherwise. Robots can help in reinstating functions faster and can also work as a tool for recording rehabilitation data useful for further analysis. The evolution of ankle robots with respect to their design and control aspects is discussed in the present paper, and a novel design with a futuristic control approach is proposed.

  18. A neural network-based exploratory learning and motor planning system for co-robots

    PubMed Central

    Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640

  19. Analysis of several Boolean operation based trajectory generation strategies for automotive spray applications

    NASA Astrophysics Data System (ADS)

    Gao, Guoyou; Jiang, Chunsheng; Chen, Tao; Hui, Chun

    2018-05-01

    Industrial robots are widely used in various surface manufacturing processes, such as thermal spraying. Established robot programming methods are highly time-consuming and not accurate enough to fulfil the demands of the actual market. Many off-line programming methods have been developed to reduce the robot programming effort. This work introduces the principle of several Boolean-operation-based robot trajectory generation strategies on planar and curved surfaces. Since off-line programming software is widely used, facilitating robot programming efforts and improving the accuracy of robot trajectories, the analysis in this work is based on secondary development of the off-line programming software RobotStudio™. To meet the requirements of the automotive paint industry, this kind of software extension provides special functions according to user-defined operation parameters. The presented planning strategy generates the robot trajectory by moving an orthogonal surface according to the information of the coating surface; a series of intersection curves is then employed to generate the trajectory points. The simulation results show that the path curve created with this method is continuous and smooth, which corresponds to the requirements of automotive spray industrial applications.

  20. A neural network-based exploratory learning and motor planning system for co-robots.

    PubMed

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
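    A minimal sketch of the "learning by doing" idea for a toy two-link planar arm, under assumptions: random motor babbling collects joint-angle/hand-position pairs, and a nearest-neighbour lookup then serves as a crude inverse model for reaching. The arm geometry and lookup scheme are illustrative, not the Calliope's actual controller.

    ```python
    import numpy as np

    L1, L2 = 0.3, 0.25                                    # assumed link lengths (m)

    def forward(q):
        """Forward kinematics of the toy arm: joint angles -> hand (x, y)."""
        x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
        y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    rng = np.random.default_rng(2)
    babble_q = rng.uniform(-np.pi, np.pi, size=(2000, 2))    # exploratory motor commands
    babble_xy = np.array([forward(q) for q in babble_q])     # observed hand positions

    def reach(target_xy):
        """Inverse model by lookup: pick the babbled posture whose outcome was closest."""
        idx = np.linalg.norm(babble_xy - target_xy, axis=1).argmin()
        return babble_q[idx]

    target = np.array([0.35, 0.2])
    q = reach(target)
    print("commanded joints:", q, "reached:", forward(q))
    ```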

  1. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.

    PubMed

    Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-07-15

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% under the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy in determining the conditions of the indoor living environment.
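    A sketch of the evaluation protocol mentioned above (2-, 5-, 10-fold and leave-one-subject-out cross-validation) applied to a stand-in gesture classifier on synthetic features; the classifier choice, feature shapes, and subject grouping are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(3)
    X = rng.standard_normal((300, 24))          # e.g. windowed wrist-accelerometer features
    y = rng.integers(0, 4, size=300)            # four hypothetical gesture classes
    subjects = rng.integers(0, 10, size=300)    # subject id for each sample

    clf = RandomForestClassifier(n_estimators=50, random_state=0)

    # k-fold cross-validation for k = 2, 5, 10.
    for k in (2, 5, 10):
        scores = cross_val_score(clf, X, y, cv=KFold(n_splits=k, shuffle=True, random_state=0))
        print(f"{k}-fold accuracy: {scores.mean():.3f}")

    # Leave-one-subject-out: hold out all samples of one subject per fold.
    loso = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
    print(f"leave-one-subject-out accuracy: {loso.mean():.3f}")
    ```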

  2. BreedVision--a multi-sensor platform for non-destructive field-based phenotyping in plant breeding.

    PubMed

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-02-27

    To achieve the food and energy security of an increasing World population likely to exceed nine billion by 2050 represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies.

  3. A wireless sensor network deployment for rural and forest fire detection and verification.

    PubMed

    Lloret, Jaime; Garcia, Miguel; Bri, Diana; Sendra, Sandra

    2009-01-01

    Forest and rural fires are one of the main causes of environmental degradation in Mediterranean countries. Existing fire detection systems only focus on detection, but not on the verification of the fire. However, almost all of them are just simulations, and very few implementations can be found. Besides, the systems in the literature lack scalability. In this paper we show all the steps followed to perform the design, research and development of a wireless multisensor network which mixes sensors with IP cameras in order to detect and verify fire in rural and forest areas of Spain. We have studied how many cameras, sensors and access points are needed to cover a rural or forest area, and the scalability of the system. We have developed a multisensor node; when it detects a fire, it sends a sensor alarm through the wireless network to a central server. The central server, based on a software application, selects the wireless cameras closest to the multisensor, rotates them toward the sensor that raised the alarm, and sends them a message in order to receive real-time images from the zone. The cameras let the firefighters corroborate the existence of a fire and avoid false alarms. In this paper, we show the test performance of a test bench formed by four wireless IP cameras in several situations and the energy consumed when they are transmitting. Moreover, we study the energy consumed by each device when the system is set up. The wireless sensor network can be connected to the Internet through a gateway, and the images from the cameras can be seen from any part of the world.

  4. BreedVision — A Multi-Sensor Platform for Non-Destructive Field-Based Phenotyping in Plant Breeding

    PubMed Central

    Busemeyer, Lucas; Mentrup, Daniel; Möller, Kim; Wunder, Erik; Alheit, Katharina; Hahn, Volker; Maurer, Hans Peter; Reif, Jochen C.; Würschum, Tobias; Müller, Joachim; Rahe, Florian; Ruckelshausen, Arno

    2013-01-01

    To achieve the food and energy security of an increasing World population likely to exceed nine billion by 2050 represents a major challenge for plant breeding. Our ability to measure traits under field conditions has improved little over the last decades and currently constitutes a major bottleneck in crop improvement. This work describes the development of a tractor-pulled multi-sensor phenotyping platform for small grain cereals with a focus on the technological development of the system. Various optical sensors like light curtain imaging, 3D Time-of-Flight cameras, laser distance sensors, hyperspectral imaging as well as color imaging are integrated into the system to collect spectral and morphological information of the plants. The study specifies: the mechanical design, the system architecture for data collection and data processing, the phenotyping procedure of the integrated system, results from field trials for data quality evaluation, as well as calibration results for plant height determination as a quantified example for a platform application. Repeated measurements were taken at three developmental stages of the plants in the years 2011 and 2012 employing triticale (×Triticosecale Wittmack L.) as a model species. The technical repeatability of measurement results was high for nearly all different types of sensors which confirmed the high suitability of the platform under field conditions. The developed platform constitutes a robust basis for the development and calibration of further sensor and multi-sensor fusion models to measure various agronomic traits like plant moisture content, lodging, tiller density or biomass yield, and thus, represents a major step towards widening the bottleneck of non-destructive phenotyping for crop improvement and plant genetic studies. PMID:23447014

  5. SBIR Phase II Final Report: Low cost Autonomous NMR and Multi-sensor Soil Monitoring Instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walsh, David O.

    In this 32-month SBIR Phase 2 program, Vista Clara designed, assembled and successfully tested four new NMR instruments for soil moisture measurement and monitoring: An enhanced performance man-portable Dart NMR logging probe and control unit for rapid, mobile measurement in core holes and 2” PVC access wells; A prototype 4-level Dart NMR monitoring probe and prototype multi-sensor soil monitoring control unit for long-term unattended monitoring of soil moisture and other measurements in-situ; A non-invasive 1m x 1m Discus NMR soil moisture sensor with surface based magnet/coil array for rapid measurement of soil moisture in the top 50 cm of the subsurface; A non-invasive, ultra-lightweight Earth’s field surface NMR instrument for non-invasive measurement and mapping of soil moisture in the top 3 meters of the subsurface. The Phase 2 research and development achieved most, but not all of our technical objectives. The single-coil Dart in-situ sensor and control unit were fully developed, demonstrated and successfully commercialized within the Phase 2 period of performance. The multi-level version of the Dart probe was designed, assembled and demonstrated in Phase 2, but its final assembly and testing were delayed until close to the end of the Phase 2 performance period, which limited our opportunities for demonstration in field settings. Likewise, the multi-sensor version of the Dart control unit was designed and assembled, but not in time for it to be deployed for any long-term monitoring demonstrations. The prototype ultra-lightweight surface NMR instrument was developed and demonstrated, and this result will be carried forward into the development of a new flexible surface NMR instrument and commercial product in 2018.

  6. Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology

    PubMed Central

    Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen

    2017-01-01

    This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed for validating the effectiveness and feasibility of the smart home system was built and verified experimentally. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% under the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy in determining the conditions of the indoor living environment. PMID:28714884

  7. Can single empirical algorithms accurately predict inland shallow water quality status from high resolution, multi-sensor, multi-temporal satellite data?

    NASA Astrophysics Data System (ADS)

    Theologou, I.; Patelaki, M.; Karantzalos, K.

    2015-04-01

    Assessing and monitoring water quality status in a timely, cost effective and accurate manner is of fundamental importance for numerous environmental management and policy making purposes. Therefore, there is a current need for validated methodologies which can effectively exploit, in an unsupervised way, the enormous amount of earth observation imaging datasets from various high-resolution satellite multispectral sensors. To this end, many research efforts are based on building concrete relationships and empirical algorithms from concurrent satellite and in-situ data collection campaigns. We have experimented with Landsat 7 and Landsat 8 multi-temporal satellite data, coupled with hyperspectral data from a field spectroradiometer and in-situ ground truth data with several physico-chemical and other key monitoring indicators. All available datasets, covering a 4-year period in our case study, Lake Karla in Greece, were processed and fused under a quantitative evaluation framework. The comprehensive analysis performed posed certain questions regarding the applicability of single empirical models across multi-temporal, multi-sensor datasets towards the accurate prediction of key water quality indicators for shallow inland systems. Single linear regression models did not establish concrete relations across multi-temporal, multi-sensor observations. Moreover, the shallower parts of the inland system followed, in accordance with the literature, different regression patterns. Landsat 7 and 8 yielded quite promising results, indicating that, from the recreation of the lake onward, consistent per-sensor, per-depth prediction models can be successfully established. The highest rates were for chl-a (r2=89.80%), dissolved oxygen (r2=88.53%), conductivity (r2=88.18%), ammonium (r2=87.2%) and pH (r2=86.35%), while total phosphorus (r2=70.55%) and nitrates (r2=55.50%) showed lower correlation rates.
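    A sketch of the per-sensor, per-indicator empirical modelling the abstract refers to, under assumptions: a separate linear regression is fit from satellite band reflectances to one in-situ indicator (e.g. chl-a) for each sensor, and r² is reported. The band count, matchup counts, and synthetic data are illustrative only.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(4)

    def fit_indicator_model(bands, indicator):
        """Fit one empirical model: band reflectances -> water quality indicator."""
        model = LinearRegression().fit(bands, indicator)
        return model, r2_score(indicator, model.predict(bands))

    for sensor in ("Landsat 7", "Landsat 8"):
        bands = rng.random((40, 6))                           # 40 matchups, 6 bands (toy data)
        chl_a = bands @ rng.random(6) + 0.05 * rng.standard_normal(40)
        _, r2 = fit_indicator_model(bands, chl_a)
        print(f"{sensor}: chl-a r^2 = {r2:.2f}")
    ```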

  8. A Wireless Sensor Network Deployment for Rural and Forest Fire Detection and Verification

    PubMed Central

    Lloret, Jaime; Garcia, Miguel; Bri, Diana; Sendra, Sandra

    2009-01-01

    Forest and rural fires are one of the main causes of environmental degradation in Mediterranean countries. Existing fire detection systems only focus on detection, but not on the verification of the fire. However, almost all of them are just simulations, and very few implementations can be found. Besides, the systems in the literature lack scalability. In this paper we show all the steps followed to perform the design, research and development of a wireless multisensor network which mixes sensors with IP cameras in order to detect and verify fire in rural and forest areas of Spain. We have studied how many cameras, sensors and access points are needed to cover a rural or forest area, and the scalability of the system. We have developed a multisensor node; when it detects a fire, it sends a sensor alarm through the wireless network to a central server. The central server, based on a software application, selects the wireless cameras closest to the multisensor, rotates them toward the sensor that raised the alarm, and sends them a message in order to receive real-time images from the zone. The cameras let the firefighters corroborate the existence of a fire and avoid false alarms. In this paper, we show the test performance of a test bench formed by four wireless IP cameras in several situations and the energy consumed when they are transmitting. Moreover, we study the energy consumed by each device when the system is set up. The wireless sensor network can be connected to the Internet through a gateway, and the images from the cameras can be seen from any part of the world. PMID:22291533

  9. Inverse kinematic solution for near-simple robots and its application to robot calibration

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are simple by design, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.

  10. A Mobile, Map-Based Tasking Interface for Human-Robot Interaction

    DTIC Science & Technology

    2010-12-01

    A Mobile, Map-Based Tasking Interface for Human-Robot Interaction. By Eli R. Hooten. Thesis submitted to the Faculty of the Graduate School of ...

  11. Towards autonomous locomotion: CPG-based control of smooth 3D slithering gait transition of a snake-like robot.

    PubMed

    Bing, Zhenshan; Cheng, Long; Chen, Guang; Röhrbein, Florian; Huang, Kai; Knoll, Alois

    2017-04-04

    Snake-like robots with 3D locomotion ability have significant advantages over traditional legged or wheeled mobile robots for adaptive travel in diverse complex terrain. Despite numerous developed gaits, these snake-like robots suffer from unsmooth gait transitions when changing locomotion speed, moving direction, and body shape, which can potentially cause undesired movement and abnormal torque. Hence, there exists a knowledge gap for snake-like robots to achieve autonomous locomotion. To address this problem, this paper presents smooth slithering gait transition control based on a lightweight central pattern generator (CPG) model for snake-like robots. First, based on the convergence behavior of the gradient system, a lightweight CPG model with fast computing time was designed and compared with other widely adopted CPG models. Then, by reshaping the body into a more stable geometry, the slithering gait was modified and studied based on the proposed CPG model, including gait transitions of locomotion speed, moving direction, and body shape. In contrast to the sinusoid-based method, extensive simulations and prototype experiments demonstrated that smooth slithering gait transitions can be effectively achieved using the proposed CPG-based control method without generating undesired locomotion or abnormal torque.
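    A sketch of the general CPG idea in this context, under assumptions: a chain of phase-coupled oscillators produces a serpenoid-like joint pattern, with frequency and amplitude ramped smoothly to illustrate a gait transition. The couplings, gains, and 8-joint chain are illustrative and not the paper's lightweight CPG model.

    ```python
    import numpy as np

    N, dt = 8, 0.01                                   # number of joints, integration step (s)
    phase = np.zeros(N)
    amp = np.full(N, 0.1)

    def cpg_step(phase, amp, freq, target_amp, phase_lag=0.6, gain=2.0):
        # Phase dynamics: intrinsic frequency plus diffusive coupling to neighbours.
        dphi = 2 * np.pi * freq * np.ones(N)
        dphi[1:] += np.sin(phase[:-1] - phase[1:] - phase_lag)
        dphi[:-1] += np.sin(phase[1:] - phase[:-1] + phase_lag)
        # Amplitude dynamics: first-order convergence to the commanded amplitude.
        damp = gain * (target_amp - amp)
        return phase + dt * dphi, amp + dt * damp

    for step in range(3000):
        freq = 0.5 if step < 1500 else 1.0            # speed transition halfway through
        target = 0.3 if step < 1500 else 0.5          # amplitude (body-shape) transition
        phase, amp = cpg_step(phase, amp, freq, target)

    joint_angles = amp * np.sin(phase)                # commanded joint angles (rad)
    print(joint_angles)
    ```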

  12. Robotic Anterior and Midline Skull Base Surgery: Preclinical Investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Bert W.; Weinstein, Gregory S.

    Purpose: To develop a minimally invasive surgical technique to access the midline and anterior skull base using the optical and technical advantages of robotic surgical instrumentation. Methods and Materials: Ten experimental procedures focusing on approaches to the nasopharynx, clivus, sphenoid, pituitary sella, and suprasellar regions were performed on one cadaver and one live mongrel dog. Both the cadaver and canine procedures were performed in an approved training facility using the da Vinci Surgical Robot. For the canine experiments, a transoral robotic surgery (TORS) approach was used, and for the cadaver a newly developed combined cervical-transoral robotic surgery (C-TORS) approach was investigated and compared with standard TORS. The ability to access and dissect tissues within the various areas of the midline and anterior skull base was evaluated, and techniques to enhance visualization and instrumentation were developed. Results: Standard TORS approaches did not provide adequate access to the midline and anterior skull base; however, the newly developed C-TORS approach was successful in providing surgical access to these regions of the skull base. Conclusion: Robotic surgery is an exciting minimally invasive approach to the skull base that warrants continued preclinical investigation and development.

  13. Development of Inspection Robots for Bridge Cables

    PubMed Central

    Kim, Se-Hoon; Lee, Jong-Jae

    2013-01-01

    This paper presents bridge cable inspection robots developed in Korea. Two types of cable inspection robots were developed, one for suspension bridges and one for cable-stayed bridges. The design of the robot system and the performance of the NDT techniques associated with the cable inspection robots are discussed. A review of recent advances in emerging robot-based inspection technologies for bridge cables and of current bridge cable inspection methods is also presented. PMID:24459453

  14. Analysis of Unmanned Systems in Military Logistics

    DTIC Science & Technology

    2016-12-01

    Opportunities to employ unmanned systems to support logistic operations. Subject terms: unmanned systems, robotics, UAVs, UGVs, USVs, UUVs, military ... Industrial Robots at Warehouses / Distribution Centers ... Autonomous Robot Gun Turret (Source: Blain, 2010) ... Robot Sentries for Base Patrol

  15. An integrated gait rehabilitation training based on Functional Electrical Stimulation cycling and overground robotic exoskeleton in complete spinal cord injury patients: Preliminary results.

    PubMed

    Mazzoleni, S; Battini, E; Rustici, A; Stampacchia, G

    2017-07-01

    The aim of this study is to investigate the effects of an integrated gait rehabilitation training based on Functional Electrical Stimulation (FES)-cycling and an overground robotic exoskeleton on spasticity and patient-robot interaction in a group of seven complete spinal cord injury patients. They underwent a robot-assisted rehabilitation training based on two phases: n = 20 sessions of FES-cycling followed by n = 20 sessions of robot-assisted gait training based on an overground robotic exoskeleton. The following clinical outcome measures were used: Modified Ashworth Scale (MAS), Numerical Rating Scale (NRS) on spasticity, Penn Spasm Frequency Scale (PSFS), Spinal Cord Independence Measure Scale (SCIM), NRS on pain and International Spinal Cord Injury Pain Data Set (ISCI). Clinical outcome measures were assessed before (T0) and after (T1) the FES-cycling training and after (T2) the powered overground gait training. The ability to walk when using the exoskeleton was assessed by means of the 10 Meter Walk Test (10MWT), 6 Minute Walk Test (6MWT), Timed Up and Go test (TUG), standing time, walking time and number of steps. Statistically significant changes were found in the MAS score, NRS-spasticity, 6MWT, TUG, standing time and number of steps. The preliminary results of this study show that an integrated gait rehabilitation training based on FES-cycling and an overground robotic exoskeleton in complete SCI patients can provide a significant reduction of spasticity and improvements in terms of patient-robot interaction.

  16. Analysis and experimental kinematics of a skid-steering wheeled robot based on a laser scanner sensor.

    PubMed

    Wang, Tianmiao; Wu, Yao; Liang, Jianhong; Han, Chenhao; Chen, Jiao; Zhao, Qiteng

    2015-04-24

    Skid-steering mobile robots are widely used because of their simple mechanism and robustness. However, due to the complex wheel-ground interactions and the kinematic constraints, it is a challenge to understand the kinematics and dynamics of such a robotic platform. In this paper, we develop an analysis and experimental kinematic scheme for a skid-steering wheeled vehicle based on a laser scanner sensor. The kinematics model is established based on the boundedness of the instantaneous centers of rotation (ICR) of the treads on the 2D motion plane. The kinematic parameters (the ICR coefficient, the path curvature variable, and the robot speed), including the effect of vehicle dynamics, are introduced to describe the kinematics model. Then, an exact but costly dynamic model is used, and the simulation of this model's stationary response for the vehicle shows a qualitative relationship among the specified parameters. Moreover, the parameters of the kinematic model are determined based on a laser scanner localization experimental analysis method with a skid-steering robotic platform, Pioneer P3-AT. The relationship between the ICR coefficient and two physical factors is studied, i.e., the radius of the path curvature and the robot speed. An empirical function-based relationship between the ICR coefficient of the robot and the path parameters is derived. To validate the obtained results, it is empirically demonstrated that the proposed kinematics model significantly improves the dead-reckoning performance of this skid-steering robot.
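    A sketch of an ICR-based skid-steering kinematic model of the kind described above, under assumptions: left/right tread speeds map to a body twist through an ICR coefficient that widens the effective track to account for slip, and the twist is integrated for dead reckoning. The coefficient value and geometry below are illustrative; in the paper the coefficient is identified experimentally from laser-scanner localisation and depends on path curvature and speed.

    ```python
    import numpy as np

    def skid_steer_twist(v_left, v_right, track_width=0.4, icr_coeff=1.5):
        """Return (linear velocity v, angular velocity w) of the robot body."""
        v = 0.5 * (v_left + v_right)
        # Slip inflates the kinematic track width by the ICR coefficient.
        w = (v_right - v_left) / (icr_coeff * track_width)
        return v, w

    def dead_reckon(pose, v, w, dt=0.05):
        """Integrate the body twist to update the planar pose (x, y, theta)."""
        x, y, th = pose
        return (x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + w * dt)

    pose = (0.0, 0.0, 0.0)
    for _ in range(200):                       # constant-command arc
        v, w = skid_steer_twist(0.4, 0.6)
        pose = dead_reckon(pose, v, w)
    print(pose)
    ```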

  17. ARC - A source of multisensor satellite data for polar science

    NASA Technical Reports Server (NTRS)

    Van Woert, Michael L.; Whritner, Robert H.; Waliser, Duane E.; Bromwich, David H.; Comiso, J. C.

    1992-01-01

    The NSF's Antarctic Research Center (ARC) has been established to furnish real-time polar-orbiting satellite data in support of Antarctic field studies, as well as to maintain a multisensor satellite data (MSD) archive for retrospective data analysis. An account is presently given of the ways in which the complementary nature of an MSD set can deepen understanding of Antarctic physical processes. An active microwave SAR with 30-m resolution and a radar altimeter will be added to the ARC resources later in this decade, as will the Earth Observing System.

  18. The use of multisensor images for Earth Science applications

    NASA Technical Reports Server (NTRS)

    Evans, D.; Stromberg, B.

    1983-01-01

    The use of more than one remote sensing technique is particularly important for Earth Science applications because of the compositional and textural information derivable from the images. The ability to simultaneously analyze images acquired by different sensors requires coregistration of the multisensor image data sets. In order to ensure pixel-to-pixel registration in areas of high relief, images must be rectified to eliminate topographic distortions. Coregistered images can be analyzed using a variety of multidimensional techniques, and the acquired knowledge of topographic effects in the images can be used in photogeologic interpretations.

  19. Collective search by mobile robots using alpha-beta coordination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldsmith, S.Y.; Robinett, R. III

    1998-04-01

    One important application of mobile robots is searching a geographical region to locate the origin of a specific sensible phenomenon. Mapping mine fields, extraterrestrial and undersea exploration, the location of chemical and biological weapons, and the location of explosive devices are just a few potential applications. Teams of robotic bloodhounds have a simple common goal; to converge on the location of the source phenomenon, confirm its intensity, and to remain aggregated around it until directed to take some other action. In cases where human intervention through teleoperation is not possible, the robot team must be deployed in a territory without supervision, requiring an autonomous decentralized coordination strategy. This paper presents the alpha beta coordination strategy, a family of collective search algorithms that are based on dynamic partitioning of the robotic team into two complementary social roles according to a sensor based status measure. Robots in the alpha role are risk takers, motivated to improve their status by exploring new regions of the search space. Robots in the beta role are motivated to improve but are conservative, and tend to remain aggregated and stationary until the alpha robots have identified better regions of the search space. Roles are determined dynamically by each member of the team based on the status of the individual robot relative to the current state of the collective. Partitioning the robot team into alpha and beta roles results in a balance between exploration and exploitation, and can yield collective energy savings and improved resistance to sensor noise and defectors. Alpha robots waste energy exploring new territory, and are more sensitive to the effects of ambient noise and to defectors reporting inflated status. Beta robots conserve energy by moving in a direct path to regions of confirmed high status.
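
    As a reading aid, a toy sketch of the dynamic role partitioning described above, assuming each robot's "status" is simply its latest sensor reading and that robots above the team median act as betas (aggregate) while the rest act as alphas (explore); the median threshold and the step rules are illustrative assumptions, not the published algorithm.

      import random
      import statistics

      def alpha_beta_step(positions, status, explore_step=1.0):
          """One synchronous update: betas (high status) drift toward the best
          robot, alphas (low status) take a random exploratory step."""
          threshold = statistics.median(status)          # dynamic role boundary
          best = max(range(len(status)), key=lambda i: status[i])
          new_positions = []
          for i, (x, y) in enumerate(positions):
              if i == best:                              # current best stays put
                  new_positions.append((x, y))
              elif status[i] >= threshold:               # beta: conservative, aggregate
                  bx, by = positions[best]
                  new_positions.append((x + 0.2 * (bx - x), y + 0.2 * (by - y)))
              else:                                      # alpha: risk-taking explorer
                  new_positions.append((x + random.uniform(-explore_step, explore_step),
                                        y + random.uniform(-explore_step, explore_step)))
          return new_positions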

  20. Effect of motor dynamics on nonlinear feedback robot arm control

    NASA Technical Reports Server (NTRS)

    Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping

    1991-01-01

    A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further experimentally evaluated on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.
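
    For orientation only, a generic computed-torque (feedback-linearizing) control law for the second-order manipulator dynamics; the paper's controller additionally folds in the motor dynamics to obtain a third-order model, which this simplified sketch does not attempt. The single-joint dynamics values below are made up.

      import numpy as np

      def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, kp=100.0, kd=20.0):
          """tau = M(q)(qdd_des + Kd*e_dot + Kp*e) + C(q, qd)*qd + g(q):
          cancels the nonlinear dynamics, leaving linear decoupled error dynamics."""
          e = q_des - q
          ed = qd_des - qd
          v = qdd_des + kd * ed + kp * e            # outer linear loop
          return M(q) @ v + C(q, qd) @ qd + g(q)    # inner linearizing loop

      # Single-link example with made-up inertia, damping, and gravity terms.
      M = lambda q: np.array([[1.2]])
      C = lambda q, qd: np.array([[0.05]])
      g = lambda q: np.array([2.0 * np.sin(q[0])])
      tau = computed_torque(np.array([0.1]), np.array([0.0]),
                            np.array([0.5]), np.array([0.0]), np.array([0.0]), M, C, g)
      print(tau)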

  1. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    NASA Astrophysics Data System (ADS)

    Chen, H. J.; Gao, B. T.; Zhang, X. H.; Deng2, Z. Q.

    2006-10-01

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude oil pipelines, an original mobile pipeline robot with a crawler drive unit, power and monitor unit, central control unit, and ultrasonic wave inspection device is developed. The CAN bus connects these different function units and provides a reliable information channel. Considering the limited space, a compact hardware system is designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system is developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion control requirements of the underwater pipeline crawl robot.
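
    A small sketch of how a drive-command frame might be sent over a CAN bus using the python-can package; the arbitration ID, the payload layout (left/right crawler speeds), and the channel name are made-up assumptions, since the paper's made-to-order protocol is not specified.

      import can

      DRIVE_CMD_ID = 0x120  # hypothetical arbitration ID for the crawler drive unit

      def send_drive_command(bus, left_speed, right_speed):
          """Pack two signed 8-bit crawler speeds (-100..100 %) into one CAN frame."""
          data = [left_speed & 0xFF, right_speed & 0xFF]
          msg = can.Message(arbitration_id=DRIVE_CMD_ID, data=data, is_extended_id=False)
          bus.send(msg)

      if __name__ == "__main__":
          # SocketCAN channel name is an assumption; adjust to the actual hardware.
          bus = can.interface.Bus(channel="can0", bustype="socketcan")
          send_drive_command(bus, left_speed=30, right_speed=30)  # crawl forward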

  2. Lameness detection in dairy cattle: single predictor v. multivariate analysis of image-based posture processing and behaviour and performance sensing.

    PubMed

    Van Hertem, T; Bahr, C; Schlageter Tello, A; Viazzi, S; Steensels, M; Romanini, C E B; Lokhorst, C; Maltz, E; Halachmi, I; Berckmans, D

    2016-09-01

    The objective of this study was to evaluate whether a multi-sensor system (milk, activity, body posture) was a better classifier for lameness than the single-sensor-based detection models. Between September 2013 and August 2014, 3629 cow observations were collected on a commercial dairy farm in Belgium. Human locomotion scoring was used as the reference for model development and evaluation. Cow behaviour and performance were measured with existing sensors that were already present at the farm. A prototype three-dimensional video recording system was used to automatically quantify the back posture of a cow. For the single-predictor comparisons, a receiver operating characteristics curve was made. For the multivariate detection models, logistic regression and generalized linear mixed models (GLMM) were developed. The best lameness classification model was obtained by the multi-sensor analysis (area under the receiver operating characteristics curve (AUC)=0.757±0.029), containing a combination of milk and milking variables, activity and gait and posture variables from videos. Second, the multivariate video-based system (AUC=0.732±0.011) performed better than the multivariate milk sensors (AUC=0.604±0.026) and the multivariate behaviour sensors (AUC=0.633±0.018). The video-based system performed better than the combined behaviour and performance-based detection model (AUC=0.669±0.028), indicating that it is worthwhile to consider a video-based lameness detection system, regardless of the presence of other existing sensors on the farm. The results suggest that Θ2, the feature variable for the back curvature around the hip joints, with an AUC of 0.719, is the best single predictor variable for lameness detection based on locomotion scoring. In general, this study showed that the video-based back posture monitoring system outperforms the behaviour and performance sensing techniques for locomotion scoring-based lameness detection. A GLMM with seven specific variables (walking speed, back posture measurement, daytime activity, milk yield, lactation stage, milk peak flow rate and milk peak conductivity) is the best combination of variables for lameness classification. The accuracy of four-level lameness classification was 60.3%. The accuracy improved to 79.8% for binary lameness classification. The binary GLMM obtained a sensitivity of 68.5% and a specificity of 87.6%, which both exceed the sensitivity (52.1%±4.7%) and specificity (83.2%±2.3%) of the multi-sensor logistic regression model. This shows that the repeated-measures analysis in the GLMM, taking into account the individual history of the animal, outperforms the classification when thresholds based on herd level (a statistical population) are used.
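
    A schematic illustration, using scikit-learn on synthetic data, of the comparison reported above: the AUC of a single predictor versus a multivariate logistic regression over combined sensor features. The feature names and the synthetic data are placeholders; the study's actual variables and the GLMM are not reproduced here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n = 1000
      lame = rng.integers(0, 2, n)                        # synthetic lameness labels
      back_posture = lame + rng.normal(0, 1.2, n)         # stand-in posture feature
      milk_yield = -0.5 * lame + rng.normal(0, 1.0, n)    # stand-in milk variable
      activity = -0.7 * lame + rng.normal(0, 1.0, n)      # stand-in activity variable

      # Single-predictor AUC (cf. the back-curvature feature reported above).
      print("single predictor AUC:", roc_auc_score(lame, back_posture))

      # Multivariate model combining all sensor groups.
      X = np.column_stack([back_posture, milk_yield, activity])
      clf = LogisticRegression().fit(X, lame)
      print("multi-sensor AUC:", roc_auc_score(lame, clf.predict_proba(X)[:, 1]))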

  3. Underwater (UW) Unexploded Ordnance (UXO) Multi-Sensor Data Base (MSDB) Collection

    DTIC Science & Technology

    2009-07-01

    Excerpt from report figures: the RTG sensor comprises four sensor triads, each with a 3-axis fluxgate magnetometer (internal, not clearly visible) and a set of three feedback coils; the outputs of the triads are used by RTG to measure the gradients.

  4. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolvability, and reduction of life-cycle cost. The general performances of the MGS are presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  5. Aging time and brand determination of pasteurized milk using a multisensor e-nose combined with a voltammetric e-tongue.

    PubMed

    Bougrini, Madiha; Tahri, Khalid; Haddi, Zouhair; El Bari, Nezha; Llobet, Eduard; Jaffrezic-Renault, Nicole; Bouchikhi, Benachir

    2014-12-01

    A combined approach based on a multisensor system to obtain additional chemical information from liquid samples through the analysis of the solution and its headspace is illustrated and discussed. In the present work, innovative analytical techniques, namely a hybrid e-nose and a voltammetric e-tongue, were developed to differentiate between different pasteurized milk brands and to recognize their storage days exactly through data fusion of the combined system. Principal Component Analysis (PCA) showed an acceptable discrimination of the pasteurized milk brands on the first day of storage when the two instruments were used independently. In contrast, PCA indicated that no clear discrimination of storage days can be drawn when the two instruments are applied separately. A mid-level-of-abstraction data fusion approach demonstrated that the fused results outperformed the classification results of the e-nose and e-tongue taken individually. Furthermore, the Support Vector Machine (SVM) supervised method was applied to the new subset and confirmed that all storage days were correctly identified. This study can be generalized to several beverage and food products whose quality is based on the perception of odor and flavor. Copyright © 2014 Elsevier B.V. All rights reserved.
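
    A minimal sketch of mid-level data fusion in the spirit described above: features from the two instruments are concatenated, projected with PCA, and classified with an SVM. The array shapes and synthetic data are assumptions for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n_samples, n_nose, n_tongue = 120, 16, 40
      storage_day = rng.integers(0, 4, n_samples)                    # class label (day)
      e_nose = rng.normal(0, 1, (n_samples, n_nose)) + storage_day[:, None] * 0.3
      e_tongue = rng.normal(0, 1, (n_samples, n_tongue)) + storage_day[:, None] * 0.2

      fused = np.hstack([e_nose, e_tongue])    # mid-level fusion: concatenate features
      model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
      model.fit(fused, storage_day)
      print("training accuracy:", model.score(fused, storage_day))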

  6. A Gap-Filling Procedure for Hydrologic Data Based on Kalman Filtering and Expectation Maximization: Application to Data from the Wireless Sensor Networks of the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Coogan, A.; Avanzi, F.; Akella, R.; Conklin, M. H.; Bales, R. C.; Glaser, S. D.

    2017-12-01

    Automatic meteorological and snow stations provide large amounts of information at dense temporal resolution, but data quality is often compromised by noise and missing values. We present a new gap-filling and cleaning procedure for networks of these stations based on Kalman filtering and expectation maximization. Our method utilizes a multi-sensor, regime-switching Kalman filter to learn a latent process that captures dependencies between nearby stations and handles sharp changes in snowfall rate. Since the latent process is inferred using observations across working stations in the network, it can be used to fill in large data gaps for a malfunctioning station. The procedure was tested on meteorological and snow data from Wireless Sensor Networks (WSN) in the American River basin of the Sierra Nevada. Data include air temperature, relative humidity, and snow depth from dense networks of 10 to 12 stations within 1 km2 swaths. Both wet and dry water years have similar data issues. Data with artificially created gaps were used to quantify the method's performance. Our multi-sensor approach performs better than a single-sensor one, especially with large data gaps, as it learns and exploits the dominant underlying processes in snowpack at each site.
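
    A much-simplified sketch of the filter-and-impute idea using the pykalman package: a single linear-Gaussian Kalman filter with EM-learned noise parameters smooths a multi-station series and fills gaps marked as missing. The regime-switching behaviour described above is not captured; the station count, observation model, and data are placeholders.

      import numpy as np
      from pykalman import KalmanFilter

      rng = np.random.default_rng(2)
      t = np.arange(500)
      truth = np.sin(2 * np.pi * t / 200)                    # shared latent signal
      obs = truth[:, None] + rng.normal(0, 0.1, (500, 3))    # three nearby stations
      obs[120:180, 0] = np.nan                               # malfunctioning station gap

      masked = np.ma.masked_invalid(obs)                     # mask marks missing values
      kf = KalmanFilter(transition_matrices=np.eye(1),
                        observation_matrices=np.ones((3, 1)),
                        n_dim_obs=3, n_dim_state=1).em(masked, n_iter=5)
      state_means, _ = kf.smooth(masked)

      # Gap-filled estimate for station 0 over the missing stretch:
      print(state_means[120:180, 0][:5])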

  7. CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.

    PubMed

    Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V

    2010-12-01

    We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520 elements) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.

  8. Integrated multisensor perimeter detection systems

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Fretwell, P.; Barrett, D. J.; Faulkner, D. A.

    2007-10-01

    The report describes the results of a multi-year programme of research aimed at the development of an integrated multi-sensor perimeter detection system capable of being deployed at an operational site. The research was driven by end user requirements in protective security, particularly in threat detection and assessment, where effective capability was either not available or prohibitively expensive. Novel video analytics have been designed to provide robust detection of pedestrians in clutter while new radar detection and tracking algorithms provide wide area day/night surveillance. A modular integrated architecture based on commercially available components has been developed. A graphical user interface allows intuitive interaction and visualisation with the sensors. The fusion of video, radar and other sensor data provides the basis of a threat detection capability for real life conditions. The system was designed to be modular and extendable in order to accommodate future and legacy surveillance sensors. The current sensor mix includes stereoscopic video cameras, mmWave ground movement radar, CCTV and a commercially available perimeter detection cable. The paper outlines the development of the system and describes the lessons learnt after deployment in a pilot trial.

  9. OLDER ADULTS’ PREFERENCES FOR AND ACCEPTANCE OF ROBOT ASSISTANCE FOR EVERYDAY LIVING TASKS

    PubMed Central

    Smarr, Cory-Ann; Prakash, Akanksha; Beer, Jenay M.; Mitzner, Tracy L.; Kemp, Charles C.; Rogers, Wendy A.

    2014-01-01

    Many older adults value their independence and prefer to age in place. Robots can be designed to assist older people with performing everyday living tasks and maintaining their independence at home. Yet, there is a scarcity of knowledge regarding older adults’ attitudes toward robots and their preferences for robot assistance. Twenty-one older adults (M = 80.25 years old, SD = 7.19) completed questionnaires and participated in structured group interviews investigating their openness to and preferences for assistance from a mobile manipulator robot. Although the older adults were generally open to robot assistance for performing home-based tasks, they were selective in their views. Older adults preferred robot assistance over human assistance for many instrumental (e.g., housekeeping, laundry, medication reminders) and enhanced activities of daily living (e.g., new learning, hobbies). However, older adults were less open to robot assistance for some activities of daily living (e.g., shaving, hair care). Results from this study provide insight into older adults’ attitudes toward robot assistance with home-based everyday living tasks. PMID:25284971

  10. Children’s Imaginaries of Human-Robot Interaction in Healthcare

    PubMed Central

    2018-01-01

    This paper analyzes children’s imaginaries of Human-Robots Interaction (HRI) in the context of social robots in healthcare, and it explores ethical and social issues when designing a social robot for a children’s hospital. Based on approaches that emphasize the reciprocal relationship between society and technology, the analytical force of imaginaries lies in their capacity to be embedded in practices and interactions as well as to affect the construction and applications of surrounding technologies. The study is based on a participatory process carried out with six-year-old children for the design of a robot. Imaginaries of HRI are analyzed from a care-centered approach focusing on children’s values and practices as related to their representation of care. The conceptualization of HRI as an assemblage of interactions, the prospective bidirectional care relationships with robots, and the engagement with the robot as an entity of multiple potential robots are the major findings of this study. The study shows the potential of studying imaginaries of HRI, and it concludes that their integration in the final design of robots is a way of including ethical values in it. PMID:29757221

  11. Optical assembly of bio-hybrid micro-robots.

    PubMed

    Barroso, Álvaro; Landwerth, Shirin; Woerdemann, Mike; Alpmann, Christina; Buscher, Tim; Becker, Maike; Studer, Armido; Denz, Cornelia

    2015-04-01

    The combination of synthetic microstructures with bacterial flagella motors represents a current trend in the construction of self-propelled micro-robots. The development of methods for fabrication of these bacteria-based robots is a first crucial step towards the realization of functional, miniature, autonomously moving robots. We present a novel scheme based on optical trapping to fabricate living micro-robots. By using holographic optical tweezers that allow three-dimensional manipulation in real time, we are able to arrange the building blocks that constitute the micro-robot in a defined way. We demonstrate, as an example, that our method enables the controlled assembly of living micro-robots consisting of a rod-shaped prokaryotic bacterium and a single elongated zeolite L crystal, which are used as models of the biological and abiotic components, respectively. We present different proof-of-principle approaches for the site-selective attachment of the bacteria on the particle surface. The propulsion of the optically assembled micro-robot demonstrates the potential of the proposed method as a powerful strategy for the fabrication of bio-hybrid micro-robots.

  12. Children's Imaginaries of Human-Robot Interaction in Healthcare.

    PubMed

    Vallès-Peris, Núria; Angulo, Cecilio; Domènech, Miquel

    2018-05-12

    This paper analyzes children’s imaginaries of Human-Robots Interaction (HRI) in the context of social robots in healthcare, and it explores ethical and social issues when designing a social robot for a children’s hospital. Based on approaches that emphasize the reciprocal relationship between society and technology, the analytical force of imaginaries lies in their capacity to be embedded in practices and interactions as well as to affect the construction and applications of surrounding technologies. The study is based on a participatory process carried out with six-year-old children for the design of a robot. Imaginaries of HRI are analyzed from a care-centered approach focusing on children’s values and practices as related to their representation of care. The conceptualization of HRI as an assemblage of interactions, the prospective bidirectional care relationships with robots, and the engagement with the robot as an entity of multiple potential robots are the major findings of this study. The study shows the potential of studying imaginaries of HRI, and it concludes that their integration in the final design of robots is a way of including ethical values in it.

  13. Serendipitous Offline Learning in a Neuromorphic Robot.

    PubMed

    Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg

    2016-01-01

    We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.

  14. Robots in Space -Psychological Aspects

    NASA Technical Reports Server (NTRS)

    Sipes, Walter E.

    2006-01-01

    A viewgraph presentation on the psychological aspects of developing robots to perform routine operations associated with monitoring, inspection, maintenance and repair in space is shown. The topics include: 1) Purpose; 2) Vision; 3) Current Robots in Space; 4) Ground Based Robots; 5) AERCam; 6) Rotating Bladder Robot (ROBLR); 7) DART; 8) Robonaut; 9) Full Immersion Telepresence Testbed; 10) ERA; and 11) Psychological Aspects

  15. Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.

    PubMed

    Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers.

  16. Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots

    PubMed Central

    Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers. PMID:26999614

  17. Kinematics optimization and static analysis of a modular continuum robot used for minimally invasive surgery.

    PubMed

    Qi, Fei; Ju, Feng; Bai, Dong Ming; Chen, Bai

    2018-02-01

    Owing to their outstanding compliance and dexterity, continuum robots are increasingly used in minimally invasive surgery. A wide workspace, high dexterity and strong payload capacity are essential for a continuum robot. In this article, we investigate the workspace of a cable-driven continuum robot that we proposed. The influence of the number of sections on the workspace is discussed when the robot is operated in a narrow environment. Meanwhile, the structural parameters of this continuum robot are optimized to achieve better kinematic performance. Moreover, an indicator based on the dexterous solid angle for evaluating the dexterity of the robot is introduced, and the distal-end dexterity is compared for the three-section continuum robot with different ranges of variables. Results imply that a wider range of variables achieves better dexterity. Finally, the static model of the robot based on the principle of virtual work is derived to analyze the relationship between the bending shape deformation and the driving force. Simulations and experiments for planar and spatial motions are conducted to validate the feasibility of the model. The results of this article can contribute to real-time control and movement and can serve as a design reference for cable-driven continuum robots.

  18. A Reconfigurable Omnidirectional Soft Robot Based on Caterpillar Locomotion.

    PubMed

    Zou, Jun; Lin, Yangqiao; Ji, Chen; Yang, Huayong

    2018-04-01

    A pneumatically powered, reconfigurable omnidirectional soft robot based on caterpillar locomotion is described. The robot is composed of nine modules arranged as a three by three matrix and the length of this matrix is 154 mm. The robot propagates a traveling wave inspired by caterpillar locomotion, and it has all three degrees of freedom on a plane (X, Y, and rotation). The speed of the robot is about 18.5 m/h (two body lengths per minute) and it can rotate at a speed of 1.63°/s. The modules have neodymium-iron-boron (NdFeB) magnets embedded and can be easily replaced or combined into other configurations. Two different configurations are presented to demonstrate the possibilities of the modular structure: (1) by removing some modules, the omnidirectional robot can be reassembled into a form that can crawl in a pipe and (2) two omnidirectional robots can crawl close to each other and be assembled automatically into a bigger omnidirectional robot. Omnidirectional motion is important for soft robots to explore unstructured environments. The modular structure gives the soft robot the ability to cope with the challenges of different environments and tasks.

  19. Soldier-Based Assessment of a Dual-Row Tactor Display during Simultaneous Navigational and Robot-Monitoring Tasks

    DTIC Science & Technology

    2015-08-01

    Report documentation excerpt: Soldier-Based Assessment of a Dual-Row Tactor Display during Simultaneous Navigational and Robot-Monitoring Tasks, by Gina Pomranky-Hartnett, Linda R Elliott, Bruce JP Mortimer, Greg R Mort, Rodger A Pettitt, and Gary A. [truncated]; reporting period 2014–31 March 2015.

  20. Validity evidence for procedural competency in virtual reality robotic simulation, establishing a credible pass/fail standard for the vaginal cuff closure procedure.

    PubMed

    Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg

    2018-03-30

    The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test of procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. The internal consistency was assessed using Cronbach's alpha and a composite score was calculated based on metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups' method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score for all six repetitions was significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study has gathered validity evidence for a simulation-based test for procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.

  1. Classifying a Person's Degree of Accessibility From Natural Body Language During Social Human-Robot Interactions.

    PubMed

    McColl, Derek; Jiang, Chuan; Nejat, Goldie

    2017-02-01

    For social robots to be successfully integrated and accepted within society, they need to be able to interpret human social cues that are displayed through natural modes of communication. In particular, a key challenge in the design of social robots is developing the robot's ability to recognize a person's affective states (emotions, moods, and attitudes) in order to respond appropriately during social human-robot interactions (HRIs). In this paper, we present and discuss social HRI experiments we have conducted to investigate the development of an accessibility-aware social robot able to autonomously determine a person's degree of accessibility (rapport, openness) toward the robot based on the person's natural static body language. In particular, we present two one-on-one HRI experiments to: 1) determine the performance of our automated system in being able to recognize and classify a person's accessibility levels and 2) investigate how people interact with an accessibility-aware robot which determines its own behaviors based on a person's speech and accessibility levels.

  2. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a widely adopted platform in the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.
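
    Since the passage concerns teaching sampling-based motion planning, here is a bare-bones 2-D RRT sketch of the kind such projects typically start from; it is not part of the described software (whose planning core is shared with ROS), and the obstacle, bounds, and step size are arbitrary.

      import math
      import random

      def collision_free(p):
          # Single circular obstacle at (5, 5) with radius 2 (arbitrary example world).
          return math.hypot(p[0] - 5, p[1] - 5) > 2.0

      def rrt(start, goal, iters=2000, step=0.5, bounds=(0.0, 10.0)):
          """Grow a tree from start by extending the nearest node toward random samples."""
          nodes, parent = [start], {0: None}
          for _ in range(iters):
              sample = goal if random.random() < 0.05 else (
                  random.uniform(*bounds), random.uniform(*bounds))
              i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
              nx, ny = nodes[i]
              d = math.dist((nx, ny), sample)
              if d < 1e-9:
                  continue
              new = sample if d <= step else (nx + step * (sample[0] - nx) / d,
                                              ny + step * (sample[1] - ny) / d)
              if collision_free(new):
                  parent[len(nodes)] = i
                  nodes.append(new)
                  if math.dist(new, goal) < step:
                      return nodes, parent          # goal reached
          return nodes, parent

      nodes, parent = rrt((1.0, 1.0), (9.0, 9.0))
      print(len(nodes), "nodes in the tree")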

  3. Emergent of Burden Sharing of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Kusano, Takuya; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has many advantages compared with a single-robot system: it can adapt to various circumstances and offers flexibility across a variety of tasks. In a multi-robot system, the robots need to build cooperative relations and act as an organization to attain a purpose. The group behavior of insects, which lack advanced individual abilities, is instructive here. For example, ants, a social insect, produce organized activity through interactions based on a very simple mechanism; while ants communicate with chemical substances, humans communicate through words and gestures. In this paper, we focus on interaction from a psychological viewpoint, and a human emotion model is used as the parameter underlying the motion planning of the robots. The robots were made to perform two-way actions in a test field containing obstacles. As a result, burden sharing, such as guiding or carrying, emerged even though the robots had only a simple setup.

  4. Tele-rehabilitation using in-house wearable ankle rehabilitation robot.

    PubMed

    Jamwal, Prashant K; Hussain, Shahid; Mir-Nasiri, Nazim; Ghayesh, Mergen H; Xie, Sheng Q

    2018-01-01

    This article explores the wide-ranging potential of a wearable ankle robot for in-house rehabilitation. The presented robot has been conceptualized following a brief analysis of the existing technologies, systems, and solutions for in-house physical ankle rehabilitation. Configuration design analysis and component selection for the ankle robot have been discussed as part of the conceptual design. The complexities of human-robot interaction are closely encountered while maneuvering a rehabilitation robot. We present a fuzzy logic-based controller to perform the required robot-assisted ankle rehabilitation treatment. Designs of visual haptic interfaces have also been discussed, which will make the treatment interesting and motivate the subject to exert more effort and regain lost functions more rapidly. The complex nature of web-based communication between the user and remotely located physiotherapy staff has also been discussed. A high-level software architecture appended to the robot ensures user-friendly operation. This software is made up of three important components: a patient-related database, a graphical user interface (GUI), and a library of virtual-reality exercises developed specifically for ankle rehabilitation.
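
    A compact, hand-rolled illustration of the kind of fuzzy rule base mentioned above (Sugeno-style, with triangular membership functions over the ankle-angle tracking error); the membership breakpoints, rule outputs, and torque scale are invented for illustration and do not come from the article.

      def tri(x, a, b, c):
          """Triangular membership function with peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def fuzzy_assist_torque(error_deg):
          """Map ankle-angle tracking error (deg) to an assistive torque (Nm)."""
          # Fuzzy sets over the error (breakpoints in degrees are illustrative).
          memberships = {
              "neg_large": tri(error_deg, -30, -20, -5),
              "near_zero": tri(error_deg, -10, 0, 10),
              "pos_large": tri(error_deg, 5, 20, 30),
          }
          # Sugeno-style rule consequents: crisp torque per rule (Nm, illustrative).
          rule_output = {"neg_large": -8.0, "near_zero": 0.0, "pos_large": 8.0}
          num = sum(memberships[k] * rule_output[k] for k in memberships)
          den = sum(memberships.values())
          return num / den if den > 0 else 0.0

      print(fuzzy_assist_torque(8.0))   # partial positive assistance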

  5. Task path planning, scheduling and learning for free-ranging robot systems

    NASA Technical Reports Server (NTRS)

    Wakefield, G. Steve

    1987-01-01

    The development of robotics applications for space operations is often restricted by the limited movement available to guided robots. Free ranging robots can offer greater flexibility than physically guided robots in these applications. Presented here is an object oriented approach to path planning and task scheduling for free-ranging robots that allows the dynamic determination of paths based on the current environment. The system also provides task learning for repetitive jobs. This approach provides a basis for the design of free-ranging robot systems which are adaptable to various environments and tasks.

  6. Development of the first force-controlled robot for otoneurosurgery.

    PubMed

    Federspil, Philipp A; Geisthoff, Urban W; Henrich, Dominik; Plinkert, Peter K

    2003-03-01

    In some surgical specialties (eg, orthopedics), robots are already used in the operating room for bony milling work. Otological surgery and otoneurosurgery may also greatly benefit from the enhanced precision of robotics. Experimental study on robotic milling of oak wood and human temporal bone specimens. A standard industrial robot with six-degrees-of-freedom serial kinematics was used, with force feedback to proportionally control the robot speed. Different milling modes and characteristic path parameters were evaluated to generate milling paths based on computer-aided design (CAD) geometry data of a cochlear implant and an implantable hearing system. The best-suited strategy proved to be the spiral horizontal milling mode with the burr held perpendicular to the temporal bone surface. To reduce groove height, the distance between paths should equal half the radius of the cutting burr head. Because of the vibration of the robot's own motors, a high oscillation of the SD of forces was encountered. This oscillation dropped drastically to nearly 0 Newton (N) when the burr head made contact with the dura mater, because of its damping characteristics. The cutting burr could be kept in contact with the dura mater for an extended period without damaging it, because of the burr's blunt head form. The robot moved the burr smoothly according to the encountered resistances. The study reports the first development of a functional robotic milling procedure for otoneurosurgery with force-based speed control. Future plans include implementation of ultrasound-based local navigation and performance of robotic mastoidectomy.
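
    A toy sketch of force-proportional feed-rate control of the kind described: the commanded feed speed is scaled down as the measured milling force approaches a limit, and motion stops above it. The force limit and speed values are invented, not those used in the study.

      def feed_speed(measured_force_n, v_max_mm_s=5.0, force_limit_n=8.0):
          """Scale the milling feed rate inversely with the measured force.
          At zero force the burr advances at v_max; at the limit it stops."""
          if measured_force_n >= force_limit_n:
              return 0.0
          return v_max_mm_s * (1.0 - measured_force_n / force_limit_n)

      for f in (0.0, 2.0, 6.0, 9.0):
          print(f, "N ->", round(feed_speed(f), 2), "mm/s")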

  7. A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II

    NASA Astrophysics Data System (ADS)

    Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki

    The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for the automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower motion-error estimation algorithm based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.

  8. Design of multifunction anti-terrorism robotic system based on police dog

    NASA Astrophysics Data System (ADS)

    You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie

    2007-11-01

    To address typical limitations of police dogs and robots currently used in reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on a police dog is introduced. The system is made up of two parts: a portable commanding device and the police dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module and commanding module, which implements remote control of the police dogs and real-time monitoring of the video and images. The police dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module and offensive weapon module, which collects and transmits video and image data from the counter-terrorism site in real time and delivers an attack on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of the mechanical structure and control algorithms of general anti-terrorism robots, but it also widens the working scope of the police dog, which meets the requirements of anti-terrorism in the new era.

  9. Crew/Robot Coordinated Planetary EVA Operations at a Lunar Base Analog Site

    NASA Technical Reports Server (NTRS)

    Diftler, M. A.; Ambrose, R. O.; Bluethmann, W. J.; Delgado, F. J.; Herrera, E.; Kosmo, J. J.; Janoiko, B. A.; Wilcox, B. H.; Townsend, J. A.; Matthews, J. B.; hide

    2007-01-01

    Under the direction of NASA's Exploration Technology Development Program, robots and space suited subjects from several NASA centers recently completed a very successful demonstration of coordinated activities indicative of base camp operations on the lunar surface. For these activities, NASA chose a site near Meteor Crater, Arizona close to where Apollo Astronauts previously trained. The main scenario demonstrated crew returning from a planetary EVA (extra-vehicular activity) to a temporary base camp and entering a pressurized rover compartment while robots performed tasks in preparation for the next EVA. Scenario tasks included: rover operations under direct human control and autonomous modes, crew ingress and egress activities, autonomous robotic payload removal and stowage operations under both local control and remote control from Houston, and autonomous robotic navigation and inspection. In addition to the main scenario, participants had an opportunity to explore additional robotic operations: hill climbing, maneuvering heavy loads, gathering geological samples, drilling, and tether operations. In this analog environment, the suited subjects and robots experienced high levels of dust, rough terrain, and harsh lighting.

  10. Does assist-as-needed upper limb robotic therapy promote participation in repetitive activity-based motor training in sub-acute stroke patients with severe paresis?

    PubMed

    Grosmaire, Anne-Gaëlle; Duret, Christophe

    2017-01-01

    Repetitive, active movement-based training promotes brain plasticity and motor recovery after stroke. Robotic therapy provides highly repetitive therapy that reduces motor impairment. However, the effect of assist-as-needed algorithms on patient participation and movement quality is not known. To analyze patient participation and motor performance during highly repetitive assist-as-needed upper limb robotic therapy in a retrospective study. Sixteen patients with sub-acute stroke carried out a 16-session upper limb robotic training program combined with usual care. The Fugl-Meyer Assessment (FMA) score was evaluated pre and post training. Robotic assistance parameters and performance measures were compared within and across sessions. Robotic assistance did not change within-session and decreased between sessions during the training program. Motor performance did not decrease within-session and improved between sessions. Velocity-related assistance parameters improved more quickly than accuracy-related parameters. An assist-as-needed-based upper limb robotic training program provided intense and repetitive rehabilitation and promoted patient participation and motor performance, facilitating motor recovery.

  11. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots with high absolute positioning accuracy and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly. It is not possible to acquire a robot's actual parameters and control the absolute pose of the robot with high accuracy within a large workspace by offline calibration in real time. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematic error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately acquiring the position and orientation of the robot end-tool, computing the Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is sufficiently enhanced.
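
    A generic illustration of the differential correction step described above: a measured 6-DOF pose error is mapped through a damped (pseudo-)inverse of the Jacobian into small joint corrections. The Jacobian here is a random stand-in; the paper's kinematic error model and identified parameters are not reproduced.

      import numpy as np

      def joint_correction(jacobian, pose_error, damping=1e-3):
          """Damped least-squares mapping of a 6x1 pose error (dx, dy, dz, drx, dry, drz)
          into joint-space corrections dq for a 6-axis robot."""
          J = jacobian
          JT = J.T
          return JT @ np.linalg.solve(J @ JT + damping * np.eye(6), pose_error)

      rng = np.random.default_rng(3)
      J = rng.normal(size=(6, 6))                            # stand-in Jacobian at the current pose
      dx = np.array([0.05, -0.02, 0.01, 0.001, 0.0, 0.002])  # measured pose error (mm / rad)
      dq = joint_correction(J, dx)
      print(dq)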

  12. An earthworm-like robot using origami-ball structures

    NASA Astrophysics Data System (ADS)

    Fang, Hongbin; Zhang, Yetong; Wang, K. W.

    2017-04-01

    Earthworms possess extraordinary on-ground and underground mobility, which inspired researchers to mimic their morphological characteristics and locomotion mechanisms to develop crawling robots. One of the bottlenecks that constrain the development and wide-spread application of earthworm-like robots is the process of design, fabrication and assembly of the robot frameworks. Here we present a new earthworm-like robot design and prototype by exploring and utilizing origami ball structures. The origami ball is able to antagonistically output both axial and radial deformations, similar to an earthworm's body segment. The origami folding techniques also introduce many advantages to the robot development, including precise and low-cost fabrication and high customizability. Starting from a flat polymer film, we adopt a laser machining technique to engrave the crease pattern and manually fold the patterned flat film into an origami ball. Coupling the ball with a servomotor-driven linkage yields a robot segment. Connecting six segments in series, we obtain an earthworm-like origami robot prototype. The prototype is tested in a tube to evaluate its locomotion performance. It shows that the robot could crawl effectively in the tube, manifesting the feasibility of the origami-based design. In addition, test results indicate that the robot's locomotion could be tailored by employing different peristalsis-wave based gaits. The robot design and prototype reported in this paper could foster a new breed of crawling robots with simple design, fabrication, and assembly processes, and improved locomotion performance.

  13. Archaeological field survey automation: concurrent multisensor site mapping and automated analysis

    NASA Astrophysics Data System (ADS)

    Józefowicz, Mateusz; Sokolov, Oleksandr; Meszyński, Sebastian; Siemińska, Dominika; Kołosowski, Przemysław

    2016-04-01

    ABM SE develops mobile robots (rovers) used for analog research for Mars exploration missions. The rovers are all-terrain exploration platforms carrying third-party payloads: scientific instrumentation. The "Wisdom" ground-penetrating radar for the ExoMars mission has been tested on board, as well as an electrical resistivity module and other devices. The robot has operated in various environments, such as the Central European countryside, the Dachstein ice caves and the Sahara in Morocco (controlled remotely via satellite from Toruń, Poland). Currently ABM SE is working on a local and global positioning system for a Mars rover based on image and IMU data; this is performed under a project from ESA. In the next Mars rover missions a Mars GIS model will be built, including an acquired GPR profile, DEM and regular image data, integrated into a concurrent 3D terrain model. It is proposed to use a similar approach in surveys of archaeological sites, especially those where solid architectural remains can be expected at shallow depths or are partially exposed. It is possible to deploy a rover that will concurrently map a selected site with GPR and 2D and 3D cameras to create a site model. The rover image-processing algorithms are capable of automatically tracing distinctive features (such as exposed structural remains on desert ground, differences in the color of the ground, etc.) and of marking regularities on a created map. It is also possible to correlate the 3D map with an aerial photo taken at any angle to achieve interpretation synergy. Currently the algorithms are an interpretation aid and their results must be confirmed by a human. The advantages of a rover over traditional approaches, such as a manual cart or a drone, include: a) long hours of continuous work or work in unfavorable environments, such as high desert, frozen water pools or large areas; b) concurrent multisensor data acquisition; c) working from ground level enables capturing sites obstructed from the air (trees); d) it is possible to control the platform from a remote location via satellite, with only a servicing person on the site and the survey team operating from their office, globally. The method is under development. The team contributing to the project also includes: Oleksii Sokolov, Michał Koepke, Krzysztof Rydel, Michał Stypczyński, Maciej Ślęk, Łukasz Zapała, Michał Dąbrowski.

  14. Low-Stroke Actuation for a Serial Robot

    NASA Technical Reports Server (NTRS)

    Ihrke, Chris A. (Inventor); Gao, Dalong (Inventor)

    2014-01-01

    A serial robot includes a base, first and second segments, a proximal joint joining the base to the first segment, and a distal joint. The distal joint that joins the segments is serially arranged and distal with respect to the proximal joint. The robot includes first and second actuators. A first tendon extends from the first actuator to the proximal joint and is selectively moveable via the first actuator. A second tendon extends from the second actuator to the distal joint and is selectively moveable via the second actuator. The robot includes a transmission having at least one gear element which assists rotation of the distal joint when an input force is applied to the proximal and/or distal joints by the first and/or second actuators. A robotic hand having the above robot is also disclosed, as is a robotic system having a torso, arm, and the above-described hand.

  15. Google glass-based remote control of a mobile robot

    NASA Astrophysics Data System (ADS)

    Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe

    2016-05-01

    In this paper, we present an approach to remote control of a mobile robot via Google Glass, a multi-functional and compact wearable device. This wearable device provides a new human-machine interface (HMI) to control a robot without the need for a regular computer monitor, because the Google Glass micro projector is able to display live video of the robot's environment. To do this, we first develop a protocol to establish a Wi-Fi connection between Google Glass and the robot, and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. In order to demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
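
    For concreteness, a generic TCP command-sender sketch of the kind of Wi-Fi protocol described above, with one message per behavior; the host, port, and command strings are invented, and the actual Glass-side implementation would be an Android app rather than Python.

      import socket

      ROBOT_HOST = "192.168.1.50"   # hypothetical robot IP on the shared Wi-Fi network
      ROBOT_PORT = 9750             # hypothetical command port

      COMMANDS = {"forward": b"FWD\n", "left": b"LEFT\n", "right": b"RIGHT\n",
                  "pause": b"PAUSE\n", "backward": b"BACK\n"}

      def send_behavior(name):
          """Open a connection and send one of the five behavior commands."""
          with socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=2.0) as s:
              s.sendall(COMMANDS[name])

      if __name__ == "__main__":
          send_behavior("forward")   # e.g., triggered by a swipe on the Glass touchpad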

  16. Advances in Mental Health Care: Five N = 1 Studies on the Effects of the Robot Seal Paro in Adults with Severe Intellectual Disabilities

    ERIC Educational Resources Information Center

    Wagemaker, Eline; Dekkers, Tycho J.; Agelink van Rentergem, Joost A.; Volkers, Karin M.; Huizenga, Hilde M.

    2017-01-01

    Background: The evidence base for psychological treatments for autism and mood disorders in people with moderate to severe intellectual disabilities (ID) is limited. Recent promising robot-based innovations in mental health care suggest that robot-based animal assisted therapy (AAT) could be useful to improve social skills and mood in people with…

  17. Pilot study on effectiveness of simulation for surgical robot design using manipulability.

    PubMed

    Kawamura, Kazuya; Seno, Hiroto; Kobayashi, Yo; Fujie, Masakatsu G

    2011-01-01

    Medical technology has advanced with the introduction of robot technology, which facilitates some traditional medical treatments that previously were very difficult. At present, however, surgical robots are used in limited medical domains because these robots are designed using only data obtained from adult patients and are not suitable for targets with different properties, such as children. Therefore, surgical robots are required to perform specific functions for each clinical case. In addition, the robots must exhibit sufficiently high movability and operability for each case. In the present study, we focused on evaluating the mechanism and configuration of a surgical robot through a simulation based on movability and operability during an operation. We previously proposed the development of a simulator system that reproduces the conditions of a robot and a target in a virtual patient body to evaluate the operability of the surgeon during an operation. In the present paper, we describe a simple experiment to verify the condition of a surgical assisting robot during an operation. In this experiment, an operation imitating a suturing motion was carried out in a virtual workspace, and the surgical robot was evaluated based on manipulability as an indicator of movability. As a result, it was confirmed that the robot operated with low manipulability of the left-side manipulator during suturing. This simulation system can thus identify poorly movable configurations of a robot before an actual robot is developed. Our results show the effectiveness of the proposed simulation system.
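
    The abstract evaluates the robot by manipulability but does not state its formulation; a common choice is the Yoshikawa measure, sketched below for a two-link planar arm (the arm and its Jacobian are illustrative assumptions, not the surgical robot from the study).

```python
# Minimal sketch of the Yoshikawa manipulability measure, a standard indicator of
# how easily an end effector can move in all directions; the paper evaluates
# manipulability but does not give its exact formulation, so this is an assumption.
import numpy as np

def yoshikawa_manipulability(jacobian: np.ndarray) -> float:
    """w = sqrt(det(J J^T)); approaches zero near singular (poorly movable) poses."""
    jjt = jacobian @ jacobian.T
    return float(np.sqrt(max(np.linalg.det(jjt), 0.0)))

if __name__ == "__main__":
    # Illustrative 2-link planar arm Jacobian at joint angles q1, q2 with unit link lengths.
    q1, q2 = 0.3, 1.2
    J = np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])
    print(yoshikawa_manipulability(J))  # low values flag configurations to avoid
```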

  18. Intelligent robot control using an adaptive critic with a task control center and dynamic database

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.

    2006-10-01

    The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide applications. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
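
    As a rough illustration of the dynamic-database idea described above, the sketch below shows one way such a store of global environment facts and local robot models could be organized; all field names are assumptions rather than the paper's actual schema.

```python
# Minimal sketch of a "dynamic database" holding global environment information and
# local robot models, as described in the abstract; all field names are assumptions.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Optional

@dataclass
class DynamicDatabase:
    global_map: Dict[str, Any] = field(default_factory=dict)        # obstacles, goals, landmarks
    kinematic_model: Optional[Callable[..., Any]] = None            # forward kinematics function
    dynamic_model: Optional[Callable[..., Any]] = None              # joint-space dynamics function
    controller_gains: Dict[str, float] = field(default_factory=dict)

    def update_environment(self, key: str, value: Any) -> None:
        """The task control center pushes new sensor-derived facts here at run time."""
        self.global_map[key] = value

db = DynamicDatabase()
db.update_environment("nearest_obstacle_m", 1.8)
print(db.global_map)
```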

  19. Energy harvesting for dielectric elastomer sensing

    NASA Astrophysics Data System (ADS)

    Anderson, Iain A.; Illenberger, Patrin; O'Brien, Ben M.

    2016-04-01

    Soft and stretchy dielectric elastomer (DE) sensors can measure large strains on robotic devices and people. DE strain measurement requires electric energy to run the sensors. Energy is also required for information processing and telemetering of data to a phone or computer. Batteries are expensive and recharging is inconvenient. One solution is to harvest energy from the strains that the sensor is exposed to. For this to work the harvester must also be wearable, soft, unobtrusive and profitable from the energy perspective, with more energy harvested than used for strain measurement. A promising way forward is to use the DE sensor as its own energy harvester. Our study indicates that it is feasible for a basic DE sensor to provide its own power to drive its own sensing signal. However, telemetry and computation beyond this will require substantially more power than the sensing circuit. A strategy would involve keeping the number of Bluetooth data chirps low during the entire period of energy harvesting and limiting transmission to a fraction of the total time spent harvesting energy. There is much still to do to balance the energy budget. This will be a challenge, but when we succeed it will open the door to autonomous DE multi-sensor systems without the requirement for battery recharge.
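
    To make the budget-balancing argument concrete, the arithmetic below bounds the Bluetooth transmission duty cycle by the harvested surplus; every power figure is an invented placeholder, not a measurement from the study.

```python
# Illustrative energy-budget arithmetic only: all power figures below are made-up
# placeholders, not values from the study, to show how a transmission duty cycle
# might be bounded so that harvested energy covers sensing plus telemetry.
P_HARVEST_UW = 50.0   # average power harvested from strain cycling (assumed, microwatts)
P_SENSE_UW = 10.0     # power needed to drive the sensing signal (assumed)
P_TX_UW = 2000.0      # power drawn while a Bluetooth chirp is transmitted (assumed)

surplus_uw = P_HARVEST_UW - P_SENSE_UW
max_tx_duty_cycle = max(surplus_uw, 0.0) / P_TX_UW
print(f"Transmit at most {max_tx_duty_cycle:.1%} of the time to balance the budget")
```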

  20. Feasibility of using a humanoid robot to elicit communicational response in children with mild autism

    NASA Astrophysics Data System (ADS)

    Malik, Norjasween Abdul; Shamsuddin, Syamimi; Yussof, Hanafiah; Azfar Miskam, Mohd; Che Hamid, Aminullah

    2013-12-01

    Research evidence is accumulating with regard to the potential use of robots in the rehabilitation of children with autism. The purpose of this paper is to elaborate on the results of communicational response in two children with autism during interaction with the humanoid robot NAO. Both autistic subjects in this study have been diagnosed with mild autism. Following the outcome of our first pilot study, the aim of this experiment is to explore the application of the NAO robot to engage a child and teach about emotions through a game-centered and song-based approach. The experimental procedure involved interaction between the humanoid robot NAO and each child through a series of four different modules. The observation items are based on ten items selected with reference to GARS-2 (Gilliam Autism Rating Scale, second edition) and on input from clinicians and therapists. The results clearly indicated that both children showed a positive response throughout the interaction. Negative responses such as feeling scared or shying away from the robot were not detected. Two-way communication between the child and robot in real time had a significantly positive impact on the responses towards the robot. To conclude, it is feasible to include robot-based interaction, specifically to elicit communicational response, as part of the rehabilitation intervention for children with autism.

  1. Towards Optimal Platform-Based Robot Design for Ankle Rehabilitation: The State of the Art and Future Prospects

    PubMed Central

    Li, Hongsheng

    2018-01-01

    This review aims to compare existing robot-assisted ankle rehabilitation techniques in terms of robot design. Included studies mainly consist of selected papers in two published reviews involving a variety of robot-assisted ankle rehabilitation techniques. A free search was also made in Google Scholar and Scopus by using the keywords “ankle∗” and “robot∗” and (“rehabilitat∗” or “treat∗”). The search was limited to English-language articles published between January 1980 and September 2016. Results show that existing robot-assisted ankle rehabilitation techniques can be classified into wearable exoskeleton and platform-based devices. Platform-based devices are mostly developed for the treatment of a variety of ankle musculoskeletal and neurological injuries, while wearable ones focus more on ankle-related gait training. In terms of robot design, comparative analysis indicates that an ideal ankle rehabilitation robot should have a rotation center aligned with the ankle joint, an appropriate workspace, and appropriate actuation torque, no matter how many degrees of freedom (DOFs) it has. Single-DOF ankle robots are mostly developed for specific applications, while multi-DOF devices are more suitable for comprehensive ankle rehabilitation exercises. Other factors, including posture adjustability and sensing functions, should also be considered to promote related clinical applications. An ankle rehabilitation robot with reconfigurability to maximize its functions will be a new research point towards optimal design, especially for parallel mechanisms. PMID:29736230

  2. Robot-based additive manufacturing for flexible die-modelling in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Rieger, Michael; Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd

    2017-10-01

    The paper describes the application concept of additive manufactured dies to support the robot-based incremental sheet metal forming process (`Roboforming') for the production of sheet metal components in small batch sizes. Compared to the dieless kinematic-based generation of a shape by means of two cooperating industrial robots, the supporting robot models a die on the back of the metal sheet by using the robot-based fused layer manufacturing process (FLM). This tool chain is software-defined and preserves the high geometrical form flexibility of Roboforming while flexibly generating support structures adapted to the final part's geometry. Test series serve to confirm the feasibility of the concept by investigating the process challenges of the adhesion to the sheet surface and the general stability as well as the influence on the geometric accuracy compared to the well-known forming strategies.

  3. Miniaturized Airborne Imaging Central Server System

    NASA Technical Reports Server (NTRS)

    Sun, Xiuhong

    2011-01-01

    In recent years, some remote-sensing applications require advanced airborne multi-sensor systems to rapidly provide high-performance reflective and emissive spectral imaging measurements over large areas. The key characteristic of such systems is a black-box back-end that operates a suite of cutting-edge imaging sensors to simultaneously collect high-throughput reflective and emissive spectral imaging data with precision georeference. This back-end system needs to be portable, easy to use, and reliable, with advanced onboard processing. The innovation of the black-box back-end is a miniaturized airborne imaging central server system (MAICSS). MAICSS integrates a complex embedded system of systems with dedicated power and signal electronic circuits inside to serve a suite of configurable cutting-edge electro-optical (EO), long-wave infrared (LWIR), and medium-wave infrared (MWIR) cameras, a hyperspectral imaging scanner, and a GPS and inertial measurement unit (IMU) for atmospheric and surface remote sensing. Its compatible sensor packages include NASA's 1,024 x 1,024 pixel LWIR quantum well infrared photodetector (QWIP) imager; a 60.5-megapixel BuckEye EO camera; and a fast (e.g., 200+ scanlines/s) and wide-swath (e.g., 1,920+ pixels) CCD/InGaAs imager-based visible/near-infrared (VNIR) and shortwave infrared (SWIR) imaging spectrometer. MAICSS records continuous precision-georeferenced and time-tagged multisensor throughputs to mass storage devices at a high aggregate rate, typically 60 MB/s for its LWIR/EO payload. MAICSS is a complete stand-alone imaging server instrument with an easy-to-use software package for either autonomous data collection or interactive airborne operation. Advanced multisensor data acquisition and onboard processing software features have been implemented for MAICSS. With onboard processing for real-time image development, correction, histogram equalization, compression, georeferencing, and data organization, fast aerial imaging applications, including a real-time LWIR image mosaic for Google Earth, have been realized for NASA's LWIR QWIP instrument. MAICSS is a significant improvement and miniaturization of current multisensor technologies. Structurally, it has a completely modular and solid-state design. Without rotating hard drives and other moving parts, it is operational at high altitudes and survivable in high-vibration environments. It is assembled from a suite of miniaturized, precision-machined, standardized, stackable, and interchangeable embedded instrument modules. These stackable modules can be bolted together with the interconnection wires inside for maximal simplicity and portability. Multiple modules are electronically interconnected as they are stacked. Alternatively, the dedicated modules can be flexibly distributed to fit the space constraints of a flying vehicle. As a flexibly configurable system, MAICSS can be tailored to interface with a variety of multisensor packages. For example, with a 1,024 x 1,024 pixel LWIR and an 8,984 x 6,732 pixel EO payload, the complete MAICSS volume is approximately 7 x 9 x 11 in. (about 18 x 23 x 28 cm), with a weight of 25 lb (about 11.4 kg).

  4. Satellite Data Simulator Unit: A Multisensor, Multispectral Satellite Simulator Package

    NASA Technical Reports Server (NTRS)

    Masunaga, Hirohiko; Matsui, Toshihisa; Tao, Wei-Kuo; Hou, Arthur Y.; Kummerow, Christian D.; Nakajima, Teruyuki; Bauer, Peter; Olson, William S.; Sekiguchi, Miho; Nakajima, Teruyuki

    2010-01-01

    Several multisensor simulator packages are being developed by different research groups across the world. Such simulator packages [e.g., COSP, CRTM, ECSIM, RTTO, ISSARS (under development), and SDSU (this article), among others] share overall aims, although some are targeted more on particular satellite programs or specific applications (for research purposes or for operational use) than others. The SDSU, or Satellite Data Simulator Unit, is a general-purpose simulator composed of Fortran 90 code and applicable to spaceborne microwave radiometers, radar, and visible/infrared imagers including, but not limited to, the sensors listed in a table showing satellite programs particularly suitable for multisensor data analysis: some are single satellite missions carrying two or more instruments, while others are constellations of satellites flying in formation. The TRMM and A-Train are ongoing satellite missions carrying diverse sensors that observe clouds and precipitation, and will be continued or augmented within the decade to come by future multisensor missions such as GPM and EarthCARE. The ultimate goals of these present and proposed satellite programs are not restricted to clouds and precipitation but are to better understand their interactions with atmospheric dynamics/chemistry and feedback to climate. The SDSU's applicability is not technically limited to hydrometeor measurements either, but may be extended to air temperature and humidity observations by tuning the SDSU to sounding channels. As such, the SDSU and other multisensor simulators could potentially contribute to a broad area of climate and atmospheric sciences. The SDSU is not optimized to any particular orbital geometry of satellites: it is applicable not only to low-Earth orbiting platforms as listed in Table 1, but also to geostationary meteorological satellites. Although no geosynchronous satellite carries microwave instruments at present or in the near future, the SDSU would be useful for future geostationary satellites with a microwave radiometer and/or a radar aboard, which could become more feasible as engineering challenges are met. In this short article, the SDSU algorithm architecture and potential applications are reviewed in brief.

  5. User-centric design of a personal assistance robot (FRASIER) for active aging.

    PubMed

    Padir, Taşkin; Skorinko, Jeanine; Dimitrov, Velin

    2015-01-01

    We present our preliminary results from the design process for developing the Worcester Polytechnic Institute's personal assistance robot, FRASIER, as an intelligent service robot for enabling active aging. The robot's capabilities include vision-based object detection, tracking the user, and helping to carry heavy items such as grocery bags or cafeteria trays. This work-in-progress report outlines our motivation and approach to developing the next generation of service robots for the elderly. Our main contribution in this paper is the development of a set of specifications based on the adopted user-centered design process, and the realization of a prototype system designed to meet these specifications.

  6. Robotic surgery basic skills training: Evaluation of a pilot multidisciplinary simulation-based curriculum

    PubMed Central

    Foell, Kirsten; Finelli, Antonio; Yasufuku, Kazuhiro; Bernardini, Marcus Q.; Waddell, Thomas K.; Pace, Kenneth T.; Honey, R. John D.’A.; Lee, Jason Y.

    2013-01-01

    Purpose: Simulation-based training improves clinical skills while minimizing the impact of the educational process on patient care. We present results of a pilot multidisciplinary, simulation-based robotic surgery basic skills training curriculum (BSTC) for robotic novices. Methods: A 4-week, simulation-based, robotic surgery BSTC was offered to the Departments of Surgery and Obstetrics & Gynecology (ObGyn) at the University of Toronto. The course consisted of various instructional strategies: didactic lecture, self-directed online training modules, introductory hands-on training with the da Vinci robot (dVR) (Intuitive Surgical Inc., Sunnyvale, CA), and dedicated training on the da Vinci Skills Simulator (dVSS) (Intuitive Surgical Inc., Sunnyvale, CA). A third of trainees participated in competency-based dVSS training, while all others engaged in traditional time-based training. Pre- and post-course skill testing was conducted on the dVR using 2 standardized skill tasks: ring transfer (RT) and needle passing (NP). Retention of skills was assessed at 5 months post-BSTC. Results: A total of 37 participants completed training. The mean task completion time and number of errors improved significantly post-course on both the RT task (180.6 vs. 107.4 sec, p < 0.01, and 3.5 vs. 1.3 errors, p < 0.01, respectively) and the NP task (197.1 vs. 154.1 sec, p < 0.01, and 4.5 vs. 1.8 errors, p = 0.04, respectively). No significant difference in performance was seen between specialties. Competency-based training was associated with significantly better post-course performance. The dVSS demonstrated excellent face validity. Conclusions: The implementation of a pilot multidisciplinary, simulation-based robotic surgery BSTC revealed significantly improved basic robotic skills among novice trainees, regardless of specialty or level of training. Competency-based training was associated with significantly better acquisition of basic robotic skills. PMID:24381662

  7. A Project-Based Biologically-Inspired Robotics Module

    ERIC Educational Resources Information Center

    Crowder, R. M.; Zauner, K.-P.

    2013-01-01

    The design of any robotic system requires input from engineers from a variety of technical fields. This paper describes a project-based module, "Biologically-Inspired Robotics," that is offered to Electronics and Computer Science students at the University of Southampton, U.K. The overall objective of the module is for student groups to…

  8. A Null Space Control of Two Wheels Driven Mobile Manipulator Using Passivity Theory

    NASA Astrophysics Data System (ADS)

    Shibata, Tsuyoshi; Murakami, Toshiyuki

    This paper describes a control strategy for the null space motion of a two-wheel-driven mobile manipulator. Recently, robots have been utilized in various industrial fields, and it is preferable for a robot manipulator to have multiple degrees of freedom of motion. Several kinematic studies of null space motion have been proposed; however, the stability analysis of null space motion has not been treated sufficiently. Furthermore, these approaches apply to stable systems but not to unstable systems. In this research, the base of the manipulator is a two-wheel-driven mobile robot. The combined robot is called a two-wheel-driven mobile manipulator, which is an unstable system. In the proposed approach, the null space control design uses passivity-based stabilization: the proposed controller is chosen so that the closed-loop system of the robot dynamics satisfies passivity. The control strategy is to stabilize the robot system using a workspace-observer-based approach together with null space control while keeping the end-effector position. The validity of the proposed approach is verified by simulations and experiments on a two-wheel-driven mobile manipulator.
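
    The paper's own control law is not reproduced in the abstract; a standard way to write the redundancy resolution that null space control builds on (not necessarily the authors' exact formulation) is:

```latex
% Standard redundancy-resolution form (an assumption, not necessarily the authors' exact law):
% task-space velocity \dot{x}, joint velocity \dot{q}, Jacobian J, pseudoinverse J^{+}.
\[
  \dot{q} \;=\; J^{+}\,\dot{x} \;+\; \bigl(I - J^{+}J\bigr)\,\dot{q}_{0}
\]
% The projector (I - J^{+}J) maps the arbitrary joint velocity \dot{q}_{0} into the
% null space of J, so the added motion does not disturb the end-effector position.
```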

  9. A focused bibliography on robotics

    NASA Astrophysics Data System (ADS)

    Mergler, H. W.

    1983-08-01

    The present bibliography focuses on eight robotics-related topics believed by the author to be of special interest to researchers in the field of industrial electronics: robots, sensors, kinematics, dynamics, control systems, actuators, vision, economics, and robot applications. This literature search was conducted through the 1970-present COMPENDEX data base, which provides world-wide coverage of nearly 3500 journals, conference proceedings and reports, and the 1969-1981 INSPEC data base, which is the largest for the English language in the fields of physics, electrotechnology, computers, and control.

  10. Behavior-based multi-robot collaboration for autonomous construction tasks

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew

    2005-01-01

    The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.

  11. Identification of two-phase flow regime based on electrical capacitance tomography and soft-sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Ming-fu; Hu, Xin-Yu; Shao, Yun; Luo, Bin-bin; Wang, Xin

    2008-10-01

    This article analyses the football robots currently in common use in China, with the aim of improving the capability of the football robot hardware platform, and describes the design of a football robot based on a DSP core controller combined with a fuzzy-PID control algorithm. Experiments showed that, owing to the advantages of the DSP, such as fast operation, a variety of interfaces, and low power dissipation, the design greatly improves the football robot's movement performance, control precision, and real-time performance.

  12. Neuromodulation as a Robot Controller: A Brain Inspired Strategy for Controlling Autonomous Robots

    DTIC Science & Technology

    2009-09-01

    We present a strategy for controlling autonomous robots that is based on principles of neuromodulation in the mammalian brain... object, ignore irrelevant distractions, and respond quickly and appropriately to the event [1]. There are separate neuromodulators that alter responses to...

  13. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
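
    The abstract lists the signal-processing chain but not its parameters; the sketch below imitates that chain (oversampling, averaging decimation, FIR filtering, finite differences) on synthetic encoder data, with all rates and filter lengths chosen arbitrarily for illustration.

```python
# Minimal numeric sketch (assumed parameters) of the signal chain described in the
# abstract: oversample an encoder angle, average-and-decimate, FIR low-pass filter,
# then use finite differences to estimate angular velocity and acceleration.
import numpy as np

FS_OVERSAMPLED = 10_000.0        # assumed oversampling rate, Hz
DECIMATION = 10                  # averaging-decimation factor
FS = FS_OVERSAMPLED / DECIMATION

def decimate_by_averaging(x: np.ndarray, factor: int) -> np.ndarray:
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

def fir_lowpass(x: np.ndarray, taps: int = 15) -> np.ndarray:
    h = np.ones(taps) / taps          # simple moving-average FIR as a placeholder
    return np.convolve(x, h, mode="same")

# Synthetic noisy encoder reading (radians) standing in for real hardware samples.
t = np.arange(0, 1.0, 1.0 / FS_OVERSAMPLED)
angle_raw = 0.5 * np.sin(2 * np.pi * 1.0 * t) + 0.01 * np.random.randn(t.size)

angle = fir_lowpass(decimate_by_averaging(angle_raw, DECIMATION))
velocity = np.gradient(angle, 1.0 / FS)          # first finite difference
acceleration = np.gradient(velocity, 1.0 / FS)   # second finite difference
print(velocity[:3], acceleration[:3])
```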

  14. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  15. RadMAP: The Radiological Multi-sensor Analysis Platform

    NASA Astrophysics Data System (ADS)

    Bandstra, Mark S.; Aucott, Timothy J.; Brubaker, Erik; Chivers, Daniel H.; Cooper, Reynold J.; Curtis, Joseph C.; Davis, John R.; Joshi, Tenzing H.; Kua, John; Meyer, Ross; Negut, Victor; Quinlan, Michael; Quiter, Brian J.; Srinivasan, Shreyas; Zakhor, Avideh; Zhang, Richard; Vetter, Kai

    2016-12-01

    The variability of gamma-ray and neutron background during the operation of a mobile detector system greatly limits the ability of the system to detect weak radiological and nuclear threats. The natural radiation background measured by a mobile detector system is the result of many factors, including the radioactivity of nearby materials, the geometric configuration of those materials and the system, the presence of absorbing materials, and atmospheric conditions. Background variations tend to be highly non-Poissonian, making it difficult to set robust detection thresholds using knowledge of the mean background rate alone. The Radiological Multi-sensor Analysis Platform (RadMAP) system is designed to allow the systematic study of natural radiological background variations and to serve as a development platform for emerging concepts in mobile radiation detection and imaging. To do this, RadMAP has been used to acquire extensive, systematic background measurements and correlated contextual data that can be used to test algorithms and detector modalities at low false alarm rates. By combining gamma-ray and neutron detector systems with data from contextual sensors, the system enables the fusion of data from multiple sensors into novel data products. The data are curated in a common format that allows for rapid querying across all sensors, creating detailed multi-sensor datasets that are used to study correlations between radiological and contextual data, and develop and test novel techniques in mobile detection and imaging. In this paper we will describe the instruments that comprise the RadMAP system, the effort to curate and provide access to multi-sensor data, and some initial results on the fusion of contextual and radiological data.

  16. Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction.

    PubMed

    de Greeff, Joachim; Belpaeme, Tony

    2015-01-01

    Social learning is a powerful method for cultural propagation of knowledge and skills relying on a complex interplay of learning strategies, social ecology and the human propensity for both learning and tutoring. Social learning has the potential to be an equally potent learning strategy for artificial systems and robots in specific. However, given the complexity and unstructured nature of social learning, implementing social machine learning proves to be a challenging problem. We study one particular aspect of social machine learning: that of offering social cues during the learning interaction. Specifically, we study whether people are sensitive to social cues offered by a learning robot, in a similar way to children's social bids for tutoring. We use a child-like social robot and a task in which the robot has to learn the meaning of words. For this a simple turn-based interaction is used, based on language games. Two conditions are tested: one in which the robot uses social means to invite a human teacher to provide information based on what the robot requires to fill gaps in its knowledge (i.e. expression of a learning preference); the other in which the robot does not provide social cues to communicate a learning preference. We observe that conveying a learning preference through the use of social cues results in better and faster learning by the robot. People also seem to form a "mental model" of the robot, tailoring the tutoring to the robot's performance as opposed to using simply random teaching. In addition, the social learning shows a clear gender effect with female participants being responsive to the robot's bids, while male teachers appear to be less receptive. This work shows how additional social cues in social machine learning can result in people offering better quality learning input to artificial systems, resulting in improved learning performance.

  17. Energy-Saving Control of a Novel Hydraulic Drive System for Field Walking Robot

    NASA Astrophysics Data System (ADS)

    Fang, Delei; Shang, Jianzhong; Xue, Yong; Yang, Junhong; Wang, Zhuo

    2018-01-01

    To improve the efficiency of the hydraulic drive system of a field walking robot, this paper proposes a novel hydraulic system based on a two-stage pressure source. Based on an analysis of the low efficiency of the robot's single-stage hydraulic system, the paper first introduces the concept and design of the two-stage pressure source drive system. Then, the energy-saving control of the new hydraulic system is planned according to the characteristics of the walking robot. The feasibility of the new hydraulic system is demonstrated by a simulation of the walking robot squatting. Finally, the efficiencies of the two types of hydraulic system are calculated, indicating that the novel hydraulic system can increase efficiency by 41.5%, which contributes to knowledge about hydraulic drive systems for field walking robots.

  18. Efficient Control Law Simulation for Multiple Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.

    1998-10-06

    In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N^2) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
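
    A minimal sketch of the tree-based idea, assuming a k-d tree for the nearest-neighbor query and a made-up separation-style control law (the report's actual law is only described as using distance and bearing to the closest neighbor):

```python
# Minimal sketch of tree-based neighbor queries for a large robot swarm: a k-d tree
# gives the closest neighbor of every robot in O(log N) per query, so a full
# control-law update costs O(N log N). The separation-style law below is an
# illustrative placeholder, not the law from the report.
import numpy as np
from scipy.spatial import cKDTree

def control_step(positions: np.ndarray, gain: float = 0.1) -> np.ndarray:
    """Move each robot away from its closest neighbor (distance-and-bearing based)."""
    tree = cKDTree(positions)
    _, idx = tree.query(positions, k=2)          # k=2: the first hit is the robot itself
    nearest = positions[idx[:, 1]]
    away = positions - nearest                   # bearing away from the closest neighbor
    norms = np.linalg.norm(away, axis=1, keepdims=True)
    return positions + gain * away / np.maximum(norms, 1e-9)

if __name__ == "__main__":
    pts = np.random.rand(100_000, 2)             # 100k robots in the unit square
    pts = control_step(pts)
    print(pts[:3])
```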

  19. The new era of robotic neck surgery: The universal application of the retroauricular approach.

    PubMed

    Byeon, Hyung Kwon; Koh, Yoon Woo

    2015-12-01

    Recent advances in technology have triggered the introduction of surgical robotics in the field of head and neck surgery and changed the landscape indefinitely. The advent of transoral robotic surgery and robotic thyroidectomy techniques has spurred extended applications of the robot to other neck surgeries, including remote-access surgeries. Based on earlier reports and our surgical experience, this review will discuss in detail various robotic head and neck surgeries via the retroauricular approach. © 2015 Wiley Periodicals, Inc.

  20. Embodied Computation: An Active-Learning Approach to Mobile Robotics Education

    ERIC Educational Resources Information Center

    Riek, L. D.

    2013-01-01

    This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…

  1. A Behavior-Based Approach for Educational Robotics Activities

    ERIC Educational Resources Information Center

    De Cristoforis, P.; Pedre, S.; Nitsche, M.; Fischer, T.; Pessacg, F.; Di Pietro, C.

    2013-01-01

    Educational robotics proposes the use of robots as a teaching resource that enables inexperienced students to approach topics in fields unrelated to robotics. In recent years, these activities have grown substantially in elementary and secondary school classrooms and also in outreach experiences to interest students in science, technology,…

  2. Analysis and Experimental Kinematics of a Skid-Steering Wheeled Robot Based on a Laser Scanner Sensor

    PubMed Central

    Wang, Tianmiao; Wu, Yao; Liang, Jianhong; Han, Chenhao; Chen, Jiao; Zhao, Qiteng

    2015-01-01

    Skid-steering mobile robots are widely used because of their simple mechanism and robustness. However, due to the complex wheel-ground interactions and the kinematic constraints, it is a challenge to understand the kinematics and dynamics of such a robotic platform. In this paper, we develop an analysis and experimental kinematic scheme for a skid-steering wheeled vehicle based on a laser scanner sensor. The kinematics model is established based on the boundedness of the instantaneous centers of rotation (ICR) of treads on the 2D motion plane. The kinematic parameters (the ICR coefficient χ, the path curvature variable λ and robot speed v), including the effect of vehicle dynamics, are introduced to describe the kinematics model. Then, an exact but costly dynamic model is used, and the simulation of this model's stationary response for the vehicle shows a qualitative relationship between the specified parameters χ and λ. Moreover, the parameters of the kinematic model are determined based on a laser-scanner localization experimental analysis method with a skid-steering robotic platform, the Pioneer P3-AT. The relationship between the ICR coefficient χ and two physical factors is studied, i.e., the radius of the path curvature λ and the robot speed v. An empirical function-based relationship between the ICR coefficient of the robot and the path parameters is derived. To validate the obtained results, it is empirically demonstrated that the proposed kinematics model significantly improves the dead-reckoning performance of this skid-steering robot. PMID:25919370
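
    For illustration only, a commonly used simplified skid-steer kinematic model consistent with the ICR-coefficient description is sketched below; the paper's exact parameterization of χ, λ and v may differ.

```python
# Illustrative simplified skid-steer kinematics using an ICR expansion coefficient chi.
# chi >= 1 captures track slippage by widening the effective track; the paper's exact
# parameterization (chi, path-curvature variable lambda, speed v) may differ.
import math

def skid_steer_twist(v_left: float, v_right: float, track_width: float, chi: float):
    """Return (linear velocity v, angular velocity omega) of the robot body."""
    v = 0.5 * (v_right + v_left)
    omega = (v_right - v_left) / (chi * track_width)
    return v, omega

def dead_reckon(x, y, theta, v, omega, dt):
    """Integrate the unicycle model one step with the estimated twist."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

v, omega = skid_steer_twist(v_left=0.4, v_right=0.6, track_width=0.38, chi=1.5)
print(dead_reckon(0.0, 0.0, 0.0, v, omega, dt=0.1))
```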

  3. M.I.N.G., Mars Investment for a New Generation: Robotic construction of a permanently manned Mars base

    NASA Technical Reports Server (NTRS)

    Amos, Jeff; Beeman, Randy; Brown, Susan; Calhoun, John; Hill, John; Howorth, Lark; Mcfaden, Clay; Nguyen, Paul; Reid, Philip; Rexrode, Stuart

    1989-01-01

    A basic procedure for robotically constructing a manned Mars base is outlined. The research procedure was divided into three areas: environment, robotics, and habitat. The base as designed will consist of these components: two power plants, communication facilities, a habitat complex, and a hangar, a garage, recreation and manufacturing facilities. The power plants will be self-contained nuclear fission reactors placed approx. 1 km from the base for safety considerations. The base communication system will use a combination of orbiting satellites and surface relay stations. This system is necessary for robotic contact with Phobos and any future communication requirements. The habitat complex will consist of six self-contained modules: core, biosphere, science, living quarters, galley/storage, and a sick bay which will be brought from Phobos. The complex will be set into an excavated hole and covered with approximately 0.5 m of sandbags to provide radiation protection for the astronauts. The recreation, hangar, garage, and manufacturing facilities will each be transformed from the four one-way landers. The complete complex will be built by autonomous, artificially intelligent robots. Robots incorporated into the design are as follows: Large Modular Construction Robots with detachable arms capable of large scale construction activities; Small Maneuverable Robotic Servicers capable of performing delicate tasks normally requiring a suited astronaut; and a trailer vehicle with modular type attachments to complete specific tasks; and finally, Mobile Autonomous Rechargeable Transporters capable of transferring air and water from the manufacturing facility to the habitat complex.

  4. M.I.N.G., Mars Investment for a New Generation: Robotic construction of a permanently manned Mars base

    NASA Astrophysics Data System (ADS)

    Amos, Jeff; Beeman, Randy; Brown, Susan; Calhoun, John; Hill, John; Howorth, Lark; McFaden, Clay; Nguyen, Paul; Reid, Philip; Rexrode, Stuart

    1989-05-01

    A basic procedure for robotically constructing a manned Mars base is outlined. The research procedure was divided into three areas: environment, robotics, and habitat. The base as designed will consist of these components: two power plants, communication facilities, a habitat complex, and a hangar, a garage, recreation and manufacturing facilities. The power plants will be self-contained nuclear fission reactors placed approx. 1 km from the base for safety considerations. The base communication system will use a combination of orbiting satellites and surface relay stations. This system is necessary for robotic contact with Phobos and any future communication requirements. The habitat complex will consist of six self-contained modules: core, biosphere, science, living quarters, galley/storage, and a sick bay which will be brought from Phobos. The complex will be set into an excavated hole and covered with approximately 0.5 m of sandbags to provide radiation protection for the astronauts. The recreation, hangar, garage, and manufacturing facilities will each be transformed from the four one-way landers. The complete complex will be built by autonomous, artificially intelligent robots. Robots incorporated into the design are as follows: Large Modular Construction Robots with detachable arms capable of large scale construction activities; Small Maneuverable Robotic Servicers capable of performing delicate tasks normally requiring a suited astronaut; and a trailer vehicle with modular type attachments to complete specific tasks; and finally, Mobile Autonomous Rechargeable Transporters capable of transferring air and water from the manufacturing facility to the habitat complex.

  5. Simulating the dynamic interaction of a robotic arm and the Space Shuttle remote manipulator system. M.S. Thesis - George Washington Univ., Dec. 1994

    NASA Technical Reports Server (NTRS)

    Garrahan, Steven L.; Tolson, Robert H.; Williams, Robert L., II

    1995-01-01

    Industrial robots are usually attached to a rigid base. Placing the robot on a compliant base introduces dynamic coupling between the two systems. The Vehicle Emulation System (VES) is a six DOF platform that is capable of modeling this interaction. The VES employs a force-torque sensor as the interface between robot and base. A computer simulation of the VES is presented. Each of the hardware and software components is described and Simulink is used as the programming environment. The simulation performance is compared with experimental results to validate accuracy. A second simulation which models the dynamic interaction of a robot and a flexible base acts as a comparison to the simulated motion of the VES. Results are presented that compare the simulated VES motion with the motion of the VES hardware using the same admittance model. The two computer simulations are compared to determine how well the VES is expected to emulate the desired motion. Simulation results are given for robots mounted to the end effector of the Space Shuttle Remote Manipulator System (SRMS). It is shown that for fast motions of the two robots studied, the SRMS experiences disturbances on the order of centimeters. Larger disturbances are possible if different manipulators are used.

  6. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
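
    The review mentions a UDP protocol between the BCI output and the robot or virtual agent but not its message format; the sketch below shows a generic UDP sender/receiver pair with an assumed plain-text intention message.

```python
# Minimal sketch of a UDP link between a BCI decoder and a robot, as the review
# describes a UDP-based communication protocol; the message format, address, and
# port here are assumptions, not the authors' actual protocol.
import socket

ROBOT_ADDR = ("192.168.0.20", 5005)   # hypothetical robot IP and port

def send_intention(intention: str) -> None:
    """Send one decoded user intention (e.g. 'LEFT', 'RIGHT', 'STOP') as a UDP datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(intention.encode("ascii"), ROBOT_ADDR)

def receive_intentions(port: int = 5005) -> None:
    """Matching receiver loop that would run on the robot or virtual-agent side."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        while True:
            data, _ = sock.recvfrom(64)
            print("decoded intention:", data.decode("ascii"))
```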

  7. Object-based task-level control: A hierarchical control architecture for remote operation of space robots

    NASA Technical Reports Server (NTRS)

    Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.

    1994-01-01

    Expanding man's presence in space requires capable, dexterous robots that can be controlled from Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.

  8. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  9. Origami-based earthworm-like locomotion robots.

    PubMed

    Fang, Hongbin; Zhang, Yetong; Wang, K W

    2017-10-16

    Inspired by the morphology characteristics of the earthworms and the excellent deformability of origami structures, this research creates a novel earthworm-like locomotion robot through exploiting origami techniques. In this innovation, appropriate actuation mechanisms are incorporated with origami ball structures into the earthworm-like robot 'body', and the earthworm's locomotion mechanism is mimicked to develop a gait generator as the robot 'centralized controller'. The origami ball, which is a periodic repetition of waterbomb units, could output significant bidirectional (axial and radial) deformations in an antagonistic way similar to the earthworm's body segment. Such bidirectional deformability can be strategically programmed by designing the number of constituent units. Experiments also indicate that the origami ball possesses two outstanding mechanical properties that are beneficial to robot development: one is the structural multistability in the axial direction that could contribute to the robot control implementation; and the other is the structural compliance in the radial direction that would increase the robot robustness and applicability. To validate the origami-based innovation, this research designs and constructs three robot segments based on different axial actuators: DC-motor, shape-memory-alloy springs, and pneumatic balloon. Performance evaluations reveal their merits and limitations, and to prove the concept, the DC-motor actuation is selected for building a six-segment robot prototype. Learning from the earthworms' fundamental locomotion mechanism, the retrograde peristalsis wave, seven gaits are automatically generated; controlled by these gaits, the robot could achieve effective locomotion with qualitatively different modes and a wide range of average speeds. The outcomes of this research could lead to the development of origami locomotion robots with low fabrication costs, high customizability, light weight, good scalability, and excellent re-configurability.
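
    The gait logic below is a toy illustration of a head-to-tail contraction wave for a segmented robot; segment count, state names and phasing are assumptions, not the seven gaits generated in the paper.

```python
# Illustrative gait generator only: a contraction wave travelling along the body
# (retrograde relative to forward motion), in the spirit of the earthworm-inspired
# gaits described in the abstract; segment counts and phase logic are assumptions.
def peristalsis_gait(num_segments: int = 6, num_steps: int = 12):
    """Yield per-step segment states: 'contract' (axially shortened, radially expanded
    and anchoring) or 'extend' (axially lengthened)."""
    for step in range(num_steps):
        active = step % num_segments          # index of the currently contracting segment
        yield ["contract" if i == active else "extend" for i in range(num_segments)]

for state in peristalsis_gait():
    print(state)
```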

  10. Concepts for the Design of a Diagnostic Device to Detect Malignancies in Human Tissues Final Report CRADA No. TSB-2023-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DaSilva, L.; Marion, J.; Chase, C.

    BioLuminate, Inc. planned to develop, produce and market a revolutionary diagnostic device for early breast cancer diagnosis. The device was originally invented by NASA and exclusively licensed to BioLuminate for commercialization. At the time of the CRADA, eighty-five percent (85%) of all biopsies in the United States were found negative each year. These biopsies cost the health care system $23 billion annually. A multi-sensor probe would allow surgeons to improve breast cancer screening and significantly reduce the number of biopsies. BioLuminate was developing an in-vivo system for the detection of cancer using a multi-sensor needle/probe. The first system would be developed for the detection of breast cancer. LLNL, in collaboration with BioLuminate, worked toward a detailed concept specification for the prototype multi-sensor needle/probe suitable for breast cancer analysis. BioLuminate, in collaboration with LLNL, worked to develop a new version of the needle probe that would be the same size as needles commonly used to draw blood.

  11. Rehabilitation robotics for the upper extremity: review with new directions for orthopaedic disorders.

    PubMed

    Hakim, Renée M; Tunis, Brandon G; Ross, Michael D

    2017-11-01

    The focus of research using technological innovations such as robotic devices has been on interventions to improve upper extremity function in neurologic populations, particularly patients with stroke. There is a growing body of evidence describing rehabilitation programs using various types of supportive/assistive and/or resistive robotic and virtual reality-enhanced devices to improve outcomes for patients with neurologic disorders. The most promising approaches are task-oriented, based on current concepts of motor control/learning and practice-induced neuroplasticity. Based on this evidence, we describe application and feasibility of virtual reality-enhanced robotics integrated with current concepts in orthopaedic rehabilitation shifting from an impairment-based focus to inclusion of more intense, task-specific training for patients with upper extremity disorders, specifically emphasizing the wrist and hand. The purpose of this paper is to describe virtual reality-enhanced rehabilitation robotic devices, review evidence of application in patients with upper extremity deficits related to neurologic disorders, and suggest how this technology and task-oriented rehabilitation approach can also benefit patients with orthopaedic disorders of the wrist and hand. We will also discuss areas for further research and development using a task-oriented approach and a commercially available haptic robotic device to focus on training of grasp and manipulation tasks. Implications for Rehabilitation There is a growing body of evidence describing rehabilitation programs using various types of supportive/assistive and/or resistive robotic and virtual reality-enhanced devices to improve outcomes for patients with neurologic disorders. The most promising approaches using rehabilitation robotics are task-oriented, based on current concepts of motor control/learning and practice-induced neuroplasticity. Based on the evidence in neurologic populations, virtual reality-enhanced robotics may be integrated with current concepts in orthopaedic rehabilitation shifting from an impairment-based focus to inclusion of more intense, task-specific training for patients with UE disorders, specifically emphasizing the wrist and hand. Clinical application of a task-oriented approach may be accomplished using commercially available haptic robotic device to focus on training of grasp and manipulation tasks.

  12. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.

    PubMed

    Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, "kinematics + optics" hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
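
    The paper registers subsystems with a quaternion-based method plus ICP; as a stand-in illustration, the sketch below solves one rigid-alignment step between matched point sets with the SVD (Kabsch) closed form, which ICP would repeat while re-estimating correspondences.

```python
# Minimal sketch of one rigid-registration step between matched point sets, the kind
# of alignment ICP repeats while re-matching correspondences. The paper reports a
# quaternion-based registration; the SVD (Kabsch) solution below is an equivalent
# closed form used here purely for illustration.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known transform from four fiducial-like markers.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([10.0, -2.0, 3.0])
print(rigid_transform(src, dst))
```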

  13. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    PubMed Central

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationship between the subsystems is realized based on the quaternion and the iterative closest point registration algorithm. The hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planning path. The closed-loop control method, “kinematics + optics” hybrid motion control method, is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments. And the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after the application of the method. Finally, the skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  14. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors that are assembled under the bottom of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has also been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
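
    As a rough sketch of the correction idea, the snippet below snaps the dead-reckoned coordinate constrained by a detected grid-line crossing to the nearest line; the grid pitch and event format are assumptions.

```python
# Illustrative sketch of the correction idea: when a grid-crossing event reports that
# the robot just crossed, say, a vertical grid line, snap the corresponding coordinate
# of the dead-reckoned pose to the nearest line. Grid pitch and event format are assumptions.
GRID_PITCH_M = 0.5   # assumed tile size

def snap_to_nearest_line(value: float, pitch: float = GRID_PITCH_M) -> float:
    return round(value / pitch) * pitch

def fuse_grid_crossing(x: float, y: float, line_direction: str):
    """Correct one coordinate of the estimated pose using a detected line crossing."""
    if line_direction == "vertical":      # a vertical line constrains x
        x = snap_to_nearest_line(x)
    elif line_direction == "horizontal":  # a horizontal line constrains y
        y = snap_to_nearest_line(y)
    return x, y

print(fuse_grid_crossing(1.23, 0.47, "vertical"))   # -> (1.0, 0.47), drift in x removed
```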

  15. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790
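
    A conceptual sketch of the nested search described above follows; the greedy set construction, the candidate counts and the plan_reaches oracle (standing in for the sampling-based motion planner) are assumptions made for illustration, not the authors' algorithm.

      def reachable_fraction(design, goal_samples, plan_reaches):
          # Score one design by the share of goal samples the planner can reach.
          hits = sum(1 for g in goal_samples if plan_reaches(design, g))
          return hits / len(goal_samples)

      def optimize_design_set(sample_design, goal_samples, plan_reaches,
                              n_designs=3, n_candidates=200):
          # Greedy set construction: repeatedly add the candidate design that
          # covers the most goal samples still unreached by the current set.
          remaining, chosen = list(goal_samples), []
          for _ in range(n_designs):
              candidates = [sample_design() for _ in range(n_candidates)]
              best = max(candidates,
                         key=lambda d: reachable_fraction(d, remaining, plan_reaches))
              chosen.append(best)
              remaining = [g for g in remaining if not plan_reaches(best, g)]
              if not remaining:
                  break
          return chosen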

  16. A taxonomy for user-healthcare robot interaction.

    PubMed

    Bzura, Conrad; Im, Hosung; Liu, Tammy; Malehorn, Kevin; Padir, Taskin; Tulu, Bengisu

    2012-01-01

    This paper evaluates existing taxonomies aimed at characterizing the interaction between robots and their users and modifies them for health care applications. The modifications are based on existing robot technologies and user acceptance of robotics. Characterization of the user, or in this case the patient, is a primary focus of the paper, as patients take on a unique new role as robot users. While therapeutic and monitoring-related applications for robots are still relatively uncommon, we believe they will begin to grow, and it is therefore important that the emerging relationship between robot and patient is well understood.

  17. New methods of measuring and calibrating robots

    NASA Astrophysics Data System (ADS)

    Janocha, Hartmut; Diewald, Bernd

    1995-10-01

    ISO 9283 and RIA R15.05 define industrial robot parameters that are used to compare the performance of different robots. Hitherto, however, no suitable measurement systems have been available. ICAROS is a system that combines photogrammetric procedures with an inertial navigation system. For the first time, this combination allows high-precision static and dynamic measurement of both the position and the orientation of the robot end effector. Thus, not only can the measurement data for determining all industrial robot parameters be acquired; by integrating a new overall calibration procedure, ICAROS also allows the absolute robot pose errors to be reduced to the range of the robot's repeatability. The integration of both system components as well as measurement and calibration results are presented in this paper, using a six-axis robot as an example. A further approach, also presented here, takes into consideration not only the individual robot errors but also the tolerances of workpieces. This allows the adjustment of off-line robot programs based on inexact or idealized CAD data in any pose, so that the robot position, which is defined relative to the workpiece to be processed, is achieved as required. This includes the possibility of transferring taught robot programs to other devices without additional effort. The adjustment is based on measuring the robot position with two miniaturized CCD cameras mounted near the end effector, which the robot carries along during the correction phase. In the area viewed by both cameras, the robot position is determined relative to prominent geometry elements, e.g. lines or holes. The nominal data to be compared with these measurements can either be calculated in modern off-line programming systems during robot programming, or be determined at the so-called master robot if a transfer of the robot program is desired.
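
    The program-adjustment idea can be sketched roughly as follows (hypothetical names, not the ICAROS implementation): the pose of a measured workpiece feature is compared with its nominal CAD pose, and the resulting correction transform is applied to every pose of the off-line program.

      import numpy as np

      def correction_transform(T_feature_nominal, T_feature_measured):
          # Transform mapping nominal (CAD) coordinates onto measured reality;
          # poses are 4x4 homogeneous matrices.
          return T_feature_measured @ np.linalg.inv(T_feature_nominal)

      def adjust_program(programmed_poses, T_corr):
          # Apply the same correction to every pose of the off-line program.
          return [T_corr @ T for T in programmed_poses]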

  18. Rolling bearing fault diagnosis based on information fusion using Dempster-Shafer evidence theory

    NASA Astrophysics Data System (ADS)

    Pei, Di; Yue, Jianhai; Jiao, Jing

    2017-10-01

    This paper presents a fault diagnosis method for rolling bearings based on information fusion. Acceleration sensors are arranged at different positions to acquire bearing vibration data as diagnostic evidence. Dempster-Shafer (D-S) evidence theory is used to fuse the multi-sensor data and improve diagnostic accuracy. The efficiency of the proposed method is demonstrated on a high-speed train transmission test bench. The experimental results show that the proposed method improves rolling bearing fault diagnosis accuracy compared with traditional signal analysis methods.
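
    For reference, a minimal implementation of Dempster's rule of combination, the fusion step named in the abstract, is sketched below; the fault hypotheses and mass values in the usage example are illustrative only and are not taken from the paper.

      from itertools import product

      def dempster_combine(m1, m2):
          # Combine two basic probability assignments given as
          # {frozenset_of_hypotheses: mass}.
          combined, conflict = {}, 0.0
          for (a, ma), (b, mb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + ma * mb
              else:
                  conflict += ma * mb           # mass falling on the empty set
          if conflict >= 1.0:
              raise ValueError("total conflict: evidence cannot be combined")
          return {h: m / (1.0 - conflict) for h, m in combined.items()}

      # two sensors, frame of discernment {outer-race fault, inner-race fault}
      s1 = {frozenset({'outer'}): 0.7, frozenset({'outer', 'inner'}): 0.3}
      s2 = {frozenset({'outer'}): 0.6, frozenset({'inner'}): 0.3,
            frozenset({'outer', 'inner'}): 0.1}
      print(dempster_combine(s1, s2))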

  19. A Unified Approach to Motion Control of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators, and to holonomic mobile robots, such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.

  20. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

    Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established, and the force Jacobian matrix is derived from the kinematics model; thus, the indirect force estimation model is established. Then, the relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step on obstacles using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment was carried out to validate the indirect force estimation model, and the adaptive gait was tested in another. The results show that the robot can successfully step on a 0.2 m-high obstacle. This paper thus provides a way for six-legged robots with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment at the robot's foot tips.
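
    The static relation behind such indirect force estimation can be sketched as follows: with a force Jacobian J relating the foot-tip force F to the joint torques via tau = J^T F, the external force is recovered as F = (J^T)^{-1} tau. The Jacobian and torque values below are placeholders, not the UP-2UPS values from the paper.

      import numpy as np

      def estimate_foot_force(jacobian, motor_torques):
          # Solve tau = J^T F for the external force at the foot tip.
          return np.linalg.solve(jacobian.T, motor_torques)

      # made-up Jacobian and measured torques
      J = np.array([[0.12, 0.00, 0.05],
                    [0.00, 0.12, 0.05],
                    [0.03, 0.03, 0.15]])
      tau = np.array([1.8, 1.6, 4.0])            # N*m from the three motors
      print(estimate_foot_force(J, tau))         # estimated force components in N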
