NASA Technical Reports Server (NTRS)
Mckee, James W.
1988-01-01
This final report describes the accomplishments of the General Purpose Intelligent Sensor Interface task of the Applications of Artificial Intelligence to Space Station grant for the period from October 1, 1987 through September 30, 1988. Portions of the First Biannual Report that have not been revised are not repeated here but are only referenced. The goal is to develop an intelligent sensor system that simplifies the design and development of expert systems that use sensors of physical phenomena as a data source. This research concentrates on the integration of image processing sensors and voice processing sensors with a computer designed for expert system development. The result of this research will be the design and documentation of a system in which the user need not be an expert in such areas as image processing algorithms, local area networks, image processor hardware selection or interfacing, television camera selection, voice recognition hardware selection, or analog signal processing. The user will be able to access data from video or voice sensors through standard LISP statements without any need to know about the sensor hardware or software.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
Intelligent Network-Centric Sensors Development Program
2012-07-31
Fragmentary excerpt: image sensor configurations (cone 360-degree LWIR, MWIR, and SWIR sensors); a reasoning process for matching sensor systems to algorithms; and a note that aberrations in coherent imaging and the specular nature of active imaging both contribute to image nonuniformity.
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistors or infrared sensors, an imaging sensor achieves a finer perception of the environmental light and can therefore guide more precise lighting control. Before the system operates, a large set of typical desk-lighting images is first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) is defined and assessed for these datasets, from which cluster benchmarks of the objective LEEMs are obtained. Third, both a single-LEEM control and a multiple-LEEM control are developed to realize optimal luminance tuning. When the system operates, it first captures a lighting image with a wearable camera, then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks, and finally applies the single-LEEM or multiple-LEEM control to obtain an optimal lighting effect. Experimental results show that the proposed system can tune the LED lamp automatically in response to changes in environmental luminance. PMID:28208781
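As an illustration of the control loop described above, the following Python sketch compares an objective LEEM computed from a captured frame against a stored benchmark and nudges the LED duty cycle accordingly. The metric definitions, the proportional gain, and the benchmark value are all invented for the example; this is not the authors' implementation.

    import numpy as np

    def objective_leems(img):
        # Two illustrative objective LEEMs for a grayscale frame:
        # mean luminance and a simple spatial-uniformity score.
        mean_lum = img.mean()
        uniformity = 1.0 - img.std() / (img.mean() + 1e-6)
        return np.array([mean_lum, uniformity])

    def single_leem_control(img, benchmark_lum, duty, gain=0.002):
        # One step of single-LEEM proportional control of the LED duty cycle:
        # raise the duty cycle if the scene is darker than the benchmark.
        lum = img.mean()
        duty += gain * (benchmark_lum - lum)
        return float(np.clip(duty, 0.0, 1.0))

    # Usage with a stand-in for a frame captured by the wearable camera.
    frame = np.random.rand(480, 640) * 255
    duty = single_leem_control(frame, benchmark_lum=120.0, duty=0.5)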
Automatic panoramic thermal integrated sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.
2005-05-01
Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defensive and offensive operations, as well as to serve as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.
Intelligent imaging systems for automotive applications
NASA Astrophysics Data System (ADS)
Thompson, Chris; Huang, Yingping; Fu, Shan
2004-03-01
In common with many other application areas, visual signals are becoming an increasingly important information source for automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper describes work in this field performed in C2VIP over the last decade, starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations, which are crucial for "intelligent" imaging systems: 1. careful arrangement of the sensor array; 2. dynamic self-calibration; 3. networking and processing; 4. fusion with other imaging sensors, at both the image level and the feature level, which provides much more flexibility and reliability in complex situations. We discuss how these problems can be addressed and what the outstanding issues are.
DOT National Transportation Integrated Search
2014-10-01
The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom-designed image sensor integrated circuit (IC) containing a trapezoid pixel array and a camera system that is capable of intelligent...
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed for the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) of the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution modes enable the CIS to reduce total power consumption while the scene remains static (no events). A prototype sensor of 176 × 144 pixels has been fabricated in a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
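A software analogue of the multi-resolution idea can be sketched as follows: the scene is monitored at 1/64 resolution (here emulated by block averaging), and the sensor is switched to full resolution only when frame-to-frame change exceeds a threshold. The block-averaging stand-in, the motion statistic, and the threshold are assumptions for illustration, not the chip's actual readout logic.

    import numpy as np

    def downscale(img, factor):
        # Block-average an image to 1/factor resolution (digital stand-in
        # for the sensor's native scaled-resolution readout modes).
        h = img.shape[0] - img.shape[0] % factor
        w = img.shape[1] - img.shape[1] % factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))

    def select_mode(prev_lowres, curr_lowres, threshold=8.0):
        # Stay at 1/64 resolution while the scene is static; request full
        # resolution when the mean absolute difference signals an event.
        motion = np.abs(curr_lowres - prev_lowres).mean()
        return 1 if motion > threshold else 64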
Airborne net-centric multi-INT sensor control, display, fusion, and exploitation systems
NASA Astrophysics Data System (ADS)
Linne von Berg, Dale C.; Lee, John N.; Kruer, Melvin R.; Duncan, Michael D.; Olchowski, Fred M.; Allman, Eric; Howard, Grant
2004-08-01
The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near infrared, short-wave and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronics intelligence sensors (ELINT), and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular "plug and play" capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.
Intelligent image processing for vegetation classification using multispectral LANDSAT data
NASA Astrophysics Data System (ADS)
Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.
2015-09-01
We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures, in order to obtain a high-resolution map in which we can delimit wooded areas and quantify the amount of combustible material present in those areas. This could provide important information for preventing fires and deforestation of wooded areas. The spectral RS input data acquired by the MSS sensor are treated as a randomly propagated remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and combining these spectral values with neighbouring-pixel information from each TM band, we can include contextual information in the ANN. The biggest challenge for conventional classifiers is reducing the number of components in the feature vector while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the Adaptive Modified Neural Network method is a promising and effective spectral method for segmentation and classification of RS images acquired with the MSS sensor.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be carried out simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, in which the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were also experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
NASA Astrophysics Data System (ADS)
Boldyreff, Anton S.; Bespalov, Dmitry A.; Adzhiev, Anatoly Kh.
2017-05-01
Methods of artificial intelligence are a good solution for forecasting weather phenomena because they allow large amounts of diverse data to be processed. In this paper a Recirculation Neural Network is implemented in a system for predicting thunderstorm events. Large amounts of experimental data from networks of lightning sensors and electric field mills are received and analyzed, and the average recognition accuracy for the sensor signals is calculated. It is shown that the Recirculation Neural Network is a promising solution for forecasting thunderstorms and related weather phenomena: it recognizes elements of the sensor signals with high efficiency and can compress images and highlight their characteristic features for subsequent recognition.
Overview of benefits, challenges, and requirements of wheeled-vehicle mounted infrared sensors
NASA Astrophysics Data System (ADS)
Miller, John Lester; Clayton, Paul; Olsson, Stefan F.
2013-06-01
Requirements for vehicle-mounted infrared sensors, especially as imagers evolve to the high-definition (HD) format, will be detailed and analyzed. Lessons learned from integrations of infrared sensors on armored vehicles, unarmored military vehicles and commercial automobiles will be discussed. Comparisons between sensors for driving and those for situational awareness, targeting and other functions will be presented. Conclusions will be drawn regarding future applications and installations. New business requirements for more advanced digital image processing algorithms in the sensor system will be discussed. Examples of these are smarter contrast/brightness adjustment algorithms, detail enhancement, intelligent blending (IR-Vis) modes, and augmented reality.
Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors
NASA Astrophysics Data System (ADS)
de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.
2008-02-01
This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures scalability, so it can be used for sensors with different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that permits the inclusion of high-level image preprocessing routines, such as faulty pixel detection and sensor calibration, in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable prototyping tool when choosing the best A/D converter type for the application. Development time was significantly reduced by the use of this technology.
An Intelligent Fingerprint-Biometric Image Scrambling Scheme
NASA Astrophysics Data System (ADS)
Khan, Muhammad Khurram; Zhang, Jiashu
To thwart attacks, and to address the liveness and retransmission issues of biometric images, we have investigated challenge/response-based scrambled transmission of biometric images. We propose an intelligent biometric sensor that has the computational power to receive challenges from the authentication server and to generate a response to each challenge together with the encrypted biometric image. We utilize the FRT for biometric image encryption and use its scaling factors and random phase masks as additional secret keys. In addition, the random phase masks are generated chaotically by a chaotic map to further improve the encryption security. Experimental and simulation results show that the presented system is secure, robust, and deters the risks of attack on biometric image transmission.
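The encryption step can be illustrated with a much-simplified sketch. It assumes FRT denotes the fractional Fourier transform and substitutes ordinary FFTs for it, so the scaling-factor keys are not represented; the logistic-map parameters used to generate the chaotic phase masks are likewise invented. The sketch shows only the double random-phase-mask idea, not the authors' scheme.

    import numpy as np

    def logistic_mask(shape, x0=0.3927, r=3.9999):
        # Chaotic phase mask from a logistic map; seed x0 and rate r act as keys.
        n = int(np.prod(shape))
        x = np.empty(n)
        for i in range(n):
            x0 = r * x0 * (1.0 - x0)
            x[i] = x0
        return np.exp(2j * np.pi * x.reshape(shape))

    def encrypt(img, key1, key2):
        # Double random phase encoding with plain FFTs standing in for the FRT.
        return np.fft.fft2(np.fft.fft2(img * key1) * key2)

    def decrypt(cipher, key1, key2):
        return np.real(np.fft.ifft2(np.fft.ifft2(cipher) / key2) / key1)

    # Usage: a biometric image is encrypted with two chaotically generated masks.
    img = np.random.rand(64, 64)
    k1, k2 = logistic_mask((64, 64), x0=0.31), logistic_mask((64, 64), x0=0.71)
    restored = decrypt(encrypt(img, k1, k2), k1, k2)   # ~= img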
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities of human vision could in the near future be realized by a new diffractive-optical hardware design for optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive optical elements (DOEs) in aperture space and in image space and seems to execute the three jobs at, or not far behind, the loci of the images of objects.
De Momi, E; Ferrigno, G
2010-01-01
The robot and sensors integration for computer-assisted surgery and therapy (ROBOCAST) project (FP7-ICT-2007-215190) is co-funded by the European Union within the Seventh Framework Programme in the field of information and communication technologies. The ROBOCAST project focuses on robot- and artificial-intelligence-assisted keyhole neurosurgery (tumour biopsy and local drug delivery along straight or turning paths). The goal of this project is to assist surgeons with a robotic system controlled by an intelligent high-level controller (HLC) able to gather and integrate information from the surgeon, from diagnostic images, and from an array of on-field sensors. The HLC integrates pre-operative and intra-operative diagnostic data and measurements, intelligence augmentation, multiple-robot dexterity, and multiple sensory inputs in a closed-loop cooperating scheme, including a smart interface for improved haptic immersion and integration. This paper, after describing the overall architecture, focuses on the intelligent trajectory planner based on risk estimation and human criticism. The current status of development is reported, and first tests of the planner, using a real image stack and a risk-descriptor phantom, are shown. The advantage of using a fuzzy risk description is that the knowledge base can be updated in the field without the intervention of a knowledge engineer.
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
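As a rough illustration of spectral compression, the sketch below projects each pixel's spectrum onto a small number of principal components computed from the scene. This is a generic dimensionality-reduction stand-in rather than the COTS algorithms named above; the band and component counts are arbitrary.

    import numpy as np

    def spectral_compress(cube, n_components=10):
        # PCA along the spectral axis of an HSI cube shaped (rows, cols, bands).
        r, c, b = cube.shape
        X = cube.reshape(-1, b).astype(np.float64)
        mean = X.mean(axis=0)
        Xc = X - mean
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = Vt[:n_components]                 # principal spectral directions
        scores = Xc @ basis.T                     # compressed representation
        return scores.reshape(r, c, n_components), basis, mean

    def spectral_decompress(scores, basis, mean):
        r, c, k = scores.shape
        X = scores.reshape(-1, k) @ basis + mean
        return X.reshape(r, c, basis.shape[1])

    # Usage: a 220-band cube reduced to 10 spectral components (22x spectrally).
    cube = np.random.rand(64, 64, 220)
    scores, basis, mean = spectral_compress(cube)
    approx = spectral_decompress(scores, basis, mean)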
Conference on Space and Military Applications of Automation and Robotics
NASA Technical Reports Server (NTRS)
1988-01-01
Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.
NASA Astrophysics Data System (ADS)
Baskoro, Ario Sunar; Kabutomori, Masashi; Suga, Yasuo
An automatic welding system using Tungsten Inert Gas (TIG) welding with a vision sensor for the welding of aluminum pipe was constructed. This research studies the intelligent welding of aluminum alloy pipe 6063S-T5 in a fixed position, with a moving welding torch and an AC welding machine. The monitoring system consists of a vision sensor using a charge-coupled device (CCD) camera to monitor the backside image of the molten pool. The captured image is processed to recognize the edge of the molten pool by an image processing algorithm. A neural network model for welding speed control was constructed to perform the process automatically. The experimental results show the effectiveness of the control system, confirmed by good detection of the molten pool and sound welds.
Scalable sensor management for automated fusion and tactical reconnaissance
NASA Astrophysics Data System (ADS)
Walls, Thomas J.; Wilson, Michael L.; Partridge, Darin C.; Haws, Jonathan R.; Jensen, Mark D.; Johnson, Troy R.; Petersen, Brad D.; Sullivan, Stephanie W.
2013-05-01
The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities operate largely independent of one another, providing a selection of operating modes but not an integrated intelligence product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. Management of sensors and user agents takes place over standard network protocols such that any number and combination of sensors and user agents, either on the local network or connected via data link, can register with the SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant sensor system. The SMS architecture will be described and results from several flight tests and simulations will be shown.
Onboard Processor for Compressing HSI Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)
2002-01-01
With EO-1 Hyperion and MightySat in orbit NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), who has an extensive heritage in HSI, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100, while retaining the necessary spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial-off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will also be employed as part of the overall compression processing sequence. Over the next year elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed while minimizing the loss of information.
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e. one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Intelligent Sensors: An Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
The need for intelligent sensors as a critical component for Integrated System Health Management (ISHM) is fairly well recognized by now. Even the definition of what constitutes an intelligent sensor (or smart sensor) is well documented and stems from an intuitive desire to get the best quality measurement data that forms the basis of any complex health monitoring and/or management system. If the sensors, i.e. the elements closest to the measurand, are unreliable then the whole system works with a tremendous handicap. Hence, there has always been a desire to distribute intelligence down to the sensor level, and give it the ability to assess its own health thereby improving the confidence in the quality of the data at all times. This paper proposes the development of intelligent sensors as an integrated systems approach, i.e. one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines some fundamental issues in the development of intelligent sensors under the following two categories: Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
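One way to picture a Physical Intelligent Sensor is as a measurement bundled with a self-assessment. The sketch below attaches a confidence value to each reading using a range check and a stuck-sensor check; the sensor name, limits, and health rules are illustrative assumptions, not the project's algorithms.

    from dataclasses import dataclass, field
    from collections import deque
    import statistics

    @dataclass
    class IntelligentSensor:
        name: str
        lo: float
        hi: float
        history: deque = field(default_factory=lambda: deque(maxlen=50))

        def assess(self, value: float) -> dict:
            # Self-assessment: out-of-range readings get zero confidence,
            # a perfectly flat recent history suggests a stuck sensor.
            self.history.append(value)
            confidence = 1.0
            if not (self.lo <= value <= self.hi):
                confidence = 0.0
            elif len(self.history) >= 10 and statistics.pstdev(self.history) == 0:
                confidence = 0.2
            return {"sensor": self.name, "value": value, "confidence": confidence}

    # Usage with a hypothetical test-stand pressure transducer.
    pt = IntelligentSensor("chamber_pressure_psi", lo=0.0, hi=1500.0)
    print(pt.assess(842.7))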
I-ImaS: intelligent imaging sensors
NASA Astrophysics Data System (ADS)
Griffiths, J.; Royle, G.; Esbrand, C.; Hall, G.; Turchetta, R.; Speller, R.
2010-08-01
Conventional x-radiography uniformly irradiates the relevant region of the patient. Across that region, however, there is likely to be significant variation in both the thickness and pathological composition of the tissues present, which means that the x-ray exposure conditions selected, and consequently the image quality achieved, are a compromise. The I-ImaS concept eliminates this compromise by intelligently scanning the patient to identify the important diagnostic features, which are then used to adaptively control the x-ray exposure conditions at each point in the patient. In this way optimal image quality is achieved throughout the region of interest whilst maintaining or reducing the dose. An I-ImaS system has been built under an EU Framework 6 project and has undergone pre-clinical testing. The system is based upon two rows of sensors controlled via an FPGA based DAQ board. Each row consists of a 160 mm × 1 mm linear array of ten scintillator coated 3T CMOS APS devices with 32 μm pixels and a readable array of 520 × 40 pixels. The first sensor row scans the patient using a fraction of the total radiation dose to produce a preview image, which is then interrogated to identify the optimal exposure conditions at each point in the image. A signal is then sent to control a beam filter mechanism to appropriately moderate x-ray beam intensity at the patient as the second row of sensors follows behind. Tests performed on breast tissue sections found that the contrast-to-noise ratio in over 70% of the images was increased by an average of 15% at an average dose reduction of 9%. The same technology is currently also being applied to baggage scanning for airport security.
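The preview-then-modulate idea can be summarized in a few lines: statistics from the low-dose preview strip are mapped to a beam-filter level for each scan position. The per-column mean, the linear mapping, and the number of filter levels are assumptions for illustration only, not the I-ImaS control law.

    import numpy as np

    def exposure_plan(preview, n_levels=8):
        # Map local statistics of the low-dose preview to a filter level per
        # column (heavier filtration where the preview signal is brightest).
        col_signal = preview.mean(axis=0)
        norm = (col_signal - col_signal.min()) / (np.ptp(col_signal) + 1e-9)
        return np.round(norm * (n_levels - 1)).astype(int)

    # Usage with a hypothetical 40 x 5200 pixel preview from the first sensor row.
    preview = np.random.rand(40, 5200)
    filter_levels = exposure_plan(preview)   # one filter setting per column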
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach to multi-object recognition for homeland security and defense based intelligent sensor networks. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches that fuse different information resources to understand dynamic environments, to support decision-making processes, and finally to achieve the mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object, as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object with the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and this knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
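The feature-scaling step can be sketched by resampling every segmented region onto a fixed grid before classification. The linear sampling below and the scikit-learn SVM are stand-ins for the paper's scaling and learning algorithms, and the regions and labels are invented.

    import numpy as np
    from sklearn.svm import SVC

    def scale_features(region, size=(16, 16)):
        # Resample a region of arbitrary shape onto a fixed 16x16 grid so every
        # object is described by the same number of feature dimensions.
        rows = np.linspace(0, region.shape[0] - 1, size[0]).astype(int)
        cols = np.linspace(0, region.shape[1] - 1, size[1]).astype(int)
        return region[np.ix_(rows, cols)].ravel()

    # Usage with two hypothetical segmented regions and labels.
    regions = [np.random.rand(40, 60), np.random.rand(25, 30)]
    labels = ["tank", "soldier"]
    X = np.array([scale_features(r) for r in regions])
    clf = SVC(kernel="rbf").fit(X, labels)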
NASA Astrophysics Data System (ADS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando
2006-05-01
This paper describes the application of intelligent sensors to Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated systems approach, i.e. one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with the rocket test stands. These smart elements can be sensors, actuators or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
iTRAC : intelligent video compression for automated traffic surveillance systems.
DOT National Transportation Integrated Search
2010-08-01
Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased bitrate requirements this mean...
Sensor-based architecture for medical imaging workflow analysis.
Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis
2014-08-01
The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory through a network of intelligent sensors. The proposed integration framework follows a hybrid SOA architecture based on an information sensor network capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository metadata, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that is then semantically indexed in the framework engine. The platform was deployed in the cardiology department of a central hospital, allowing the identification of process characteristics and user behaviours that were unknown before this solution was put into use.
Interactive analysis of geodata based intelligence
NASA Astrophysics Data System (ADS)
Wagner, Boris; Eck, Ralf; Unmüessig, Gabriel; Peinsipp-Byma, Elisabeth
2016-05-01
When a spatiotemporal event happens, multi-source intelligence data are gathered to understand the problem, and strategies for solving the problem are investigated. The difficulties arising from handling spatial and temporal intelligence data represent the main problem. The map can be the bridge that visualizes the data and provides the most understandable model for all stakeholders. For the analysis of geodata-based intelligence data, a software working environment was developed that combines geodata with optimized ergonomics, so that interaction with the common operational picture (COP) is essentially facilitated. The composition of the COP is based on geodata services, which are normalized by international standards of the Open Geospatial Consortium (OGC). The basic geodata are combined with intelligence data from images (IMINT) and humans (HUMINT), stored in a NATO Coalition Shared Data server (CSD). These intelligence data can be combined with further information sources, e.g., live sensors. As a result a COP is generated and an interaction suitable for the specific workspace is added. This allows users to work interactively with the COP, i.e., searching with an on-board CSD client for suitable intelligence data and integrating them into the COP. Furthermore, users can enrich the scenario with findings from the data of interactive live sensors and add data from other sources. This allows intelligence services to contribute effectively to the process by which military and disaster management operations are organized.
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (Complementary Metal-Oxide-Semiconductor) image sensor is a type of solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and so on. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing a reduction in image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks based on two clues, namely that both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the constraints over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus solving inadequate detail recovery effectively and alleviating color distortion. Moreover, the locally consistent MRF framework obtains details while maintaining better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction will facilitate high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor, built in a standard 0.35 μm CMOS technology, that includes non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm2. Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
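A digital counterpart of the Minima/Maxima Unit is easy to state: the minimum and maximum over each 2×2 pixel neighbourhood. The sketch below tiles the frame into non-overlapping 2×2 blocks, which is an assumption about the window arrangement rather than a description of the analog circuit.

    import numpy as np

    def minmax_2x2(img):
        # Minimum and maximum in every non-overlapping 2x2 neighbourhood.
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
        return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

    # Usage on a simulated 64x64 frame.
    frame = np.random.randint(0, 256, (64, 64))
    local_min, local_max = minmax_2x2(frame)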
Bioinspired polarization navigation sensor for autonomous munitions systems
NASA Astrophysics Data System (ADS)
Giakos, G. C.; Quang, T.; Farrahi, T.; Deshpande, A.; Narayan, C.; Shrestha, S.; Li, Y.; Agarwal, M.
2013-05-01
Small unmanned aerial vehicles (SUAVs), micro air vehicles (MAVs), automated target recognition (ATR), and munitions guidance require extreme operational agility and robustness, which can be partially provided by efficient bioinspired imaging sensor designs capable of delivering enhanced guidance, navigation and control (GNC) capabilities. Bioinspired imaging technology can prove useful either for long-distance surveillance of targets in a cluttered environment, or at close distances limited by space surroundings and obstructions. The purpose of this study is to explore the phenomenology of image formation by different insect eye architectures, which would directly benefit the areas of defense and security, in the following four distinct areas: a) fabrication of the bioinspired sensor, b) optical architecture, c) topology, and d) artificial intelligence. The outcome of this study indicates that bioinspired imaging can impact the areas of defense and security significantly through dedicated designs fitted to different combat scenarios and applications.
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a multimodal sensor analysis framework which enables the analysis of sensors in a multimodal aspect. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and it connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
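A minimal version of such a detector can be written as threshold tests on smartphone accelerometer traces; the axis conventions and thresholds below are invented for illustration and are not the framework's actual rules.

    def detect_anomalies(accel_z, accel_x, t, z_thr=3.0, brake_thr=-2.5):
        # Flag pothole/speed-bump crossings (vertical spikes) and hard brakes
        # (sustained longitudinal deceleration); thresholds in m/s^2 are assumed.
        events = []
        for i in range(1, len(t)):
            if abs(accel_z[i]) > z_thr:
                events.append((t[i], "pothole_or_bump"))
            if accel_x[i] < brake_thr and accel_x[i - 1] < brake_thr:
                events.append((t[i], "hard_brake"))
        return events

    # Usage: timestamps plus vertical and longitudinal acceleration samples.
    t = [0.0, 0.1, 0.2, 0.3]
    events = detect_anomalies([0.1, 4.2, 0.3, 0.2], [-0.5, -2.8, -2.9, -0.4], t)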
Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing
Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge
2011-01-01
This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor running a standalone system that manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation, and view captured and/or processed images, remotely. PMID:22163739
Intelligent Sensors and Components for On-Board ISHM
NASA Technical Reports Server (NTRS)
Figueroa, Jorge; Morris, Jon; Nickles, Donald; Schmalzel, Jorge; Rauth, David; Mahajan, Ajay; Utterbach, L.; Oesch, C.
2006-01-01
A viewgraph presentation on the development of intelligent sensors and components for on-board Integrated Systems Health Management (ISHM) is shown. The topics include: 1) Motivation; 2) Integrated Systems Health Management (ISHM); 3) Intelligent Components; 4) IEEE 1451; 5) Intelligent Sensors; 6) Application; and 7) Future Directions.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside a protected area ensures maximum protection and at the same time reduces the workload on personnel, increases the reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at the large camera apertures required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Image Registration of High-Resolution Uav Data: the New Hypare Algorithm
NASA Astrophysics Data System (ADS)
Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.
2013-08-01
Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, they are, thanks to their agility, suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. We therefore developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie-point generation and image registration. We demonstrate this approach by registering 39 still images from a high-resolution image stream acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.
Automated baseline change detection -- Phases 1 and 2. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, E.
1997-10-31
The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD) based on image subtraction. Absolute change detection is based on detecting any visible physical change, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract, DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS), and integrated with its electronics and software. This vehicle was designed especially to navigate in DOE waste storage facilities. Initial system testing was performed at Fernald in June 1996. After some further development and more extensive integration, the prototype integrated system was installed and tested at the Radioactive Waste Management Complex (RWMC) at INEEL from April 1997 through the present (November 1997). The integrated system, composed of the ABCD imaging software and the IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System - Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI interface and in hard copy. Further work is needed to make the brassboard system more operationally robust.
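The core of image-subtraction change detection fits in a few lines: difference the baseline and current images of the same barrel and flag large changed regions. The sketch assumes the two images are already registered (the system relies on precise camera repositioning for this) and uses invented thresholds.

    import numpy as np

    def baseline_change(baseline, current, diff_thr=25, min_pixels=50):
        # Per-pixel absolute difference, thresholded, then a simple size test
        # to decide whether a visible physical change should be reported.
        diff = np.abs(current.astype(np.int16) - baseline.astype(np.int16))
        changed = diff > diff_thr
        return bool(changed.sum() >= min_pixels), changed

    # Usage with two hypothetical 8-bit grayscale inspection images.
    base = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    curr = base.copy()
    flag, mask = baseline_change(base, curr)   # flag is False: nothing changed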
Intelligent Sensors for Integrated Systems Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John L.
2008-01-01
IEEE 1451 Smart Sensors contribute to a number of ISHM goals, including cost reduction achieved through: a) improved configuration management (TEDS); and b) plug-and-play re-configuration. Intelligent Sensors are an adaptation of Smart Sensors to include ISHM algorithms; this offers further benefits: a) sensor validation, b) confidence assessment of measurements, and c) distributed ISHM processing. Space-qualified intelligent sensors are possible, subject to a) size, mass, and power constraints and b) bus structure/protocol.
OSUS sensor integration in Army experiments
NASA Astrophysics Data System (ADS)
Ganger, Robert; Nowicki, Mark; Kovach, Jesse; Gregory, Timothy; Liss, Brian
2016-05-01
Live sensor data was obtained from an Open Standard for Unattended Sensors (OSUS, formerly Terra Harvest)-based system provided by the Army Research Lab (ARL) and fed into the Communications-Electronics Research, Development and Engineering Center (CERDEC)-sponsored Actionable Intelligence Technology Enabled Capabilities Demonstration (AI-TECD) Micro Cloud during the E15 demonstration event that took place at Fort Dix, New Jersey, in July 2015. This data was an enabler for other technologies, such as Sensor Assignment to Mission (SAM), Sensor Data Server (SDS), and the AI-TECD Sensor Dashboard, providing rich sensor data (including images) for use by the Company Intel Support Team (CoIST) analyst. This paper describes how the OSUS data was integrated and used in the E15 event to support CoIST operations.
Time Critical Targeting: Predictive Vs Reactionary Methods An Analysis For The Future
2002-06-01
To conduct the analysis of time-critical targets, a four-step process is used: research first determines which future aircraft, spacecraft, and weapons are most promising; these are then categorized for use in either the reactive or preemptive method. The sensors considered include electro-optical (EO) sensors, thermal imagers, and signal intelligence.
Intelligent Control for Future Autonomous Distributed Sensor Systems
2007-03-26
When a scenario is recognized, the use of a pre-computed reconfiguration solution that fits the recognized scenario could allow reconfiguration to take place. In the simulation study, data were loaded into a program developed to visualize the seabed, with frames used to denote the target. The display generates separate images for each eye; users wear lightweight, inexpensive polarized eyeglasses and see a stereoscopic image.
Machine intelligence and autonomy for aerospace systems
NASA Technical Reports Server (NTRS)
Heer, Ewald (Editor); Lum, Henry (Editor)
1988-01-01
The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.
Color regeneration from reflective color sensor using an artificial intelligent technique.
Saracoglu, Ömer Galip; Altural, Hayriye
2010-01-01
A low-cost optical sensor based on reflective color sensing is presented. Artificial neural network models are used to improve the color regeneration from the sensor signals. Analog voltages of the sensor are successfully converted to RGB colors. The artificial intelligence models presented in this work enable color regeneration from the analog outputs of the color sensor. In addition, inverse modeling supported by an intelligent technique enables the sensor probe to be used as a colorimetric sensor that relates color changes to analog voltages.
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
Ambient agents: embedded agents for remote control and monitoring using the PANGEA platform.
Villarrubia, Gabriel; De Paz, Juan F; Bajo, Javier; Corchado, Juan M
2014-07-31
Ambient intelligence has advanced significantly during the last few years. The incorporation of image processing and artificial intelligence techniques has opened the possibility for such aspects as pattern recognition, thus allowing for a better adaptation of these systems. This study presents a new model of an embedded agent especially designed to be implemented in sensing devices with resource constraints. This new model of an agent is integrated within the PANGEA (Platform for the Automatic Construction of Organizations of Intelligent Agents) platform, an organizational-based platform, defining a new sensor role in the system and aimed at providing contextual information and interacting with the environment. A case study was developed over the PANGEA platform and designed using different agents and sensors responsible for providing user support at home in the event of incidents or emergencies. The system presented in the case study incorporates agents in Arduino hardware devices with recognition modules and illuminated bands; it also incorporates IP cameras programmed for automatic tracking, which can connect remotely in the event of emergencies. The user wears a bracelet, which contains a simple vibration sensor that can receive notifications about the emergency situation.
2012-04-09
This work addresses despeckling, superresolution, and convergence rate for a variety of admissible imaging array sensor signatures (RSS); a VA-inspired approach is proposed to specify the POCS used to attain the superresolution performance in the resulting SSP estimates.
Selected examples of intelligent (micro) sensor systems: state-of-the-art and tendencies
NASA Astrophysics Data System (ADS)
Hauptmann, Peter R.
2006-03-01
The capability to build ever more intelligence into sensors continues to drive their application in areas including automotive, aerospace and defense, industrial, intelligent house and wear, medical, and homeland security. It is difficult to overestimate the importance of intelligent (micro) sensors or sensor systems within advanced societies; one characteristic indicator is the global market for sensors, now about 20 billion annually. Sensors and sensor systems therefore play a dominant role in many fields, from the macro sensor in the manufacturing industry down to the miniaturized sensor for medical applications. The diversity of sensors precludes a complete description of the state of the art; selected examples will illustrate the current situation. MEMS (microelectromechanical systems) devices are of special interest in the context of micro sensor systems. In the past, the main requirements of a sensor were in terms of metrological performance: the electrical (or optical) signal produced by the sensor needed to match the measurand relatively accurately. Such basic functionality is no longer sufficient. Data processing near the sensor, the extraction of more information than just the direct sensor reading by signal analysis, system aspects, and multi-sensor information are the new demands. A shift can be observed away from designing perfect single-function transducers and towards the utilization of system-based sensors as system components. In the ideal case such systems contain sensors, actuators and electronics. They can be realized in monolithic, hybrid or discrete form; which kind is used depends on the application. In this article the state of the art of intelligent sensors and sensor systems is reviewed using selected examples. Future trends are deduced.
A force vector and surface orientation sensor for intelligent grasping
NASA Technical Reports Server (NTRS)
Mcglasson, W. D.; Lorenz, R. D.; Duffie, N. A.; Gale, K. L.
1991-01-01
The paper discusses a force vector and surface orientation sensor suitable for intelligent grasping. The use of a novel four degree-of-freedom force vector robotic fingertip sensor allows efficient, real time intelligent grasping operations. The basis of sensing for intelligent grasping operations is presented and experimental results demonstrate the accuracy and ease of implementation of this approach.
Miss-distance indicator for tank main gun systems
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan A.; Hillis, David B.
1994-07-01
The initial development of a passive, automated system to track bullet trajectories near a target to determine the 'miss distance,' and the corresponding correction necessary to bring the following round 'on target,' is discussed. The system consists of a visible wavelength CCD sensor, long focal length optics, and a separate IR sensor to detect the muzzle flash of the firing event; this is coupled to a PC-based image processing and automatic tracking system designed to follow the projectile trajectory by intelligently comparing frame-to-frame variation of the projectile tracer image. An error analysis indicates that the device is particularly sensitive to variation of the projectile time of flight to the target, and requires development of algorithms to estimate this value from the 2D images employed by the sensor to monitor the projectile trajectory. Initial results obtained by using a brassboard prototype to track training ammunition are promising.
Terrain Commander: a next-generation remote surveillance system
NASA Astrophysics Data System (ADS)
Finneral, Henry J.
2003-09-01
Terrain Commander is a fully automated forward observation post that provides the most advanced capability in surveillance and remote situational awareness. The Terrain Commander system was selected by the Australian Government for its NINOX Phase IIB Unattended Ground Sensor Program, with the first systems delivered in August of 2002. Terrain Commander offers next-generation target detection using multi-spectral peripheral sensors coupled with autonomous day/night image capture and processing. Subsequent intelligence is sent back through satellite communications with unlimited range to a highly sophisticated central monitoring station. The system can "stake out" remote locations clandestinely, 24 hours a day, for months at a time. With its fully integrated SATCOM system, almost any site in the world can be monitored from virtually any other location in the world. Terrain Commander automatically detects and discriminates intruders by precisely cueing its advanced EO subsystem. The system provides target detection capabilities with minimal nuisance alarms combined with the positive visual identification that authorities demand before committing a response. Terrain Commander uses an advanced beamforming acoustic sensor and a distributed array of seismic, magnetic and passive infrared sensors to detect, capture images of, and accurately track vehicles and personnel. Terrain Commander has a number of emerging military and non-military applications including border control, physical security, homeland defense, force protection and intelligence gathering. This paper reviews the development, capabilities and mission applications of the Terrain Commander system.
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have changed the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on un-cooled detector arrays that have made their application in UGS possible in terms of cost and power consumption. Visible light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
Preliminary investigations of active pixel sensors in Nuclear Medicine imaging
NASA Astrophysics Data System (ADS)
Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.
2009-06-01
Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker, with 525×525 25 μm square pixels, has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and an optical link enabling imaging rates of 10 frames/s. System noise was measured to be >100 e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ~80 μm and the system spatial resolution measured with a slit was ~450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ~25 e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small, as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.
Computer vision barrel inspection
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Gunderson, James; Walworth, Matthew E.
1994-02-01
One of the Department of Energy's (DOE) ongoing tasks is the storage and inspection of a large number of waste barrels containing a variety of hazardous substances. Martin Marietta is currently contracted to develop a robotic system -- the Intelligent Mobile Sensor System (IMSS) -- for the automatic monitoring and inspection of these barrels. The IMSS is a mobile robot with multiple sensors: video cameras, illuminators, laser ranging and barcode reader. We assisted Martin Marietta in this task, specifically in the development of image processing algorithms that recognize and classify the barrel labels. Our subsystem uses video images to detect and locate the barcode, so that the barcode reader can be pointed at the barcode.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver the required information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, integrating the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factor. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
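The DCT-domain weighting described above can be illustrated with the following simplified Python sketch; a fixed weight w stands in for the PSO-optimized weighting factor, and the adaptive histogram equalization step is omitted.

import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct(visible, infrared, w=0.6):
    # Both inputs are registered, same-size grayscale arrays.
    V = dctn(visible.astype(float), norm="ortho")
    I = dctn(infrared.astype(float), norm="ortho")
    fused_coeffs = w * V + (1.0 - w) * I       # weighted combination of DCT coefficients
    fused = idctn(fused_coeffs, norm="ortho")  # back to the spatial domain
    return np.clip(fused, 0, 255).astype(np.uint8)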
Fiber Optic Force Sensors for MRI-Guided Interventions and Rehabilitation: A Review
Iordachita, Iulian I.; Tokuda, Junichi; Hata, Nobuhiko; Liu, Xuan; Seifabadi, Reza; Xu, Sheng; Wood, Bradford; Fischer, Gregory S.
2017-01-01
Magnetic Resonance Imaging (MRI) provides both anatomical imaging with excellent soft tissue contrast and functional MRI (fMRI) of physiological parameters. The last two decades have witnessed increased interest in MRI-guided minimally invasive intervention procedures and fMRI for rehabilitation and neuroscience research. Accompanying the aspiration to utilize MRI to provide imaging feedback during interventions and brain activity for neuroscience study, there is an accumulated effort to utilize force sensors compatible with the MRI environment to meet the growing demand of these procedures, with the goal of enhanced interventional safety and accuracy, improved efficacy, and better rehabilitation outcomes. This paper summarizes the fundamental principles, the state-of-the-art development, and the challenges of fiber optic force sensors for MRI-guided interventions and rehabilitation. It provides an overview of MRI-compatible fiber optic force sensors based on different sensing principles, including light intensity modulation, wavelength modulation, and phase modulation. Extensive design prototypes are reviewed to illustrate the detailed implementation of these principles. Advantages and disadvantages of the sensor designs are compared and analyzed. A perspective on the future development of fiber optic sensors is also presented, which may have additional broad clinical applications. Future surgical interventions or rehabilitation will rely on intelligent force sensors to provide situational awareness to augment or complement human perception in these procedures. PMID:28652857
Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen
2009-01-01
Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Positioning System (GPS) and an Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a KF has been proposed in previous studies to overcome the limitations of the KF and to improve the performance of the INS/GPS integrated system. However, the accuracy requirements of general mobile mapping applications cannot be achieved easily, even with the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds an ANN with the conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034
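For readers unfamiliar with the RTS step, the following minimal one-dimensional sketch shows a forward Kalman filter followed by the backward Rauch-Tung-Striebel smoothing pass; the random-walk state model and noise values are illustrative only and are far simpler than a full INS/GPS error-state formulation.

import numpy as np

def kf_rts(z, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
    n = len(z)
    xf, pf, xp, pp = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)
    x, p = x0, p0
    for k in range(n):                     # forward Kalman filter
        xp[k], pp[k] = x, p + q            # predict (identity state transition)
        g = pp[k] / (pp[k] + r)            # Kalman gain
        x = xp[k] + g * (z[k] - xp[k])     # measurement update
        p = (1.0 - g) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):         # backward RTS smoothing pass
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs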
Sensor Webs: Autonomous Rapid Response to Monitor Transient Science Events
NASA Technical Reports Server (NTRS)
Mandl, Dan; Grosvenor, Sandra; Frye, Stu; Sherwood, Robert; Chien, Steve; Davies, Ashley; Cichy, Ben; Ingram, Mary Ann; Langley, John; Miranda, Felix
2005-01-01
To better understand how physical phenomena, such as volcanic eruptions, evolve over time, multiple sensor observations over the duration of the event are required. Using sensor web approaches that integrate original detections by in-situ sensors and global-coverage, lower-resolution, on-orbit assets with automated rapid-response observations from high resolution sensors, more observations of significant events can be made with increased temporal, spatial, and spectral resolution. This paper describes experiments using Earth Observing 1 (EO-1) along with other space and ground assets to implement progressive mission autonomy to identify, locate, and image phenomena such as wildfires, volcanoes, floods, and ice breakup with high resolution instruments. The software that plans, schedules, and controls the various satellite assets is used to form ad hoc constellations which enable collaborative autonomous image collections triggered by transient phenomena. This software is both flight- and ground-based, works in concert to run all of the required assets cohesively, and includes model-based artificial intelligence software.
REVIEW ARTICLE: Sensor communication technology towards ambient intelligence
NASA Astrophysics Data System (ADS)
Delsing, J.; Lindgren, P.
2005-04-01
This paper is a review of the fascinating development of sensors and the communication of sensor data. A brief historical introduction is given, followed by a discussion on architectures for sensor networks. Further, realistic specifications on sensor devices suitable for ambient intelligence and ubiquitous computing are given. Based on these specifications, the status and current frontline development are discussed. In total, it is shown that future technology for ambient intelligence based on sensor and actuator devices using standardized Internet communication is within the range of possibilities within five years.
NASA Technical Reports Server (NTRS)
Schmalzel, John L.; Morris, Jon; Turowski, Mark; Figueroa, Fernando; Oostdyk, Rebecca
2008-01-01
There are a number of architecture models for implementing Integrated Systems Health Management (ISHM) capabilities, for example, approaches based on the OSA-CBM and OSA-EAI models, or specific architectures developed in response to local needs. NASA's John C. Stennis Space Center (SSC) has developed one such version of an extensible architecture in support of rocket engine testing that integrates a palette of functions in order to achieve an ISHM capability. Among the functional capabilities that are supported by the framework are: prognostic models, anomaly detection, a data base of supporting health information, root cause analysis, intelligent elements, and integrated awareness. This paper focuses on the role that intelligent elements can play in ISHM architectures. We define an intelligent element as a smart element with sufficient computing capacity to support anomaly detection or other algorithms in support of ISHM functions. A smart element has the capabilities of supporting networked implementations of IEEE 1451.x smart sensor and actuator protocols. The ISHM group at SSC has been actively developing intelligent elements in conjunction with several partners at other Centers, universities, and companies as part of our ISHM approach for better supporting rocket engine testing. We have developed several implementations. Among the key features for these intelligent sensors is support for IEEE 1451.1 and incorporation of a suite of algorithms for determination of sensor health. Regardless of the potential advantages that can be achieved using intelligent sensors, existing large-scale systems are still based on conventional sensors and data acquisition systems. In order to bring the benefits of intelligent sensors to these environments, we have also developed virtual implementations of intelligent sensors.
NASA Spacecraft Watches as Eruption Reshapes African Volcano
2017-02-23
On Jan. 24, 2017, the Hyperion Imager on NASA's Earth Observing 1 (EO-1) spacecraft observed a new eruption at Erta'Ale volcano, Ethiopia, from an altitude of 438 miles (705 kilometers). Data were collected at a resolution of 98 feet (30 meters) per pixel at different visible and infrared wavelengths and were combined to create these images. A visible-wavelength image is on the left. An infrared image is shown on the right. The infrared image emphasizes the hottest areas and reveals a spectacular rift eruption, where a crack opens and lava gushes forth, fountaining into the air. The lava flows spread away from the crack. Erta'Ale is the location of a long-lived lava lake, and it remains to be seen whether the lava lake survives this new eruption. The observation was scheduled via the Volcano Sensor Web, a network of sensors linked by artificial intelligence software to create an autonomous global monitoring program of satellite observations of volcanoes. The Volcano Sensor Web was alerted to this new activity by data from another spacecraft. http://photojournal.jpl.nasa.gov/catalog/PIA11239
Bialas, Andrzej
2010-01-01
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high risk applications. To improve their robustness, the sensor systems should be developed in a restricted way to provide them with assurance. One of assurance creation methodologies is Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria compliant and pattern-based method for the intelligent sensors security development. The paper concisely presents this method and its evaluation for the sensor detecting methane in a mine, focusing on the security problem of the intelligent sensor definition and solution. The aim of the validation is to evaluate and improve the introduced method.
Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.
Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin
2018-04-02
Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper we present a robust method for mobile and real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
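The combined loss mentioned above can be thought of along the lines of the following hedged Python sketch, in which point-to-point residuals and a per-point temperature difference stand in for the full T-ICP formulation; the weight lambda_t and the data layout are assumptions.

import numpy as np

def combined_loss(src_pts, dst_pts, src_temps, dst_temps, R, t, lambda_t=0.3):
    # src_pts, dst_pts: Nx3 corresponding 3D points; *_temps: per-point temperatures.
    transformed = src_pts @ R.T + t
    geometric = np.mean(np.sum((transformed - dst_pts) ** 2, axis=1))   # geometric term
    thermographic = np.mean((src_temps - dst_temps) ** 2)               # thermal term
    return geometric + lambda_t * thermographic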
Yap, Florence G H; Yen, Hong-Hsu
2014-02-20
Wireless Visual Sensor Networks (WVSNs) where camera-equipped sensor nodes can capture, process and transmit image/video information have become an important new research area. As compared to the traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated, so intelligent schemes are required to capture/process/transmit visual data in resource-limited (hardware capability and bandwidth) WVSNs. WVSNs introduce new multi-disciplinary research opportunities of topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still in an early age and there are still many open issues that have not been fully addressed. More new novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.
Mobile robots exploration through cnn-based reinforcement learning.
Tai, Lei; Liu, Ming
2016-01-01
Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Based on the recent success of deep Q-networks in artificial intelligence, the robot controller achieved exploration and obstacle avoidance abilities in several different simulated environments. It is the first time that reinforcement learning is used to build an exploration strategy for mobile robots from raw sensor information.
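As a toy illustration of learning a control policy from depth-image features, the sketch below replaces the deep Q-network with a linear Q-function over pre-extracted CNN features; the feature extractor, action set, and hyperparameters are assumptions, not the authors' implementation.

import numpy as np

class LinearQAgent:
    def __init__(self, n_features, n_actions, lr=0.01, gamma=0.95, eps=0.1):
        self.W = np.zeros((n_actions, n_features))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, features):
        if np.random.rand() < self.eps:                # epsilon-greedy exploration
            return np.random.randint(self.W.shape[0])
        return int(np.argmax(self.W @ features))       # greedy action from Q-values

    def update(self, features, action, reward, next_features, done):
        target = reward if done else reward + self.gamma * np.max(self.W @ next_features)
        td_error = target - self.W[action] @ features  # temporal-difference error
        self.W[action] += self.lr * td_error * features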
2011-07-01
Multiple Aptitude Normative Intelligence Testing that Distinguishes U.S. Air Force MQ-1 Predator Sensor ... (AFRL-SA-WP-TR-2011-0006). The test battery is fashioned after the Wechsler Adult Intelligence Scale (Ref 11), the most widely used, individually administered test of intellectual ability (Multidimensional Aptitude Battery-II Manual, Sigma Assessment Systems Inc., London, 2003).
An Architecture for Intelligent Systems Based on Smart Sensors
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Fernando; Morris, Jon; Mandayam, Shreekanth; Polikar, Robi
2004-01-01
Based on requirements for a next-generation rocket test facility, elements of a prototype Intelligent Rocket Test Facility (IRTF) have been implemented. A key component is distributed smart sensor elements integrated using a knowledgeware environment. One of the specific goals is to imbue sensors with the intelligence needed to perform self diagnosis of health and to participate in a hierarchy of health determination at sensor, process, and system levels. The preliminary results provide the basis for future advanced development and validation using rocket test stand facilities at Stennis Space Center (SSC). We have identified issues important to further development of health-enabled networks, which should be of interest to others working with smart sensors and intelligent health management systems.
Intelligent lead: a novel HRI sensor for guide robots.
Cho, Keum-Bae; Lee, Beom-Hee
2012-01-01
This paper addresses the introduction of a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a certain desired distance that allows the user to work freely. Therefore, it is necessary to acquire control commands and the user's position on a real-time basis. We suggest a new sensor fusion system to achieve this objective, which we call the "intelligent lead". The objective of the intelligent lead is to acquire a stable distance from the user to the robot, a speed-control volume, and a turn-control volume, even when the robot platform with the intelligent lead is shaken on uneven ground. In this paper we explain a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), and their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the mobile robot with the intelligent lead is capable of reaching its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can be used as a new HRI sensor, combining a joystick and a distance measure, in mobile environments where the robot and the user move at the same time.
Accurate positioning based on acoustic and optical sensors
NASA Astrophysics Data System (ADS)
Cai, Kerong; Deng, Jiahao; Guo, Hualing
2009-11-01
The unattended laser target designator (ULTD) was designed to partly take the place of conventional LTDs for accurate positioning and laser marking. After analyzing the precision, accuracy, and errors of the acoustic sensor array, the requirements of the laser generator, and the technology of image analysis and tracking, the major system modules were determined. The target's classification, velocity, and position can be measured by the sensors, and a coded laser beam is then emitted intelligently to mark the optimal position at the optimal time. The conclusion shows that the ULTD can not only avoid security threats, be deployed on a large scale, and accomplish battle damage assessment (BDA), but is also well suited to information-based warfare.
Discrete shaped strain sensors for intelligent structures
NASA Technical Reports Server (NTRS)
Andersson, Mark S.; Crawley, Edward F.
1992-01-01
Design of discrete, highly distributed sensor systems for intelligent structures has been studied. Data obtained indicate that discrete strain-averaging sensors satisfy the functional requirements for distributed sensing of intelligent structures. Bartlett and Gauss-Hanning sensors, in particular, provide good wavenumber characteristics while meeting the functional requirements. They are characterized by good rolloff rates and positive Fourier transforms for all wavenumbers. For the numerical integration schemes, Simpson's rule is considered to be very simple to implement and consistently provides accurate results for five sensors or more. It is shown that a sensor system that satisfies the functional requirements can be applied to a structure that supports mode shapes with purely sinusoidal curvature.
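Simpson's rule, reported above as simple and accurate for five or more sensors, is shown below in a minimal composite form applied to illustrative strain readings from equally spaced sensors.

import numpy as np

def simpson(y, h):
    # Composite Simpson's rule; len(y) must be odd (an even number of intervals).
    y = np.asarray(y, dtype=float)
    return (h / 3.0) * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-2:2].sum())

strain = [0.0, 0.8, 1.0, 0.8, 0.0]   # illustrative readings from five sensors
print(simpson(strain, h=0.25))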
Wireless Monitoring of Automobile Tires for Intelligent Tires
Matsuzaki, Ryosuke; Todoroki, Akira
2008-01-01
This review discusses key technologies of intelligent tires focusing on sensors and wireless data transmission. Intelligent automobile tires, which monitor their pressure, deformation, wheel loading, friction, or tread wear, are expected to improve the reliability of tires and tire control systems. However, in installing sensors in a tire, many problems have to be considered, such as compatibility of the sensors with tire rubber, wireless transmission, and battery installments. As regards sensing, this review discusses indirect methods using existing sensors, such as that for wheel speed, and direct methods, such as surface acoustic wave sensors and piezoelectric sensors. For wireless transmission, passive wireless methods and energy harvesting are also discussed. PMID:27873979
Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2007-01-01
This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Selected terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability among integrated, intelligent terrestrial sensor webs and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.
Fast Human Detection for Intelligent Monitoring Using Surveillance Visible Sensors
Ko, Byoung Chul; Jeong, Mira; Nam, JaeYeal
2014-01-01
Human detection using visible surveillance sensors is an important and challenging work for intruder detection and safety management. The biggest barrier of real-time human detection is the computational time required for dense image scaling and scanning windows extracted from an entire image. This paper proposes fast human detection by selecting optimal levels of image scale using each level's adaptive region-of-interest (ROI). To estimate the image-scaling level, we generate a Hough windows map (HWM) and select a few optimal image scales based on the strength of the HWM and the divide-and-conquer algorithm. Furthermore, adaptive ROIs are arranged per image scale to provide a different search area. We employ a cascade random forests classifier to separate candidate windows into human and nonhuman classes. The proposed algorithm has been successfully applied to real-world surveillance video sequences, and its detection accuracy and computational speed show a better performance than those of other related methods. PMID:25393782
NASA Astrophysics Data System (ADS)
Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting
2010-11-01
An imaging spectrometer on a space remote sensing satellite requires the shortwave band from 2.1 μm to 3 μm, one of the most important bands in remote sensing. We designed an infrared sub-system of the imaging spectrometer using a homemade 640×1 InGaAs shortwave infrared sensor operating as a focal plane array (FPA), which requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 °C. This paper studies a noise model for the FPA system, investigates the relationship between temperature and dark-current noise, and adopts an incremental PID algorithm to generate a PWM wave in order to control the sensor temperature. The FPGA design is composed of four modules. All modules are coded in VHDL and implemented in an APA300 FPGA device. Experiments show that the intelligent temperature control system succeeds in controlling the sensor temperature.
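The incremental (velocity-form) PID update referred to above can be sketched in a few lines; the gains, limits, and setpoint handling below are illustrative, and the real controller runs in VHDL on the FPGA rather than in Python.

class IncrementalPID:
    def __init__(self, kp, ki, kd, duty=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.duty = duty               # current PWM duty cycle (0..1)
        self.e1 = 0.0                  # error at step k-1
        self.e2 = 0.0                  # error at step k-2

    def step(self, setpoint, measured):
        e = setpoint - measured
        # Incremental form: only the change in output is computed each cycle.
        delta = (self.kp * (e - self.e1)
                 + self.ki * e
                 + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.duty = min(1.0, max(0.0, self.duty + delta))
        return self.duty

pid = IncrementalPID(kp=0.08, ki=0.02, kd=0.01)
duty = pid.step(setpoint=-15.0, measured=-14.3)   # new duty cycle for the cooler's PWM drive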
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation factor of 200 applied to the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible band CAOS smart camera is successfully demonstrated operating in the CDMA mode using up to 4096-bit Walsh-design CAOS pixel codes with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
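The CDMA-style encode/decode idea can be illustrated with the short Python sketch below, which uses bipolar Walsh (Hadamard) codes and matched-filter correlation; real DMD micro-mirrors switch between on/off states rather than +/-1, and the pixel count and code length here are toy values.

import numpy as np
from scipy.linalg import hadamard

n_pixels, code_len = 8, 64
codes = hadamard(code_len)[1:n_pixels + 1]          # one +/-1 Walsh code per CAOS pixel
irradiance = np.random.rand(n_pixels)               # unknown per-pixel scene irradiance
detector_signal = codes.T @ irradiance              # point detector sums all coded pixels over time
recovered = (codes @ detector_signal) / code_len    # correlation (matched-filter) decoding
print(np.allclose(recovered, irradiance))           # True: code orthogonality separates the pixels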
Smart sensor for terminal homing
NASA Astrophysics Data System (ADS)
Panda, D.; Aggarwal, R.; Hummel, R.
1980-01-01
The practical scene matching problem presents complications that require extending classical image processing capabilities. Certain aspects of the scene matching problem which must be addressed by a smart sensor for terminal homing are discussed. First, a philosophy for treating the matching problem in the terminal homing scenario is outlined. Then certain aspects of the feature extraction process and symbolic pattern matching are considered. It is thought that, in the future, general ideas from artificial intelligence will become more useful for the terminal homing requirements of fast scene recognition and pattern matching.
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car microprocessor uses the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers. The image sampling sensor uses the CMOS digital camera OV7725. The acquired track data are processed by the corresponding algorithm to obtain track sideline information. At the same time, pulse width modulation (PWM) is used to control the motor and servo movements, and motor speed control and servo steering control are realized based on the digital incremental PID algorithm. In the project design, the IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, and thereby complete the design of the intelligent car control system.
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
Sensor and information fusion for improved hostile fire situational awareness
NASA Astrophysics Data System (ADS)
Scanlon, Michael V.; Ludwig, William D.
2010-04-01
A research-oriented Army Technology Objective (ATO) named Sensor and Information Fusion for Improved Hostile Fire Situational Awareness uniquely focuses on the underpinning technologies to detect and defeat any hostile threat before, during, and after its occurrence. This is a joint effort led by the Army Research Laboratory, with the Armaments and the Communications-Electronics Research, Development, and Engineering Centers (ARDEC and CERDEC) as partners. It addresses distributed sensor fusion and collaborative situational awareness enhancements, focusing on the underpinning technologies to detect/identify potential hostile shooters prior to firing a shot and to detect/classify/locate the firing point of hostile small arms, mortars, rockets, RPGs, and missiles after the first shot. A field experiment addressed not only diverse-modality sensor performance and sensor fusion benefits, but also gathered useful data to develop and demonstrate the ad hoc networking and dissemination of relevant data and actionable intelligence. Represented at this field experiment were various sensor platforms such as UGS, soldier-worn sensors, manned ground vehicles, UGVs, UAVs, and helicopters. This ATO continues to evaluate applicable technologies including retro-reflection, UV, IR, visible, glint, LADAR, radar, acoustic, seismic, E-field, narrow-band emission, and image processing techniques to detect the threats with very high confidence. Networked fusion of multi-modal data will reduce false alarms and improve actionable intelligence by distributing grid coordinates, detection report features, and imagery of threats.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
NASA Technical Reports Server (NTRS)
Bandhil, Pavan; Chitikeshi, Sanjeevi; Mahajan, Ajay; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as part of an integrated systems approach, i.e., one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow them to improve with time. Under a project being undertaken at NASA's Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Integrated Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS). The PIS discussed here consists of a thermocouple used to read temperature in analog form, which is then converted into digital values. A microprocessor collects the sensor readings and runs numerous embedded event detection routines on the collected data; if any event is detected, it is reported, stored, and sent to a remote system through an Ethernet connection. Hence the output of the PIS is data coupled with a confidence factor in the reliability of the data, which leads to information on the health of the sensor at all times. All protocols are consistent with IEEE 1451.x standards. This work lays the foundation for the next generation of smart devices that have embedded intelligence for distributed decision making capabilities.
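A hedged sketch of the PIS processing loop is given below: each reading is assessed against recent history, and the output couples the data with a confidence factor; the acquisition and Ethernet reporting layers, the window size, and the outlier rule are all illustrative assumptions.

import statistics

WINDOW = 20
history = []

def assess_confidence(value, window):
    # Reduce confidence when the new reading is an outlier versus recent history.
    if len(window) < 5:
        return 1.0
    mu = statistics.mean(window)
    sigma = statistics.pstdev(window) or 1e-6
    z = abs(value - mu) / sigma
    return max(0.0, 1.0 - 0.2 * max(0.0, z - 3.0))   # penalize readings beyond 3 sigma

def process_reading(value):
    conf = assess_confidence(value, history)
    history.append(value)
    if len(history) > WINDOW:
        history.pop(0)
    # Output is the measurement plus a confidence factor, i.e., data with health context.
    return {"temperature": value, "confidence": conf}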
Neural Network Substorm Identification: Enabling TREx Sensor Web Modes
NASA Astrophysics Data System (ADS)
Chaddock, D.; Spanswick, E.; Arnason, K. M.; Donovan, E.; Liang, J.; Ahmad, S.; Jackel, B. J.
2017-12-01
Transition Region Explorer (TREx) is a ground-based sensor web of optical and radio instruments that is presently being deployed across central Canada. The project consists of an array of co-located blue-line, full-colour, and near-infrared all-sky imagers, imaging riometers, proton aurora spectrographs, and GNSS systems. A key goal of the TREx project is to create the world's first (artificially) intelligent sensor web for remote sensing of space weather. The sensor web will autonomously control and coordinate instrument operations in real-time. To accomplish this, we will use real-time in-line analytics of TREx and other data to dynamically switch between operational modes. An operating mode could be, for example, to have a blue-line imager gather data at a cadence one or two orders of magnitude higher than in its 'baseline' mode. The software decision to increase the imaging cadence would be made in response to an anticipated increase in auroral activity or other programmatic requirements. Our first test for TREx's sensor web technologies is to develop the capacity to autonomously alter the TREx operating mode prior to a substorm expansion phase onset. In this paper, we present our neural network analysis of historical optical and riometer data and our ability to predict an optical onset. We explore the preliminary insights into using a neural network to pick out trends and features which it deems are similar among substorms.
Health-Enabled Smart Sensor Fusion Technology
NASA Technical Reports Server (NTRS)
Wang, Ray
2012-01-01
A process was designed to fuse data from multiple sensors in order to make a more accurate estimation of the environment and overall health in an intelligent rocket test facility (IRTF), to provide reliable, high-confidence measurements for a variety of propulsion test articles. The object of the technology is to provide sensor fusion based on a distributed architecture. Specifically, the fusion technology is intended to succeed in providing health condition monitoring capability at the intelligent transceiver, such as RF signal strength, battery reading, computing resource monitoring, and sensor data reading. The technology also provides analytic and diagnostic intelligence at the intelligent transceiver, enhancing the IEEE 1451.x-based standard for sensor data management and distributions, as well as providing appropriate communications protocols to enable complex interactions to support timely and high-quality flow of information among the system elements.
NASA Astrophysics Data System (ADS)
Ren, Y. J.; Zhu, J. G.; Yang, X. Y.; Ye, S. H.
2006-10-01
The Virtex-II Pro FPGA is applied to the vision sensor tracking system of the IRB2400 robot. The hardware platform, which undertakes the task of improving SNR and compressing data, is constructed using the high-speed image processing capability of the FPGA. The lower-level image-processing algorithm is realized by combining the FPGA fabric and the embedded CPU. Image processing is accelerated by this combination of FPGA and CPU, and the use of the embedded CPU makes it easy to realize the interface logic design. Key techniques such as the read-write process, template matching, and convolution are presented, and several modules are simulated. Finally, a comparison is carried out among implementations based on this design, a PC, and a DSP. Because the core of the high-speed image processing system is an FPGA whose functionality can be conveniently updated, the measurement system is, to a degree, intelligent.
Surveillance and reconnaissance ground system architecture
NASA Astrophysics Data System (ADS)
Devambez, Francois
2001-12-01
Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolvability, and reduction of life-cycle cost. The general performance of the MGS is presented: types of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.
NASA Astrophysics Data System (ADS)
Everson, Jeffrey H.; Kopala, Edward W.; Lazofson, Laurence E.; Choe, Howard C.; Pomerleau, Dean A.
1995-01-01
Optical sensors are used for several ITS applications, including lateral control of vehicles, traffic sign recognition, car following, autonomous vehicle navigation, and obstacle detection. This paper treats the performance assessment of a sensor/image processor used as part of an on-board countermeasure system to prevent single-vehicle roadway departure crashes. Sufficient image contrast between objects of interest and backgrounds is an essential factor influencing overall system performance. Contrast is determined by material properties affecting reflected/radiated intensities, as well as weather and visibility conditions. This paper discusses the modeling of these parameters and characterizes the contrast performance effects due to reduced visibility. The analysis process first involves generation of inherent road/off-road contrasts, followed by weather effects as a contrast modification. The sensor is modeled as a charge-coupled device (CCD), with variable parameters. The results of the sensor/weather modeling are used to predict the performance of an in-vehicle warning system under various levels of adverse weather. Software employed in this effort was previously developed for the U.S. Air Force Wright Laboratory to determine target/background detection and recognition ranges for different sensor systems operating under various mission scenarios.
Updates to SCORPION persistent surveillance system with universal gateway
NASA Astrophysics Data System (ADS)
Coster, Michael; Chambers, Jon; Winters, Michael; Brunck, Al
2008-10-01
This paper addresses benefits derived from the universal gateway utilized in Northrop Grumman Systems Corporation's (NGSC) SCORPION, a persistent surveillance and target recognition system produced by the Xetron campus in Cincinnati, Ohio. SCORPION is currently deployed in Operations Iraqi Freedom (OIF) and Enduring Freedom (OEF). The SCORPION universal gateway is a flexible, field programmable system that provides integration of over forty Unattended Ground Sensor (UGS) types from a variety of manufacturers, multiple visible and thermal electro-optical (EO) imagers, and numerous long haul satellite and terrestrial communications links, including the Army Research Lab (ARL) Blue Radio. Xetron has been integrating best in class sensors with this universal gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control since 1998. In addition to being fed to COP systems, SCORPION data can be visualized in the Common sensor Status (CStat) graphical user interface that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This user friendly visualization enables a large amount of sensor data and imagery to be used as actionable intelligence by a minimum number of analysts.
Sensors-network and its application in the intelligent storage security
NASA Astrophysics Data System (ADS)
Zhang, Qingying; Nicolescu, Mihai; Jiang, Xia; Zhang, Ying; Yue, Weihong; Xiao, Weihong
2004-11-01
Intelligent storage systems run on different advanced technologies, such as linear layout, business intelligence, and data mining. Security, the basic requirement of a storage system, has received increasing attention with the introduction of multimedia communication technology and sensor networks. Along with the development of science and social demands, multifarious alarm systems have been designed and improved to be intelligent, modularized, and network-connected. It is of great importance to make the storage system, and furthermore the logistics system, more and more efficient and complete with modern science and technology. Diversified on-site information should be captured by different kinds of sensors. These signals are processed and communicated to the control center, which determines further actions. For fire protection, broad-spectrum gas sensors, fume sensors, flame sensors, and temperature sensors are used to capture information in their own ways. Once a fire breaks out somewhere, the sensors respond immediately to the fume, temperature, flame, and gas, and the intelligent control system starts: it passes the alert to the central unit and, at the same time, quickly sets the movable walls to work to obstruct the spread of the fire. For guarding the warehouse against theft, cut-off sensors, body sensors, photoelectric sensors, microwave sensors, and closed-circuit television, as well as electronic clocks, are available to monitor the warehouse. All of these sensors work in a networked way. The intelligent control system is built with digital circuits instead of a traditional switch-based design. The system performs well in many cases; its reliability is high and its cost is low.
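A minimal sketch of the kind of multi-sensor alarm decision described above is given below; the voting rule and all thresholds are assumptions for illustration, not the authors' design.

    # Illustrative sketch: fuse fume, temperature, flame and gas readings into one fire decision.
    def fire_alarm(fume_ppm, temp_c, flame_detected, gas_ppm):
        votes = [fume_ppm > 300, temp_c > 60, flame_detected, gas_ppm > 500]
        # Require at least two independent indications before alerting the control
        # centre and actuating the movable fire walls (assumed rule).
        return sum(votes) >= 2

    print(fire_alarm(fume_ppm=450, temp_c=72, flame_detected=False, gas_ppm=100))   # True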
Bialas, Andrzej
2010-01-01
The paper is focused on the security issues of sensors provided with processors and software and used for high-risk applications. Common IT related threats may cause serious consequences for sensor system users. To improve their robustness, sensor systems should be developed in a restricted way that would provide them with assurance. One assurance creation methodology is Common Criteria (ISO/IEC 15408) used for IT products and systems. The paper begins with a primer on the Common Criteria, and then a general security model of the intelligent sensor as an IT product is discussed. The paper presents how the security problem of the intelligent sensor is defined and solved. The contribution of the paper is to provide Common Criteria (CC) related security design patterns and to improve the effectiveness of the sensor development process. PMID:22315571
Further Structural Intelligence for Sensors Cluster Technology in Manufacturing
Mekid, Samir
2006-01-01
With ever more complex sensing and actuating tasks in manufacturing plants, intelligent sensor clusters in hybrid networks are a rapidly expanding area. They play a dominant role in many fields, from the macro to the micro scale. Global object control and the ability to self-organize into fault-tolerant and scalable systems are expected for high-level applications. In this paper, new structural concepts of intelligent sensors and networks with new intelligent agents are presented. Embedding new functionalities to dynamically manage cooperative agents for autonomous machines is a key enabling technology required in manufacturing for zero-defect production.
Ali, Abdulbaset; Hu, Bing; Ramahi, Omar M.
2015-01-01
This work presents a real-life experiment implementing an artificial intelligence model for detecting sub-millimeter cracks in metallic surfaces on a dataset obtained from a waveguide sensor loaded with metamaterial elements. Crack detection using microwave sensors is typically based on human observation of change in the sensor's signal (pattern) depicted on a high-resolution screen of the test equipment. However, as demonstrated in this work, implementing artificial intelligence to classify cracked from non-cracked surfaces has appreciable impacts in terms of sensing sensitivity, cost, and automation. Furthermore, applying artificial intelligence for post-processing the data collected from microwave sensors is a cornerstone for handheld test equipment that can outperform rack equipment with large screens and sophisticated plotting features. The proposed method was tested on a metallic plate with different cracks, and the experimental results showed good crack classification accuracy rates. PMID:25988871
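As a hedged illustration of the classification step (not the authors' model or dataset), the sketch below trains a support-vector classifier on synthetic feature vectors standing in for the waveguide sensor responses of cracked and intact surfaces.

    # Sketch of classifying cracked vs. non-cracked surfaces from sensor-response features.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_intact = rng.normal(0.0, 0.05, size=(40, 16))    # simulated "no crack" responses
    X_cracked = rng.normal(0.3, 0.05, size=(40, 16))   # simulated shifted responses
    X = np.vstack([X_intact, X_cracked])
    y = np.array([0] * 40 + [1] * 40)                  # 0 = intact, 1 = cracked

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.score(X, y))                             # training accuracy of this toy example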
Automatic building identification under bomb damage conditions
NASA Astrophysics Data System (ADS)
Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II
2009-05-01
Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
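A toy ART1-style clustering routine, sketched below under the assumption of binary template vectors and an illustrative vigilance value, shows the category-creation behaviour referred to above; it is not the authors' network or data.

    # Minimal ART1-style clustering sketch (binary inputs, vigilance test).
    import numpy as np

    def art1_cluster(patterns, rho=0.7, alpha=0.001):
        """Assign binary patterns to categories; returns (labels, category prototypes)."""
        prototypes, labels = [], []
        for p in patterns:
            p = np.asarray(p, dtype=bool)
            chosen = None
            # Rank existing categories by the ART1 choice function.
            order = sorted(range(len(prototypes)),
                           key=lambda j: -np.logical_and(p, prototypes[j]).sum()
                                         / (alpha + prototypes[j].sum()))
            for j in order:
                match = np.logical_and(p, prototypes[j]).sum() / max(p.sum(), 1)
                if match >= rho:                        # vigilance test passed: resonance
                    prototypes[j] = np.logical_and(p, prototypes[j])
                    chosen = j
                    break
            if chosen is None:                          # no resonance: create a new category
                prototypes.append(p.copy())
                chosen = len(prototypes) - 1
            labels.append(chosen)
        return labels, prototypes

    labels, _ = art1_cluster([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], rho=0.6)
    print(labels)   # -> [0, 0, 1] with this vigilance value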
Open architecture of smart sensor suites
NASA Astrophysics Data System (ADS)
Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten
2017-10-01
Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed Abdelrahman; Roger Haggard; Wagdy Mahmoud
The final goal of this project was the development of a system that is capable of controlling an industrial process effectively through the integration of information obtained through intelligent sensor fusion and intelligent control technologies. The industry of interest in this project was the metal casting industry as represented by cupola iron-melting furnaces. However, the developed technology is of generic type and hence applicable to several other industries. The system was divided into the following four major interacting components: 1. An object oriented generic architecture to integrate the developed software and hardware components 2. Generic algorithms for intelligent signal analysis and sensor and model fusion 3. Development of supervisory structure for integration of intelligent sensor fusion data into the controller 4. Hardware implementation of intelligent signal analysis and fusion algorithms
A Brief Overview of NASA Glenn Research Center Sensor and Electronics Activities
NASA Technical Reports Server (NTRS)
Hunter, Gary W.
2012-01-01
Aerospace applications require a range of sensing technologies. A variety of sensor and sensor system technologies are being developed using microfabrication and micromachining technology to form smart sensor systems and intelligent microsystems, driving system intelligence to the local (sensor) level in the form of distributed smart sensor systems. Sensor and sensor system development examples include: (1) thin-film physical sensors, (2) high temperature electronics and wireless, and (3) "lick and stick" technology. NASA GRC is a world leader in aerospace sensor technology with a broad range of development and application experience, and its core microsystems technology is applicable to a range of application environments.
Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network
2015-01-01
For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example, in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using 3 ultrasonic distance sensors based on a backpropagation neural network, together with a camera for face recognition. A 2.4 GHz video transmitter is used so that the operator/user can direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system. PMID:26089863
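A minimal sketch of the obstacle-avoidance idea, assuming three distance readings and synthetic training labels (not the authors' data), is shown below using a small backpropagation network from scikit-learn.

    # Sketch: map (left, centre, right) ultrasonic distances in cm to a steering command.
    from sklearn.neural_network import MLPClassifier

    X = [[100, 100, 100], [20, 100, 100], [100, 100, 20], [100, 15, 100]]   # placeholder data
    y = ["forward",       "turn_right",   "turn_left",    "turn_left"]      # assumed labels

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1).fit(X, y)
    print(net.predict([[25, 90, 95]]))   # likely "turn_right" for an obstacle on the left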
Network-Capable Application Process and Wireless Intelligent Sensors for ISHM
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray
2011-01-01
Intelligent sensor technology and systems are increasingly becoming attractive means to serve as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities for wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS) in order to simplify sensor setup, use, and maintenance, to automatically obtain calibration data, and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. This concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where the embedded memory is not available. An HTML-based user interface provides a visual tool to interface with those distributed sensors that a TEDS is associated with, to automate the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can achieve support for intelligent process in Integrated Systems Health Management (ISHM) for the purpose of monitoring, detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, mitigation to maintain operability, and integrated awareness of system health by the operator. It can also support local data collection and storage. This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
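To make the TEDS concept concrete, the sketch below models, in simplified form, the kind of information a TEDS carries and the fallback from an embedded TEDS to a virtual (file-based) TEDS; the field names and JSON file format are assumptions for illustration, not the IEEE 1451 binary template.

    # Simplified TEDS sketch: embedded TEDS preferred, virtual TEDS loaded from a file otherwise.
    import json
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Teds:
        manufacturer: str
        model: str
        serial_number: str
        units: str
        calibration_gain: float
        calibration_offset: float

    def load_teds(embedded: Optional[Teds], virtual_path: str) -> Teds:
        """Prefer the TEDS stored in the sensor's own memory; otherwise load a virtual TEDS file."""
        if embedded is not None:
            return embedded
        with open(virtual_path) as f:
            return Teds(**json.load(f))

    def to_engineering_units(raw_volts: float, teds: Teds) -> float:
        """Convert a raw reading to engineering units using the TEDS calibration."""
        return raw_volts * teds.calibration_gain + teds.calibration_offset

    teds = Teds("Acme", "PX-100", "0001", "kPa", 2.5, -0.1)   # hypothetical sensor
    print(to_engineering_units(1.2, teds))                     # -> 2.9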
Fuzzy logic control for camera tracking system
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant
1992-01-01
A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
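A minimal fuzzy-control sketch of the pan-command generation is given below, assuming a pixel-error input and triangular membership functions; the breakpoints and rule outputs are placeholders rather than the flight system's rule base.

    # Fuzzy sketch: fuzzify the target's horizontal pixel error and defuzzify to a pan rate.
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def pan_rate(pixel_error: float) -> float:
        left   = tri(pixel_error, -400, -200, 0)
        center = tri(pixel_error, -100, 0, 100)
        right  = tri(pixel_error, 0, 200, 400)
        # Rule outputs in deg/s: negative = pan left, positive = pan right.
        num = left * (-10.0) + center * 0.0 + right * 10.0
        den = left + center + right
        return num / den if den else 0.0

    print(round(pan_rate(150.0), 2))   # target right of centre -> positive pan rate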
Communications for unattended sensor networks
NASA Astrophysics Data System (ADS)
Nemeroff, Jay L.; Angelini, Paul; Orpilla, Mont; Garcia, Luis; DiPierro, Stefano
2004-07-01
The future model of the US Army's Future Combat Systems (FCS) and the Future Force reflects a combat force that utilizes lighter armor protection than the current standard. Survival on the future battlefield will be increased by the use of advanced situational awareness provided by unattended tactical and urban sensors that detect, identify, and track enemy targets and threats. Successful implementation of these critical sensor fields requires the development of advanced sensors, sensor and data-fusion processors, and a specialized communications network. To ensure warfighter and asset survivability, the communications must be capable of near real-time dissemination of the sensor data using robust, secure, stealthy, and jam-resistant links so that the proper and decisive action can be taken. Communications will be provided to a wide array of mission-specific sensors that are capable of processing data from acoustic, magnetic, seismic, and/or Chemical, Biological, Radiological, and Nuclear (CBRN) sensors. Other, more powerful, sensor node configurations will be capable of fusing sensor data and intelligently collecting and processing image data from infrared or visual imaging cameras. The radio waveform and networking protocols being developed under the Soldier Level Integrated Communications Environment (SLICE) Soldier Radio Waveform (SRW) and the Networked Sensors for the Future Force Advanced Technology Demonstration are part of an effort to develop a common waveform family which will operate across multiple tactical domains including dismounted soldiers, ground sensors, munitions, missiles, and robotics. These waveform technologies will ultimately be transitioned to the JTRS library, specifically the Cluster 5 requirement.
Autonomous Sensors for Large Scale Data Collection
NASA Astrophysics Data System (ADS)
Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.
2017-12-01
Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and the application of machine learning techniques for intelligent self-monitoring and data classification, achieving cost savings in manufacturing, deployment, and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture easily, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments; the limited SWaP budget and the challenging thermal environment demand the development of a new generation of instruments, and the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the ground to complement the CubeSat data.
Imaging, Health Record, and Artificial Intelligence: Hype or Hope?
Mazzanti, Marco; Shirka, Ervina; Gjergo, Hortensia; Hasimi, Endri
2018-05-10
The review is focused on "digital health", which means advanced analytics based on multi-modal data. The "Health Care Internet of Things", which uses sensors, apps, and remote monitoring could provide continuous clinical information in the cloud that enables clinicians to access the information they need to care for patients everywhere. Greater standardization of acquisition protocols will be needed to maximize the potential gains from automation and machine learning. Recent artificial intelligence applications on cardiac imaging will not be diagnosing patients and replacing doctors but will be augmenting their ability to find key relevant data they need to care for a patient and present it in a concise, easily digestible format. Risk stratification will transition from oversimplified population-based risk scores to machine learning-based metrics incorporating a large number of patient-specific clinical and imaging variables in real-time beyond the limits of human cognition. This will deliver highly accurate and individual personalized risk assessments and facilitate tailored management plans.
Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa
1995-09-01
Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attributed measurement oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture will lead to a higher-performance fusion system. However, there are a number of system and fusion architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g. variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft in the context of the mission requirements and environmental conditions.
Expanding the Role of Emergency Medical Services in Homeland Security
2013-03-01
[Abstract unavailable; the extracted front matter contains only table-of-contents fragments indicating sections on background and overview, data analysis, and an analysis and evaluation of EMS personnel acting as intelligence sensors, including prevention models.]
Vision Guided Intelligent Robot Design And Experiments
NASA Astrophysics Data System (ADS)
Slutzky, G. D.; Hall, E. L.
1988-02-01
The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.
Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis
Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin
2016-01-01
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
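The spectral-analysis step can be illustrated with the short sketch below, which estimates a breathing rate from the mean intensity of a region of interest using an FFT; the signal is synthetic and the Kinect acquisition itself is not modelled.

    # Sketch: breathing-rate estimation from a region-of-interest intensity time series.
    import numpy as np

    fs = 30.0                                   # assumed frame rate [Hz]
    t = np.arange(0, 60, 1 / fs)                # one minute of samples
    roi_mean = 0.2 * np.sin(2 * np.pi * 0.25 * t) \
               + np.random.default_rng(0).normal(0, 0.05, t.size)   # synthetic 0.25 Hz signal

    spectrum = np.abs(np.fft.rfft(roi_mean - roi_mean.mean()))
    freqs = np.fft.rfftfreq(roi_mean.size, d=1 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)        # plausible breathing band, 6-42 breaths/min
    breathing_hz = freqs[band][np.argmax(spectrum[band])]
    print(round(breathing_hz * 60, 1), "breaths per minute")   # ~15 for this synthetic signal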
Protecting Networks Via Automated Defense of Cyber Systems
2016-09-01
The work examines a concept called Automated Defense of Cyber Systems, built upon three core technological components: sensors, autonomics, and artificial intelligence. Our conclusion is that automation is the future of cyber defense, and that advances are being made in each of these areas. Subject terms: Internet of Things, autonomics, sensors, artificial intelligence, cyber defense, active cyber defense, automated indicator sharing.
Smart Distributed Sensor Fields: Algorithms for Tactical Sensors
2013-12-23
Tasks ranging from detecting, identifying, and localizing/tracking interesting events, through discarding irrelevant data, to providing actionable intelligence currently require significant human supervision. The main idea is to reduce the problem to the relevant data, and then reason intelligently over that data while retaining a view of the overall system.
Reconfigurable intelligent sensors for health monitoring: a case study of pulse oximeter sensor.
Jovanov, E; Milenkovic, A; Basham, S; Clark, D; Kelley, D
2004-01-01
Design of low-cost, miniature, lightweight, ultra low-power, intelligent sensors capable of customization and seamless integration into a body area network for health monitoring applications presents one of the most challenging tasks for system designers. To answer this challenge we propose a reconfigurable intelligent sensor platform featuring a low-power microcontroller, a low-power programmable logic device, a communication interface, and a signal conditioning circuit. The proposed solution promises a cost-effective, flexible platform that allows easy customization, run-time reconfiguration, and energy-efficient computation and communication. The development of a common platform for multiple physical sensors and a repository of both software procedures and soft intellectual property cores for hardware acceleration will increase reuse and alleviate costs of transition to a new generation of sensors. As a case study, we present an implementation of a reconfigurable pulse oximeter sensor.
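As a hedged example of the kind of computation such a platform would run for the pulse oximeter case study, the sketch below applies the classic ratio-of-ratios estimate of SpO2; the linear calibration constants are a commonly cited approximation, not the authors' calibration.

    # Sketch: SpO2 from the AC/DC components of the red and infrared photoplethysmograms.
    def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
        r = (red_ac / red_dc) / (ir_ac / ir_dc)   # "ratio of ratios"
        return 110.0 - 25.0 * r                   # commonly cited empirical approximation

    print(round(spo2_estimate(red_ac=0.013, red_dc=1.0, ir_ac=0.03, ir_dc=1.2), 1))   # -> 97.0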
Commercial Eyes in Space: Implications for U.S. Military Operations in 2030
2008-03-01
[Garbled extract; the recoverable content includes a figure captioned "Figure 2: Notional Satellite Remote Sensing Flow", the assessment that the Government will most likely continue to rely on commercial sensors to supplement national intelligence, and a discussion of potential counter-ISR strategies for 2030 spanning the image request, tasking, satellite tasking, and image processing and storage steps of the commercial imagery chain.]
Neurovision processor for designing intelligent sensors
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1992-03-01
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.
Oh, Sang-Il; Kang, Hang-Bong
2017-01-22
To understand driving environments effectively, it is important to achieve accurate detection and classification of objects detected by sensor-based intelligent vehicle systems, which are significantly important tasks. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing multiple sensor information into a key component of the representation and perception processes is necessary. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset.
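A simplified sketch of decision-level fusion, assuming per-sensor class-score vectors and illustrative fusion weights (not the paper's learned parameters), is given below.

    # Sketch: combine camera and LiDAR classifier scores for one detected region.
    import numpy as np

    CLASSES = ["car", "pedestrian", "cyclist"]

    def fuse(camera_scores, lidar_scores, w_cam=0.6, w_lidar=0.4):
        """Weighted decision-level fusion of two classifiers' class scores."""
        fused = w_cam * np.asarray(camera_scores) + w_lidar * np.asarray(lidar_scores)
        return CLASSES[int(np.argmax(fused))], fused

    label, scores = fuse([0.6, 0.3, 0.1], [0.3, 0.6, 0.1])
    print(label)   # -> "car" with these example scores and weights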
Review on the Traction System Sensor Technology of a Rail Transit Train.
Feng, Jianghua; Xu, Junfeng; Liao, Wu; Liu, Yong
2017-06-11
The development of high-speed intelligent rail transit has increased the number of sensors applied on trains. These play an important role in train state control and monitoring. These sensors generally work in a severe environment, so the key problem for sensor data acquisition is to ensure data accuracy and reliability. In this paper, we follow the sequence of sensor signal flow, present sensor signal sensing technology, sensor data acquisition, and processing technology, as well as sensor fault diagnosis technology based on the voltage, current, speed, and temperature sensors which are commonly used in train traction systems. Finally, intelligent sensors and future research directions of rail transit train sensors are discussed.
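The fault-diagnosis idea can be illustrated with simple range and rate-of-change checks, as in the sketch below; the limits and the DC-link voltage example are assumptions for illustration, not values from a specific traction system.

    # Sketch: flag a sensor sample that leaves its physical range or jumps implausibly fast.
    def check_sample(value, previous, lo, hi, max_step):
        if not (lo <= value <= hi):
            return "range_fault"
        if previous is not None and abs(value - previous) > max_step:
            return "rate_fault"
        return "ok"

    # DC-link voltage samples [V] with an implausible jump injected.
    samples = [748.0, 750.5, 751.0, 1210.0, 752.0]
    prev = None
    for v in samples:
        print(v, check_sample(v, prev, lo=0.0, hi=1000.0, max_step=50.0))
        prev = v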
Portable Imagery Quality Assessment Test Field for UAV Sensors
NASA Astrophysics Data System (ADS)
Dąbrowski, R.; Jenerowicz, A.
2015-08-01
Nowadays, imagery data acquired from UAV sensors are the main source of data used in various remote sensing applications, photogrammetry projects, and imagery intelligence (IMINT), as well as in other tasks such as decision support. Therefore, quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments which, when combined, allow for determining the radiometric, spectral, and spatial resolution of images acquired from UAVs. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.
Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur
2012-01-01
This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogeneous dual-core platform. To this end, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956
Application of sensor networks to intelligent transportation systems.
DOT National Transportation Integrated Search
2009-12-01
The objective of the research performed is the application of wireless sensor networks to intelligent transportation infrastructures, with the aim of increasing their dependability and improving the efficacy of data collection and utilization. Exampl...
Intelligent modular star and target tracker: a new generation of attitude sensors
NASA Astrophysics Data System (ADS)
Schmidt, Uwe; Strobel, Rainer; Wunder, Dietmar; Graf, Eberhart
2018-04-01
This paper, "Intelligent modular star and target tracker: a new generation of attitude sensors," was presented as part of International Conference on Space Optics—ICSO 1997, held in Toulouse, France.
Fusion of imaging and nonimaging data for surveillance aircraft
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Hyperspectral Sensors Final Report CRADA No. TC02173.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priest, R. E.; Sauvageau, J. E.
This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Science Applications International Corporation (SAIC), National Security Space Operations/SRBU, to develop longwave infrared (LWIR) hyperspectral imaging (HSI) sensors for airborne, and potentially ground and space, platforms. LLNL has designed and developed LWIR HSI sensors since 1995. The current generation of these sensors has applications to users within the U.S. Department of Defense and the Intelligence Community. User needs are for multiple copies provided by commercial industry. To gain the most benefit from the U.S. Government's prior investments in LWIR HSI sensors developed at LLNL, transfer of technology and know-how from LLNL HSI experts to commercial industry was needed. The overarching purpose of the CRADA project was to facilitate the transfer of the necessary technology from LLNL to SAIC, thereby allowing the U.S. Government to procure LWIR HSI sensors from this company.
Demonstration of plant fluorescence by imaging technique and Intelligent FluoroSensor
NASA Astrophysics Data System (ADS)
Lenk, Sándor; Gádoros, Patrik; Kocsányi, László; Barócsi, Attila
2015-10-01
Photosynthesis is a process that converts carbon dioxide into organic compounds, especially into sugars, using the energy of sunlight. The absorbed light energy is used mainly for photosynthesis initiated at the reaction centers of chlorophyll-protein complexes, but part of it is lost as heat and chlorophyll fluorescence. Therefore, the measurement of the latter can be used to estimate the photosynthetic activity. The basic method, in which intact leaves are illuminated with strong light after a dark adaptation of at least 20 minutes, resulting in a transient change of the fluorescence emission of the fluorophore chlorophyll-a known as the 'Kautsky effect', is demonstrated by an imaging setup. The experimental kit includes a high-radiance blue LED and a CCD camera (or a human eye) equipped with a red transmittance filter to detect the changing fluorescence radiation. However, for the measurement of several fluorescence parameters describing the plant physiological processes in detail, the variation of several excitation light sources and an adequate detection method are needed. Several fluorescence induction protocols (e.g. traditional Kautsky, pulse amplitude modulated and excitation kinetic) are realized in the Intelligent FluoroSensor instrument. Using it, students are able to measure different plant fluorescence induction curves, quantitatively determine characteristic parameters, and qualitatively interpret the measured signals.
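One commonly derived fluorescence parameter, the maximum quantum yield of photosystem II, Fv/Fm = (Fm - F0)/Fm, can be computed from a dark-adapted induction curve as in the sketch below; the F0 and Fm values are placeholders, and the instrument's own parameter set may differ.

    # Sketch: maximum PSII quantum yield from minimal (F0) and maximal (Fm) fluorescence.
    def fv_fm(f0: float, fm: float) -> float:
        return (fm - f0) / fm

    print(round(fv_fm(f0=0.35, fm=1.75), 2))   # ~0.8, typical for a healthy dark-adapted leaf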
Intelligent multi-sensor integrations
NASA Technical Reports Server (NTRS)
Volz, Richard A.; Jain, Ramesh; Weymouth, Terry
1989-01-01
Growth in the intelligence of space systems requires the use and integration of data from multiple sensors. Generic tools are being developed for extracting and integrating information obtained from multiple sources. The full spectrum is addressed for issues ranging from data acquisition, to characterization of sensor data, to adaptive systems for utilizing the data. In particular, there are three major aspects to the project, multisensor processing, an adaptive approach to object recognition, and distributed sensor system integration.
Sparse array of RF sensors for sensing through the wall
NASA Astrophysics Data System (ADS)
Innocenti, Roberto
2007-04-01
In support of the U.S. Army's need for intelligence on the configuration, content, and human presence inside enclosed areas (buildings), the Army Research Laboratory is currently engaged in an effort to evaluate RF sensors for the "Sensing Through The Wall" (STTW) initiative. Detection and location of enemy combatants in urban settings pose significant technical and operational challenges. This paper shows the potential of handheld RF sensors, with the possible assistance of additional sources such as Unmanned Aerial Vehicles (UAVs), Unattended Ground Sensors (UGS), etc., to fulfill this role. In this study we examine both monostatic and multistatic combinations of sensors, especially in configurations that allow the capture of images from different angles, and we demonstrate their capability to provide comprehensive information on a variety of buildings. Finally, we explore the limitations of this type of sensor arrangement vis-a-vis the required precision in the knowledge of the position and timing of the RF sensors. Simulation results are provided to show the potential of this type of sensor arrangement in such a difficult environment.
Intelligent On-Board Processing in the Sensor Web
NASA Astrophysics Data System (ADS)
Tanner, S.
2005-12-01
Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by sensors and their on-board processes. The two primary research areas for this project are (1) the on-board processing and communications framework itself, and (2) data mining algorithms targeted to the needs and constraints of the on-board environment. The team is leveraging its experience in on-board processing, data mining, custom data processing, and sensor network design. Several unique UAH-developed technologies are employed in the AODP project, including EVE, an EnVironmEnt for on-board processing, and the data mining tools included in the Algorithm Development and Mining (ADaM) toolkit.
Optical gateway for intelligent buildings: a new open-up window to the optical fibre sensors market?
NASA Astrophysics Data System (ADS)
Fernandez-Valdivielso, Carlos; Matias, Ignacio R.; Arregui, Francisco J.; Bariain, Candido; Lopez-Amo, Manuel
2004-06-01
This paper presents the first optical fiber sensor gateway for integrating these special measurement devices into Home Automation Systems, specifically in those buildings that use the KNX European Intelligent Buildings Standard.
Standards-based sensor interoperability and networking SensorWeb: an overview
NASA Astrophysics Data System (ADS)
Bolling, Sam
2012-06-01
The War fighter lacks a unified Intelligence, Surveillance, and Reconnaissance (ISR) environment to conduct mission planning, command and control (C2), tasking, collection, exploitation, processing, and data discovery of disparate sensor data across the ISR Enterprise. Legacy sensors and applications are not standardized or integrated for assured, universal access. Existing tasking and collection capabilities are not unified across the enterprise, inhibiting robust C2 of ISR including near-real time, cross-cueing operations. To address these critical needs, the National Measurement and Signature Intelligence (MASINT) Office (NMO), and partnering Combatant Commands and Intelligence Agencies are developing SensorWeb, an architecture that harmonizes heterogeneous sensor data to a common standard for users to discover, access, observe, subscribe to and task sensors. The SensorWeb initiative long term goal is to establish an open commercial standards-based, service-oriented framework to facilitate plug and play sensors. The current development effort will produce non-proprietary deliverables, intended as a Government off the Shelf (GOTS) solution to address the U.S. and Coalition nations' inability to quickly and reliably detect, identify, map, track, and fully understand security threats and operational activities.
Sensor Needs for Control and Health Management of Intelligent Aircraft Engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay; Hunter, Gary W.; Guo, Ten-Huei; Semega, Kenneth J.
2004-01-01
NASA and the U.S. Department of Defense are conducting programs which support the future vision of "intelligent" aircraft engines for enhancing the affordability, performance, operability, safety, and reliability of aircraft propulsion systems. Intelligent engines will have advanced control and health management capabilities enabling these engines to be self-diagnostic, self-prognostic, and adaptive to optimize performance based upon the current condition of the engine or the current mission of the vehicle. Sensors are a critical technology necessary to enable the intelligent engine vision as they are relied upon to accurately collect the data required for engine control and health management. This paper reviews the anticipated sensor requirements to support the future vision of intelligent engines from a control and health management perspective. Propulsion control and health management technologies are discussed in the broad areas of active component controls, propulsion health management and distributed controls. In each of these three areas individual technologies will be described, input parameters necessary for control feedback or health management will be discussed, and sensor performance specifications for measuring these parameters will be summarized.
Kim, Dae-Hee; Choi, Jae-Hun; Lim, Myung-Eun; Park, Soo-Jun
2008-01-01
This paper suggests a method of correcting the distance between an ambient intelligence display and a user based on linear regression and a smoothing method, by which the distance information for a user approaching the display can be accurately output even in unanticipated conditions, using a passive infrared (PIR) sensor and an ultrasonic device. The developed system consists of an ambient intelligence display, an ultrasonic transmitter, and a sensor gateway. Each module communicates with the others through RF (radio frequency) communication. The ambient intelligence display includes an ultrasonic receiver and a PIR sensor for motion detection. In particular, the system dynamically selects and applies algorithms such as smoothing or linear regression to the current input data through a judgment process that uses the previous reliable data stored in a queue. In addition, we implemented GUI software in Java for real-time location tracking and the ambient intelligence display.
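As an editorial illustration of the queue-based selection between smoothing and linear regression described in the abstract above, the following Python sketch shows one plausible form of that logic. The window length, the outlier threshold, and the example readings are assumptions for demonstration, not values taken from the paper.

from collections import deque
import statistics

class DistanceCorrector:
    """Illustrative sketch: correct ultrasonic distance readings by either
    smoothing (when the new sample agrees with recent history) or a
    linear-regression extrapolation (when it deviates sharply)."""

    def __init__(self, window=8, deviation_cm=30.0):
        self.history = deque(maxlen=window)   # recent reliable distances (cm)
        self.deviation_cm = deviation_cm      # assumed outlier threshold

    def _regress(self):
        # Fit distance = a*t + b over the stored window and predict one step ahead.
        n = len(self.history)
        xs = range(n)
        x_mean = (n - 1) / 2.0
        y_mean = statistics.fmean(self.history)
        num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, self.history))
        den = sum((x - x_mean) ** 2 for x in xs) or 1.0
        slope = num / den
        return y_mean + slope * (n - x_mean)

    def update(self, raw_cm):
        if len(self.history) < 2:
            corrected = raw_cm
        elif abs(raw_cm - self.history[-1]) <= self.deviation_cm:
            # Plausible sample: smooth it against the recent window.
            corrected = statistics.fmean(list(self.history) + [raw_cm])
        else:
            # Implausible jump (e.g. ultrasonic dropout): trust the trend instead.
            corrected = self._regress()
        self.history.append(corrected)
        return corrected

corrector = DistanceCorrector()
for reading in [210, 205, 202, 650, 195, 190]:   # 650 cm is a spurious echo
    print(round(corrector.update(reading), 1))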
Sensor and Actuator Needs for More Intelligent Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schadow, Klaus; Horn, Wolfgang; Pfoertner, Hugo; Stiharu, Ion
2010-01-01
This paper provides an overview of the controls and diagnostics technologies that are seen as critical for more intelligent gas turbine engines (GTE), with an emphasis on the sensor and actuator technologies that need to be developed for the controls and diagnostics implementation. The objective of the paper is to help the "Customers" of advanced technologies, defense acquisition and aerospace research agencies, understand the state of the art of intelligent GTE technologies, and to help the "Researchers" and "Technology Developers" for GTE sensors and actuators identify what technologies need to be developed to enable the "Intelligent GTE" concepts and focus their research efforts on closing the technology gap. To keep the effort manageable, the focus of the paper is on "On-Board Intelligence" to enable safe and efficient operation of the engine over its lifetime, with an emphasis on gas path performance
Hernandez, Wilmar
2007-01-01
In this paper a survey of recent applications of optimal signal processing techniques to improve the performance of mechanical sensors is made. A comparison between classical filters and optimal filters for automotive sensors is presented, and the current state of the art of applying robust and optimal control and signal processing techniques to the design of the intelligent (or smart) sensors that today's cars need is presented through several experimental results, which show that the fusion of intelligent sensors and optimal signal processing techniques is the clear way to go. However, the switch between the traditional methods of designing automotive sensors and the new ones cannot be done overnight, because there are some open research issues that have to be solved. This paper draws attention to one of these open research issues and tries to arouse researchers' interest in the fusion of intelligent sensors and optimal signal processing techniques.
Implementation of Integrated System Fault Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark
2008-01-01
Fault management supports the rocket engine test mission with highly reliable and accurate measurements while improving availability and lifecycle costs. Core elements: architecture, taxonomy, and ontology (ATO) for DIaK management; intelligent sensor processes; intelligent element processes; intelligent controllers; intelligent subsystem processes; intelligent system processes; and intelligent component processes.
Data analysis and integration of environmental sensors to meet human needs
NASA Astrophysics Data System (ADS)
Santamaria, Amilcare Francesco; De Rango, Floriano; Barletta, Domenico; Falbo, Domenico; Imbrogno, Alessandro
2014-05-01
Nowadays one of the main tasks of technology is to make people's lives simpler and easier. Ambient intelligence is an emerging discipline that brings intelligence to environments, making them sensitive to us. This discipline has developed following the spread of sensor devices, sensor networks, pervasive computing and artificial intelligence. In this work, we attempt to enhance the Internet of Things (IoT) with intelligence and explore various interactions between human beings and the environments they live in. In particular, the core of the system is an automation system made up of a domotic control unit and several sensors installed in the environment. The task of the sensors is to collect information from the environment and send it to the control unit. Once the information is collected, the core combines it in order to infer the most accurate human needs. The knowledge of human needs and the current environment status compose the inputs of the intelligence block, whose main goal is to find the right automations to satisfy human needs in real time. The system also provides a speech recognition service which allows users to interact with the system by voice, so human speech can be considered an additional input for smart automatisms.
NASA Astrophysics Data System (ADS)
Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.
2017-03-01
Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN is transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, WSNs process a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. With respect to battery life constraints, image compression together with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmission of data must be secured through a process known as encryption. As a result, there has been intense demand for decades for schemes that are energy efficient as well as highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform and then shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained after substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the proposed scheme.
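The permutation and XOR stages described above rely on chaotic maps whose initial conditions are derived from a hash of the plaintext. The Python sketch below illustrates only that general idea with a piece-wise linear chaotic map (PWLCM) driving a byte keystream; the control parameter, the hash-to-seed mapping, and the byte quantisation are assumptions and do not reproduce the paper's full scheme (wavelet compression, the Nonlinear Chaotic Algorithm, the Intertwining Logistic map, or Hussain's S-box).

import hashlib

def pwlcm(x, p):
    # One iteration of the piece-wise linear chaotic map on (0, 1).
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)

def keystream(image_bytes, length, p=0.37):
    """Illustrative sketch: derive the PWLCM initial condition from a hash of
    the plaintext (as the abstract describes) and emit a byte keystream."""
    digest = hashlib.sha256(image_bytes).digest()
    x = (int.from_bytes(digest[:8], "big") % (10**8)) / 10**8 or 0.123456
    out = bytearray()
    for _ in range(length):
        x = pwlcm(x, p)
        out.append(int(x * 255) & 0xFF)
    return bytes(out)

plain = bytes(range(16))                                 # stand-in for compressed image data
ks = keystream(plain, len(plain))
cipher = bytes(a ^ b for a, b in zip(plain, ks))         # XOR stage of the scheme
print(cipher.hex())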
Batchu, S; Narasimhachar, H; Mayeda, J C; Hall, T; Lopez, J; Nguyen, T; Banister, R E; Lie, D Y C
2017-07-01
Doppler-based non-contact vital signs (NCVS) sensors can monitor heart rates, respiration rates, and motions of patients without physically touching them. We have developed a novel single-board Doppler-based phased-array antenna NCVS biosensor system that can perform robust overnight continuous NCVS monitoring with intelligent automatic subject tracking and optimal beam steering algorithms. Our NCVS sensor achieved overnight continuous vital signs monitoring with an impressive heart-rate monitoring accuracy of over 94% (i.e., within ±5 Beats-Per-Minute vs. a reference sensor), analyzed from over 400,000 data points collected during each overnight monitoring period of ~ 6 hours at a distance of 1.75 meters. The data suggests our intelligent phased-array NCVS sensor can be very attractive for continuous monitoring of low-acuity patients.
Affordable and personalized lighting using inverse modeling and virtual sensors
NASA Astrophysics Data System (ADS)
Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney
2014-03-01
Wireless sensor networks (WSNs) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are being increasingly integrated into state-of-the-art intelligent lighting systems. In the future these systems will enable participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires approximately 60% fewer sensor deployments compared to current commercial systems. The reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piece-wise linear regression of indoor light distribution. This deterministic, data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads (quadrillion BTU) of energy nationwide.
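To illustrate the idea of a "virtual sensor", the sketch below fits an ordinary least-squares inverse model that predicts the light level at an unmonitored workstation from a few real photo-sensors plus solar geometry. The features, the synthetic data, and the use of plain least squares (rather than the paper's piece-wise linear regression with sky-condition terms) are assumptions for demonstration.

import numpy as np

# Illustrative "virtual photo-sensor": predict workstation lux from a few real
# sensors plus solar elevation using ordinary least squares on synthetic data.
rng = np.random.default_rng(0)
n = 200
real_sensors = rng.uniform(100, 800, size=(n, 3))        # lux at 3 monitored points
solar_elevation = rng.uniform(0, 60, size=(n, 1))        # degrees
X = np.hstack([real_sensors, solar_elevation, np.ones((n, 1))])
true_w = np.array([0.4, 0.2, 0.1, 2.5, 30.0])
y = X @ true_w + rng.normal(0, 10, size=n)               # "virtual" sensor target

w, *_ = np.linalg.lstsq(X, y, rcond=None)                # fit the inverse model
x_new = np.array([500.0, 350.0, 220.0, 42.0, 1.0])
print("predicted lux at unmonitored workstation:", round(float(x_new @ w), 1))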
Intelligent correction of laser beam propagation through turbulent media using adaptive optics
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2014-10-01
Adaptive optics methods have long been used by researchers in the astronomy field to retrieve correct images of celestial bodies. The approach is to use a deformable mirror combined with Shack-Hartmann sensors to correct the slightly distorted image when it propagates through the earth's atmospheric boundary layer, which can be viewed as adding relatively weak distortion in the last stage of propagation. However, the same strategy can't be easily applied to correct images propagating along a horizontal deep-turbulence path. In fact, when turbulence levels become very strong (Cn^2 > 10^-13 m^-2/3), limited improvements have been made in correcting the heavily distorted images. We propose a method that reconstructs the light field that reaches the camera, which then provides information for controlling a deformable mirror. An intelligent algorithm is applied that provides significant improvement in correcting images. In our work, the light field reconstruction has been achieved with a newly designed modified plenoptic camera. As a result, by actively intervening with the coherent illumination beam, or by giving it various specific pre-distortions, a better (less turbulence-affected) image can be obtained. This strategy can also be expanded to much more general applications such as correcting laser propagation through random media and can also help to improve designs in free-space optical communication systems.
Design and implementation of green intelligent lights based on the ZigBee
NASA Astrophysics Data System (ADS)
Gan, Yong; Jia, Chunli; Zou, Dongyao; Yang, Jiajia; Guo, Qianqian
2013-03-01
Noting the low degree of intelligence of traditional lighting control methods, this paper uses a single-chip microcomputer as the control core, pyroelectric infrared technology to detect the presence of the human body, and light sensors to sense the light intensity; the interface uses an infrared sensor module, a photosensitive sensor module and a relay module to transmit the signals over a ZigBee wireless network. The main function of the design is to let the lighting intelligently adjust its brightness according to the indoor light intensity when people are present, and turn off the light when people leave. The circuit and program design of this system is flexible, and the system achieves the effect of intelligent, energy-saving control.
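A minimal sketch of the occupancy-and-daylight control rule described above is given below in Python. The lux thresholds and the dimming law are illustrative assumptions; in the actual design the decisions are made on the single-chip microcomputer from the PIR and photosensitive modules over ZigBee.

LUX_DARK = 150        # assumed "too dark" threshold
LUX_BRIGHT = 500      # assumed "bright enough" threshold

def lamp_duty_cycle(occupied: bool, ambient_lux: float) -> float:
    """Return lamp duty cycle in [0, 1] from occupancy and ambient light."""
    if not occupied:
        return 0.0                       # nobody present: light off
    if ambient_lux >= LUX_BRIGHT:
        return 0.0                       # enough daylight
    if ambient_lux <= LUX_DARK:
        return 1.0                       # full brightness
    # Linearly dim between the two thresholds.
    return (LUX_BRIGHT - ambient_lux) / (LUX_BRIGHT - LUX_DARK)

print(lamp_duty_cycle(True, 300))   # partially dimmed
print(lamp_duty_cycle(False, 50))   # off when unoccupied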
Range Image Processing for Local Navigation of an Autonomous Land Vehicle.
1986-09-01
Potential applications include long-term exploration missions on the surfaces of planets that mankind may wish to investigate; the relevant technologies include artificial intelligence programming, walking technology, and vision sensors, to name but a few. The purpose of this thesis is to investigate, by simulation, range image processing for local navigation of an autonomous land vehicle, including the methodology for displaying the symbolic information generated by the simulation.
Sensor Technologies for Intelligent Transportation Systems
Guerrero-Ibáñez, Juan; Zeadally, Sherali
2018-01-01
Modern society faces serious problems with transportation systems, including but not limited to traffic congestion, safety, and pollution. Information communication technologies have gained increasing attention and importance in modern transportation systems. Automotive manufacturers are developing in-vehicle sensors and their applications in different areas including safety, traffic management, and infotainment. Government institutions are implementing roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. By seamlessly integrating vehicles and sensing devices, their sensing and communication capabilities can be leveraged to achieve smart and intelligent transportation systems. We discuss how sensor technology can be integrated with the transportation infrastructure to achieve a sustainable Intelligent Transportation System (ITS) and how safety, traffic control and infotainment applications can benefit from multiple sensors deployed in different elements of an ITS. Finally, we discuss some of the challenges that need to be addressed to enable a fully operational and cooperative ITS environment. PMID:29659524
Bialas, Andrzej
2011-01-01
Intelligent sensors experience security problems very similar to those inherent to other kinds of IT products or systems. Assurance methodologies for creating these products or systems, such as the Common Criteria (ISO/IEC 15408), can be used to improve the robustness of sensor systems in high-risk environments. The paper presents the background and results of previous research on pattern-based security specifications and introduces a new ontological approach. The elaborated ontology and knowledge base were validated on the IT security development process using a sensor example. The contribution of the paper concerns the application of the knowledge engineering methodology to the previously developed Common Criteria-compliant and pattern-based method for intelligent sensor security development. The issue presented in the paper has broader significance in that it can address information security problems in many application domains.
Posturing Tactical ISR Beyond The Umbilical Cord
2017-02-03
In addition to intelligence sensors, it carries a lethal payload of ordnance for strike and/or close air support missions, a capability that has been widely discussed in world media in connection with the MQ-9. The situational awareness that visual and signals intelligence sensors provide is a force multiplier that significantly enhances mission success.
Capturing and Modeling Domain Knowledge Using Natural Language Processing Techniques
2005-06-01
To draw value out of the flood of information, the military has to create new ways of processing sensor and intelligence information and of providing the results to commanders who must make timely operational decisions.
New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems
NASA Astrophysics Data System (ADS)
Eckardt, Andreas; Börner, Anko; Lehmann, Frank
2007-10-01
The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. The technology changes in the development of detectors, as well as the significant change in manufacturing accuracy, in combination with engineering research, define the next generation of spaceborne sensor systems focusing on Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments is feasible for high-resolution sensor systems with respect to geometry and radiometry and for data products such as 3D virtual reality. Systemic approaches are essential for such designs of complex sensor systems for dedicated tasks. The system theory of the instrument inside a simulated environment is the beginning of the optimization process for the optical, mechanical and electrical designs. Single modules and the entire system have to be calibrated and verified. Suitable procedures must be defined at component, module and system level for the assembly, test and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.
Cloud screening Coastal Zone Color Scanner images using channel 5
NASA Technical Reports Server (NTRS)
Eckstein, B. A.; Simpson, J. J.
1991-01-01
Clouds are removed from Coastal Zone Color Scanner (CZCS) data using channel 5. Instrumentation problems require pre-processing of channel 5 before an intelligent cloud-screening algorithm can be used. For example, at intervals of about 16 lines, the sensor records anomalously low radiances. Moreover, the calibration equation yields negative radiances when the sensor records zero counts, and pixels corrupted by electronic overshoot must also be excluded. The remaining pixels may then be used in conjunction with the procedure of Simpson and Humphrey to determine the CZCS cloud mask. These results plus in situ observations of phytoplankton pigment concentration show that pre-processing and proper cloud-screening of CZCS data are necessary for accurate satellite-derived pigment concentrations. This is especially true in the coastal margins, where pigment content is high and image distortion associated with electronic overshoot is also present. The pre-processing algorithm is critical to obtaining accurate global estimates of pigment from spacecraft data.
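The pre-processing steps described above (discarding the anomalous scan lines, negative calibrated radiances, and overshoot-corrupted pixels) can be expressed as a simple mask. The Python sketch below is an editorial illustration; the line spacing, the overshoot test, and the synthetic data are assumptions, not the CZCS calibration details.

import numpy as np

def channel5_mask(counts, radiance, line_period=16):
    """Illustrative sketch: keep only channel-5 pixels that are not on the
    anomalously low scan lines, have positive calibrated radiance, and are not
    flagged by a simple placeholder overshoot test."""
    valid = np.ones(counts.shape, dtype=bool)
    valid[::line_period, :] = False                                # drop anomalous lines
    valid &= radiance > 0.0                                        # drop negative radiances
    overshoot = np.zeros_like(valid)
    overshoot[:, 1:] = np.diff(counts.astype(int), axis=1) < -50   # assumed overshoot test
    valid &= ~overshoot
    return valid

counts = np.random.default_rng(1).integers(0, 255, size=(32, 32))
radiance = 0.1 * counts - 2.0
print(channel5_mask(counts, radiance).sum(), "usable pixels")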
Lampoltshammer, Thomas J.; de Freitas, Edison Pignaton; Nowotny, Thomas; Plank, Stefan; da Costa, João Paulo Carvalho Lustosa; Larsson, Tony; Heistracher, Thomas
2014-01-01
The percentage of elderly people in European countries is increasing. This conjuncture affects socio-economic structures and creates demands for resourceful solutions, such as Ambient Assisted Living (AAL), which is a possible methodology to foster health care for elderly people. In this context, sensor-based devices play a leading role in surveying, e.g., the health conditions of elderly people, to alert care personnel in case of an incident. However, the adoption of such devices strongly depends on the comfort of wearing them. In most cases, the bottleneck is the battery lifetime, which impacts the effectiveness of the system. In this paper we propose an approach to reduce the energy consumption of sensors by use of local sensor intelligence. By increasing the intelligence of the sensor node, a substantial decrease in the necessary communication payload can be achieved. The results show a significant potential to preserve energy and decrease the actual size of the sensor device units. PMID:24618777
NASA Astrophysics Data System (ADS)
Gregorio, Massimo De
In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view and rather robust in performance. The system works on different scenarios and in difficult lighting conditions.
Rivera, José; Carrillo, Mariano; Chacón, Mario; Herrera, Gilberto; Bojorquez, Gilberto
2007-01-01
The development of smart sensors involves the design of reconfigurable systems capable of working with different input sensors. Reconfigurable systems ideally should spend the least possible amount of time on their calibration. An autocalibration algorithm for intelligent sensors should be able to fix major problems such as offset, variation of gain and lack of linearity as accurately as possible. This paper describes a new autocalibration methodology for nonlinear intelligent sensors based on artificial neural networks (ANN). The methodology involves analysis of several network topologies and training algorithms. The proposed method was compared against the piecewise and polynomial linearization methods. Method comparison was carried out using different numbers of calibration points and several nonlinearity levels of the input signal. This paper also shows that the proposed method turned out to have better overall accuracy than the other two methods. Besides the experimental results and analysis of the complete study, the paper describes the implementation of the ANN in a microcontroller unit (MCU). In order to illustrate the method's capability to build autocalibration and reconfigurable systems, a temperature measurement system was designed and tested. The proposed method is an improvement over classic autocalibration methodologies because it impacts the design process of intelligent sensors, autocalibration methodologies and their associated factors, like time and cost.
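As an illustration of ANN-based autocalibration, the following Python sketch trains a small one-hidden-layer network to invert a synthetic nonlinear sensor response from a handful of calibration points. The network size, training scheme, and sensor curve are assumptions; the paper itself compares several topologies and training algorithms and targets an MCU implementation.

import numpy as np

# Tiny one-hidden-layer network learns the inverse of a nonlinear sensor response
# (offset + gain + curvature) from 21 calibration points, by full-batch gradient descent.
rng = np.random.default_rng(0)
true_temp = np.linspace(0.0, 100.0, 21).reshape(-1, 1)              # calibration points (deg C)
raw = 0.8 * true_temp + 0.002 * true_temp**2 + 3.0                  # synthetic sensor output
x = (raw - raw.mean()) / raw.std()                                   # normalised input
y = true_temp / 100.0                                                # normalised target

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

test_raw = np.array([[0.8 * 37 + 0.002 * 37**2 + 3.0]])              # raw reading at 37 deg C
t = (test_raw - raw.mean()) / raw.std()
print("calibrated reading:", ((np.tanh(t @ W1 + b1) @ W2 + b2) * 100).item())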
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's realtime decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and also pipelined to increase the image processing throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro
2017-02-11
Object detection and classification have countless applications in human-robot interacting systems. It is a necessary skill for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.
Comparison of turbulence mitigation algorithms
NASA Astrophysics Data System (ADS)
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Greenhouse intelligent control system based on microcontroller
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-04-01
As one of the hallmarks of agricultural modernization, the intelligent greenhouse has the advantages of high yield, excellent quality, no pollution and continuous planting. Taking the AT89S52 microcontroller as the main controller, the greenhouse intelligent control system uses a soil moisture sensor, temperature and humidity sensors, a light intensity sensor and a CO2 concentration sensor to collect measurements and display them on a 12864 LCD screen in real time. Meanwhile, climate parameter values can be set manually online. The collected measured values are compared with the set standard values, and then the lighting, ventilation fans, warming lamps, water pumps and other facilities automatically start to adjust greenhouse climate parameters such as light intensity, CO2 concentration, temperature, air humidity and soil moisture. In this way, the state of the environment in the greenhouse stabilizes and the crop grows in a suitable environment.
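The compare-and-actuate behaviour described above can be summarised in a few lines. The following Python sketch is illustrative only; the setpoints and actuator names are assumptions, whereas the real system uses configured standard values and relay-driven equipment on the AT89S52.

SETPOINTS = {
    "temperature_c": (18.0, 28.0),     # assumed acceptable range
    "soil_moisture_pct": (30.0, 60.0),
    "light_lux": (5000.0, None),       # only a lower bound matters here
}

def control_actions(readings):
    """Compare measurements with setpoints and list the actuators to switch on."""
    actions = []
    lo, hi = SETPOINTS["temperature_c"]
    if readings["temperature_c"] < lo:
        actions.append("warming_lamps_on")
    elif readings["temperature_c"] > hi:
        actions.append("ventilation_fans_on")
    if readings["soil_moisture_pct"] < SETPOINTS["soil_moisture_pct"][0]:
        actions.append("water_pump_on")
    if readings["light_lux"] < SETPOINTS["light_lux"][0]:
        actions.append("supplementary_lighting_on")
    return actions

print(control_actions({"temperature_c": 16.0, "soil_moisture_pct": 25.0, "light_lux": 3000.0}))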
Analysis of the frontier technology of agricultural IoT and its predication research
NASA Astrophysics Data System (ADS)
Han, Shuqing; Zhang, Jianhua; Zhu, Mengshuai; Wu, Jianzhai; Shen, Chen; Kong, Fantao
2017-09-01
The agricultural IoT (Internet of Things) is developing rapidly. Nanotechnology, biotechnology and optoelectronic technology have been successfully integrated into agricultural sensor technology, and big data, cloud computing and artificial intelligence technology have also been successfully applied in the IoT. This paper studies the integration of agricultural sensor technology with nanotechnology, biotechnology and optoelectronic technology, and the application of big data, cloud computing and artificial intelligence technology in the agricultural IoT. The advantages and development of integrating nanotechnology, biotechnology and optoelectronic technology with agricultural sensor technology are discussed, and the application of big data, cloud computing and artificial intelligence technology in the IoT and their development trends are analysed.
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology and the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), have created a unique opportunity to deploy large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking has been proven and its applications are countless; from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic sensors, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements that video sensors impose on the network, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low-power networking protocol designed to carry large volumes of data through the network. The low-power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications, some of which are undergoing initial field tests.
Development of a head impact monitoring "Intelligent Mouthguard".
Hedin, Daniel S; Gibson, Paul L; Bartsch, Adam J; Samorezov, Sergey
2016-08-01
The authors present the development and laboratory system-level testing of an impact-monitoring "Intelligent Mouthguard" intended to help identify potentially concussive head impacts and cumulative head impact dosage. The goal of the Intelligent Mouthguard is to provide an indicator of potential concussion risk and help caregivers identify athletes needing sideline concussion protocol testing. The Intelligent Mouthguard may also help identify individuals who are at higher risk based on historical dosage. The Intelligent Mouthguard integrates inertial sensors to provide 3-degree-of-freedom linear and rotational kinematics. The electronics are fully integrated into a custom mouthguard that couples tightly to the upper teeth. The combination of tight coupling and highly accurate sensor data means the Intelligent Mouthguard meets the National Football League (NFL) Level I validity specification, based on laboratory system-level test data presented in this study.
An intelligent rollator for mobility impaired persons, especially stroke patients.
Hellström, Thomas; Lindahl, Olof; Bäcklund, Tomas; Karlsson, Marcus; Hohnloser, Peter; Bråndal, Anna; Hu, Xiaolei; Wester, Per
2016-07-01
An intelligent rollator (IRO) was developed that aims at obstacle detection and guidance to avoid collisions and accidental falls. The IRO is a retrofit four-wheeled rollator with an embedded computer, two solenoid brakes, rotation sensors on the wheels and IR-distance sensors. The value reported by each distance sensor was compared in the computer to a nominal distance. Deviations indicated a present obstacle and caused activation of one of the brakes in order to influence the direction of motion to avoid the obstacle. The IRO was tested by seven healthy subjects with simulated restricted and blurred sight and five stroke subjects on a standardised indoor track with obstacles. All tested subjects walked faster with intelligence deactivated. Three out of five stroke patients experienced more detected obstacles with intelligence activated. This suggests enhanced safety during walking with IRO. Further studies are required to explore the full value of the IRO.
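A minimal sketch of the obstacle-to-brake logic described above follows. The sensor layout, nominal distances, margin, and the mapping from obstacle side to braked wheel are assumptions for illustration, not the IRO's actual parameters.

NOMINAL_CM = {"front_left": 120.0, "front_right": 120.0}   # assumed nominal distances
MARGIN_CM = 40.0                                           # assumed deviation margin

def brake_command(distances_cm):
    """Compare each IR distance reading with its nominal value and decide braking."""
    obstacle_left = distances_cm["front_left"] < NOMINAL_CM["front_left"] - MARGIN_CM
    obstacle_right = distances_cm["front_right"] < NOMINAL_CM["front_right"] - MARGIN_CM
    return {
        # Braking the left wheel turns the rollator left, away from a right-side obstacle.
        "brake_left": obstacle_right and not obstacle_left,
        "brake_right": obstacle_left and not obstacle_right,
        "brake_both": obstacle_left and obstacle_right,      # obstacle straight ahead
    }

print(brake_command({"front_left": 110.0, "front_right": 55.0}))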
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
Autonomous Mission Operations for Sensor Webs
NASA Astrophysics Data System (ADS)
Underbrink, A.; Witt, K.; Stanley, J.; Mandl, D.
2008-12-01
We present interim results of a 2005 ROSES AIST project entitled, "Using Intelligent Agents to Form a Sensor Web for Autonomous Mission Operations", or SWAMO. The goal of the SWAMO project is to shift the control of spacecraft missions from a ground-based, centrally controlled architecture to a collaborative, distributed set of intelligent agents. The network of intelligent agents intends to reduce management requirements by utilizing model-based system prediction and autonomic model/agent collaboration. SWAMO agents are distributed throughout the Sensor Web environment, which may include multiple spacecraft, aircraft, ground systems, and ocean systems, as well as manned operations centers. The agents monitor and manage sensor platforms, Earth sensing systems, and Earth sensing models and processes. The SWAMO agents form a Sensor Web of agents via peer-to-peer coordination. Some of the intelligent agents are mobile and able to traverse between on-orbit and ground-based systems. Other agents in the network are responsible for encapsulating system models to perform prediction of future behavior of the modeled subsystems and components to which they are assigned. The software agents use semantic web technologies to enable improved information sharing among the operational entities of the Sensor Web. The semantics include ontological conceptualizations of the Sensor Web environment, plus conceptualizations of the SWAMO agents themselves. By conceptualizations of the agents, we mean knowledge of their state, operational capabilities, current operational capacities, Web Service search and discovery results, agent collaboration rules, etc. The need for ontological conceptualizations over the agents is to enable autonomous and autonomic operations of the Sensor Web. The SWAMO ontology enables automated decision making and responses to the dynamic Sensor Web environment and to end user science requests. The current ontology is compatible with Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) Sensor Model Language (SensorML) concepts and structures. The agents are currently deployed on the U.S. Naval Academy MidSTAR-1 satellite and are actively managing the power subsystem on-orbit without the need for human intervention.
An evaluation of three-dimensional sensors for the extravehicular activity helper/retriever
NASA Technical Reports Server (NTRS)
Magee, Michael
1993-01-01
The Extravehicular Activity Retriever/Helper (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, accurate sensing of the operational environment and objects in the environment will therefore be critical. The qualitative and quantitative results of empirical studies of three sensors that are capable of providing three-dimensional information to the EVAHR, but using completely different hardware approaches are documented. The first of these devices is a phase shift laser with an effective operating range (ambiguity interval) of approximately 15 meters. The second sensor is a laser triangulation system designed to operate at much closer range and to provide higher resolution images. The third sensor is a dual camera stereo imaging system from which range images can also be obtained. The remainder of the report characterizes the strengths and weaknesses of each of these systems relative to quality of data extracted and how different object characteristics affect sensor operation.
Identifying and Tracking Pedestrians Based on Sensor Fusion and Motion Stability Predictions
Musleh, Basam; García, Fernando; Otamendi, Javier; Armingol, José Mª; de la Escalera, Arturo
2010-01-01
The lack of trustworthy sensors makes development of Advanced Driver Assistance System (ADAS) applications a tough task. It is necessary to develop intelligent systems by combining reliable sensors and real-time algorithms to send the proper, accurate messages to the drivers. In this article, an application to detect and predict the movement of pedestrians in order to prevent an imminent collision has been developed and tested under real conditions. The proposed application, first, accurately measures the position of obstacles using a two-sensor hybrid fusion approach: a stereo camera vision system and a laser scanner. Second, it correctly identifies pedestrians using intelligent algorithms based on polylines and pattern recognition related to leg positions (laser subsystem) and dense disparity maps and u-v disparity (vision subsystem). Third, it uses statistical validation gates and confidence regions to track the pedestrian within the detection zones of the sensors and predict their position in the upcoming frames. The intelligent sensor application has been experimentally tested with success while tracking pedestrians that cross and move in zigzag fashion in front of a vehicle. PMID:22163639
Intelligent Transportation Systems (ITS) plan for Canada : en route to intelligent mobility
DOT National Transportation Integrated Search
1999-11-01
Intelligent Transportation Systems (ITS) include the application of advanced information processing, communications, sensor and control technologies and management strategies in an integrated manner to improve the functioning of the transportation system.
Ceci n'est pas une micromachine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarberry, Victor R.; Diegert, Carl F.
2010-03-01
The image created in reflected-light DIC can often be interpreted as a true three-dimensional representation of the surface geometry, provided a clear distinction can be realized between raised and lowered regions in the specimen. It may be helpful if our definition of saliency embraces work on the human visual system (HVS) as well as the more abstract work on saliency, as it is certain that understanding by humans will always stand between the recording of a useful signal from all manner of sensors and so-called actionable intelligence. DARPA/DSO lays down this requirement in a current program (Kruse 2010): The vision for the Neurotechnology for Intelligence Analysts (NIA) Program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and the overall accuracy of the assessments. Current computer-based target detection capabilities cannot process vast volumes of imagery with the speed, flexibility, and precision of the human visual system.
Intergraph video and images exploitation capabilities
NASA Astrophysics Data System (ADS)
Colla, Simone; Manesis, Charalampos
2013-08-01
The current paper focuses on the capture, fusion and processing of aerial imagery in order to leverage full-motion video, giving analysts the ability to collect, analyze, and maximize the value of video assets. Unmanned aerial vehicles (UAVs) have provided critical real-time surveillance and operational support to military organizations, and are a key source of intelligence, particularly when integrated with other geospatial data. In the current workflow, the UAV operators first plan the flight using flight planning software. During the flight the UAV sends a live video stream directly to the field, where it is processed by Intergraph software to generate and disseminate georeferenced images through a service-oriented architecture based on the ERDAS Apollo suite. The raw video-based data sources provide the most recent view of a situation and can augment other forms of geospatial intelligence - such as satellite imagery and aerial photos - to provide a richer, more detailed view of the area of interest. To effectively use video as a source of intelligence, however, the analyst needs to seamlessly fuse the video with these other types of intelligence, such as map features and annotations. Intergraph has developed an application that automatically generates mosaicked georeferenced images and tags along the video route, which can then be seamlessly integrated with other forms of static data, such as aerial photos, satellite imagery, or geospatial layers and features. Consumers will finally have the ability to use a single, streamlined system to complete the entire geospatial information lifecycle: capturing geospatial data using sensor technology; processing vector, raster and terrain data into actionable information; managing, fusing, and sharing geospatial data and video together; and finally, rapidly and securely delivering integrated information products, ensuring individuals can make timely decisions.
Lennernäs, B; Edgren, M; Nilsson, S
1999-01-01
The purpose of this study was to evaluate the precision of a sensor and to ascertain the maximum distance between the sensor and the magnet, in a magnetic positioning system for external beam radiotherapy using a trained artificial intelligence neural network for position determination. Magnetic positioning for radiotherapy, previously described by Lennernäs and Nilsson, is a functional technique, but it is time consuming. The sensors are large and the distance between the sensor and the magnetic implant is limited to short distances. This paper presents a new technique for positioning, using an artificial intelligence neural network, which was trained to position the magnetic implant with at least 0.5 mm resolution in X and Y dimensions. The possibility of using the system for determination in the Z dimension, that is the distance between the magnet and the sensor, was also investigated. After training, this system positioned the magnet with a mean error of maximum 0.15 mm in all dimensions and up to 13 mm from the sensor. Of 400 test positions, 8 determinations had an error larger than 0.5 mm, maximum 0.55 mm. A position was determined in approximately 0.01 s.
An intelligent surveillance platform for large metropolitan areas with dense sensor deployment.
Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A; Smilansky, Zeev
2013-06-07
This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on the usage of inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal in a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is extremely suitable for its deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.
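One of the three alarm-detection strategies could, for example, apply simple rules over the motion parameters reported by the visual sensors. The Python sketch below is an editorial illustration of such a rule set; the zone geometry, speed limit, and parameter names are assumptions and do not describe the HuSIMS engine itself.

RESTRICTED_ZONE = (50, 80, 20, 60)      # assumed x_min, x_max, y_min, y_max (metres)
MAX_SPEED_MS = 14.0                     # assumed speed limit for the monitored road

def check_track(track):
    """Flag alarms for a single motion track described only by its parameters."""
    alarms = []
    x, y = track["position"]
    x_min, x_max, y_min, y_max = RESTRICTED_ZONE
    if x_min <= x <= x_max and y_min <= y <= y_max:
        alarms.append("perimeter_intrusion")
    if track["speed_ms"] > MAX_SPEED_MS:
        alarms.append("overspeed")
    if track.get("heading_deg") is not None and not (45 <= track["heading_deg"] <= 135):
        alarms.append("wrong_way")
    return alarms

print(check_track({"position": (60, 30), "speed_ms": 16.5, "heading_deg": 200}))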
SCORPION II persistent surveillance system update
NASA Astrophysics Data System (ADS)
Coster, Michael; Chambers, Jon
2010-04-01
This paper updates the improvements and benefits demonstrated in the next generation Northrop Grumman SCORPION II family of persistent surveillance and target recognition systems produced by the Xetron Campus in Cincinnati, Ohio. SCORPION II reduces the size, weight, and cost of all SCORPION components in a flexible, field programmable system that is easier to conceal and enables integration of over fifty different Unattended Ground Sensor (UGS) and camera types from a variety of manufacturers, with a modular approach to supporting multiple Line of Sight (LOS) and Beyond Line of Sight (BLOS) communications interfaces. Since 1998 Northrop Grumman has been integrating best in class sensors with its proven universal modular Gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control. In addition to feeding COP systems, SCORPION and SCORPION II data can be directly processed using a common sensor status graphical user interface (GUI) that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system gateways on single or multiple displays. This GUI enables a large amount of sensor data and imagery to be used for actionable intelligence as well as remote sensor command and control by a minimum number of analysts.
A Bluetooth-Based Device Management Platform for Smart Sensor Environment
NASA Astrophysics Data System (ADS)
Lim, Ivan Boon-Kiat; Yow, Kin Choong
In this paper, we propose the use of Bluetooth as the device management platform for the various embedded sensors and actuators in an ambient intelligent environment. We demonstrate the ease of adding Bluetooth capability to common sensor circuits (e.g. motion sensor circuit based on a pyroelectric infrared (PIR) sensor). A central logic application is proposed which controls the operation of controller devices, based on values returned by sensors via Bluetooth. The operation of devices depends on rules that are learnt from user behavior using an Elman recurrent neural network. Overall, Bluetooth has shown its potential in being used as a device management platform in an ambient intelligent environment, which allows sensors and controllers to be deployed even in locations where power sources are not readily available, by using battery power.
Cooperative Surveillance and Pursuit Using Unmanned Aerial Vehicles and Unattended Ground Sensors
Las Fargeas, Jonathan; Kabamba, Pierre; Girard, Anouck
2015-01-01
This paper considers the problem of path planning for a team of unmanned aerial vehicles performing surveillance near a friendly base. The unmanned aerial vehicles do not possess sensors with automated target recognition capability and, thus, rely on communicating with unattended ground sensors placed on roads to detect and image potential intruders. The problem is motivated by persistent intelligence, surveillance, reconnaissance and base defense missions. The problem is formulated and shown to be intractable. A heuristic algorithm to coordinate the unmanned aerial vehicles during surveillance and pursuit is presented. Revisit deadlines are used to schedule the vehicles' paths nominally. The algorithm uses detections from the sensors to predict intruders' locations and selects the vehicles' paths by minimizing a linear combination of missed deadlines and the probability of not intercepting intruders. An analysis of the algorithm's completeness and complexity is then provided. The effectiveness of the heuristic is illustrated through simulations in a variety of scenarios. PMID:25591168
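The path-selection rule summarized above scores candidate vehicle paths by a linear combination of missed revisit deadlines and the probability of failing to intercept predicted intruders. The sketch below illustrates that scoring; the weights, candidate paths, and probabilities are illustrative assumptions, not values from the paper.

```python
# Illustrative scoring of candidate UAV paths: a linear combination of missed
# revisit deadlines and the probability of not intercepting predicted
# intruders. Weights and candidate data are assumptions.
from dataclasses import dataclass

@dataclass
class CandidatePath:
    name: str
    missed_deadlines: int          # revisit deadlines this path would miss
    p_no_intercept: float          # probability of failing to intercept

def path_cost(path: CandidatePath, w_deadline: float = 1.0,
              w_intercept: float = 5.0) -> float:
    """Lower is better; mirrors the linear combination described above."""
    return w_deadline * path.missed_deadlines + w_intercept * path.p_no_intercept

candidates = [
    CandidatePath("patrol-loop", missed_deadlines=0, p_no_intercept=0.40),
    CandidatePath("pursue-east", missed_deadlines=2, p_no_intercept=0.05),
    CandidatePath("split-cover", missed_deadlines=1, p_no_intercept=0.15),
]

best = min(candidates, key=path_cost)
print(best.name, path_cost(best))
```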
Fiber optic medical pressure-sensing system employing intelligent self-calibration
NASA Astrophysics Data System (ADS)
He, Gang
1996-01-01
In this article, we describe a fiber-optic catheter-type pressure-sensing system that has been successfully introduced for medical diagnostic applications. We present overall sensors and optoelectronics designs, and highlight product development efforts that lead to a reliable and accurate disposable pressure-sensing system. In particular, the incorporation of an intelligent on-site self-calibration approach allows limited sensor reuses for reducing end-user costs and for system adaptation to wide sensor variabilities associated with low-cost manufacturing processes. We demonstrate that fiber-optic sensors can be cost-effectively produced to satisfy needs of certain medical market segments.
Flexible mobile robot system for smart optical pipe inspection
NASA Astrophysics Data System (ADS)
Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang
1998-03-01
Pipe damage can be inspected and graded with TV technology available on the market. Remotely controlled vehicles carry a TV camera through pipes; thus, depending on the experience and capability of the operator, diagnostic failures cannot be ruled out. The classification of damage requires knowledge of the exact geometrical dimensions of the defects, such as the width and depth of cracks, fractures, and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed with two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewerage inspection. The sensor package is based on a 3D optical sensor and a laser distance sensor. The laser distance sensor is integrated in the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan and tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates for every point of the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software package for pipe condition classification. The user can start complex measurements of profiles, pipe displacements or crack widths simply by pressing a push-button. The measuring results are stored together with other data, such as verbal damage descriptions and digitized images, in a database.
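The distance-plus-pan/tilt geometry described above can be illustrated with a short coordinate transformation; the angle conventions and example values below are assumptions for illustration only.

```python
# Sketch of the coordinate transformation described above: combining the
# laser-measured distance with the pan/tilt angles of the camera head to get
# Cartesian coordinates of an image point. Angle conventions are assumed.
import math

def to_cartesian(distance_m: float, pan_deg: float, tilt_deg: float):
    """Camera-frame coordinates of the point hit by the laser spot."""
    pan = math.radians(pan_deg)    # rotation about the vertical axis
    tilt = math.radians(tilt_deg)  # elevation above the horizontal plane
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

# Example: two crack endpoints measured at two pan/tilt settings; their
# Euclidean separation approximates the crack extent.
p1 = to_cartesian(1.20, pan_deg=10.0, tilt_deg=-5.0)
p2 = to_cartesian(1.22, pan_deg=11.5, tilt_deg=-5.0)
print(f"estimated crack extent: {math.dist(p1, p2) * 1000:.1f} mm")
```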
Intelligent sensor-model automated control of PMR-15 autoclave processing
NASA Technical Reports Server (NTRS)
Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.
1992-01-01
An intelligent sensor-model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process, including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement, and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows the comparison of sensor monitoring with model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.
Bialas, Andrzej
2011-01-01
Intelligent sensors experience security problems very similar to those inherent to other kinds of IT products or systems. Assurance creation methodologies for these products or systems, such as Common Criteria (ISO/IEC 15408), can be used to improve the robustness of sensor systems in high-risk environments. The paper presents the background and results of previous research on pattern-based security specifications and introduces a new ontological approach. The elaborated ontology and knowledge base were validated on the IT security development process using a sensor example. The contribution of the paper concerns the application of the knowledge engineering methodology to the previously developed Common Criteria compliant and pattern-based method for intelligent sensor security development. The issue presented in the paper has broader significance in that it can help solve information security problems in many application domains. PMID:22164064
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion of data from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves the integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical and biological sensors, acoustic sensors, optical warning and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian methods to evidence-based expert systems. The implementation of data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
NASA Technical Reports Server (NTRS)
2007-01-01
Topics covered include: Miniature Intelligent Sensor Module; "Smart" Sensor Module; Portable Apparatus for Electrochemical Sensing of Ethylene; Increasing Linear Dynamic Range of a CMOS Image Sensor; Flight Qualified Micro Sun Sensor; Norbornene-Based Polymer Electrolytes for Lithium Cells; Making Single-Source Precursors of Ternary Semiconductors; Water-Free Proton-Conducting Membranes for Fuel Cells; Mo/Ti Diffusion Bonding for Making Thermoelectric Devices; Photodetectors on Coronagraph Mask for Pointing Control; High-Energy-Density, Low-Temperature Li/CFx Primary Cells; G4-FETs as Universal and Programmable Logic Gates; Fabrication of Buried Nanochannels From Nanowire Patterns; Diamond Smoothing Tools; Infrared Imaging System for Studying Brain Function; Rarefying Spectra of Whispering-Gallery-Mode Resonators; Large-Area Permanent-Magnet ECR Plasma Source; Slot-Antenna/Permanent-Magnet Device for Generating Plasma; Fiber-Optic Strain Gauge With High Resolution And Update Rate; Broadband Achromatic Telecentric Lens; Temperature-Corrected Model of Turbulence in Hot Jet Flows; Enhanced Elliptic Grid Generation; Automated Knowledge Discovery From Simulators; Electro-Optical Modulator Bias Control Using Bipolar Pulses; Generative Representations for Automated Design of Robots; Mars-Approach Navigation Using In Situ Orbiters; Efficient Optimization of Low-Thrust Spacecraft Trajectories; Cylindrical Asymmetrical Capacitors for Use in Outer Space; Protecting Against Faults in JPL Spacecraft; Algorithm Optimally Allocates Actuation of a Spacecraft; and Radar Interferometer for Topographic Mapping of Glaciers and Ice Sheets.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of the ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center, and discusses its performance relative to representation accuracy and operator interface efficiency.
NASA Astrophysics Data System (ADS)
Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota
2017-05-01
Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include: operating under low signal to background conditions; at higher and higher frame rates; and under less than ideal motion control scenarios. An adaptable, low cost, low footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured, to discriminate targets from complex backgrounds in real-time. With DARPA support, ChemImage Sensor Systems (CISS™) in collaboration with Research Triangle Institute (RTI) International are developing a novel, real-time, adaptable, compressive sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in operational agility and detection performance, while addressing sensor weight, form factor and cost needs. This paper discusses recent test and performance modeling results of a RCIS breadboard apparatus.
Smart and intelligent sensor payload project
2009-04-01
Engineers working on the smart and intelligent sensor payload project include (l to r): Ed Conley (NASA), Mark Mitchell (Jacobs Technology), Luke Richards (NASA), Robert Drackett (Jacobs Technology), Mark Turowski (Jacobs Technology), Richard Franzl (seated, Jacobs Technology), Greg McVay (Jacobs Technology), Brianne Guillot (Jacobs Technology), Jon Morris (Jacobs Technology), Stephen Rawls (NASA), John Schmalzel (NASA) and Andrew Bracey (NASA).
An oil fraction neural sensor developed using electrical capacitance tomography sensor data.
Zainal-Mokhtar, Khursiah; Mohamad-Saleh, Junita
2013-08-26
This paper presents novel research on the development of a generic intelligent oil fraction sensor based on Electrical Capacitance Tomography (ECT) data. An artificial Neural Network (ANN) has been employed as the intelligent system to sense and estimate oil fractions from the cross-sections of two-component flows comprising oil and gas in a pipeline. Previous works only focused on estimating the oil fraction in the pipeline based on fixed ECT sensor parameters. With fixed ECT design sensors, an oil fraction neural sensor can be trained to deal with ECT data based on the particular sensor parameters, hence the neural sensor is not generic. This work focuses on development of a generic neural oil fraction sensor based on training a Multi-Layer Perceptron (MLP) ANN with various ECT sensor parameters. On average, the proposed oil fraction neural sensor has shown to be able to give a mean absolute error of 3.05% for various ECT sensor sizes.
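A minimal sketch of the kind of MLP-based oil-fraction estimator described above is shown below. The synthetic capacitance model, electrode-pair count, and network size are placeholders, not the authors' ECT sensor parameters or training data.

```python
# Minimal sketch of an MLP oil-fraction estimator trained on ECT-style data.
# The synthetic forward model and network size are placeholders, not the
# authors' sensor parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

n_electrode_pairs = 66          # e.g. a 12-electrode ECT sensor has 66 pairs
n_samples = 3000

# Hypothetical forward model: measured capacitances rise with oil fraction,
# each electrode pair with its own sensitivity, plus measurement noise.
oil_fraction = rng.uniform(0.0, 1.0, n_samples)
sensitivity = rng.uniform(0.5, 1.5, n_electrode_pairs)
capacitance = oil_fraction[:, None] * sensitivity + 0.02 * rng.normal(
    size=(n_samples, n_electrode_pairs))

net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=1500, random_state=1)
net.fit(capacitance[:2500], oil_fraction[:2500])

pred = net.predict(capacitance[2500:])
mae = np.abs(pred - oil_fraction[2500:]).mean() * 100.0
print(f"mean absolute error: {mae:.2f}% oil fraction")
```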
An Oil Fraction Neural Sensor Developed Using Electrical Capacitance Tomography Sensor Data
Zainal-Mokhtar, Khursiah; Mohamad-Saleh, Junita
2013-01-01
This paper presents novel research on the development of a generic intelligent oil fraction sensor based on Electrical capacitance Tomography (ECT) data. An artificial Neural Network (ANN) has been employed as the intelligent system to sense and estimate oil fractions from the cross-sections of two-component flows comprising oil and gas in a pipeline. Previous works only focused on estimating the oil fraction in the pipeline based on fixed ECT sensor parameters. With fixed ECT design sensors, an oil fraction neural sensor can be trained to deal with ECT data based on the particular sensor parameters, hence the neural sensor is not generic. This work focuses on development of a generic neural oil fraction sensor based on training a Multi-Layer Perceptron (MLP) ANN with various ECT sensor parameters. On average, the proposed oil fraction neural sensor has shown to be able to give a mean absolute error of 3.05% for various ECT sensor sizes. PMID:24064598
NASA Astrophysics Data System (ADS)
Bruschini, Claudio; Charbon, Edoardo; Veerappan, Chockalingam; Braga, Leo H. C.; Massari, Nicola; Perenzoni, Matteo; Gasparini, Leonardo; Stoppa, David; Walker, Richard; Erdogan, Ahmet; Henderson, Robert K.; East, Steve; Grant, Lindsay; Játékos, Balázs; Ujhelyi, Ferenc; Erdei, Gábor; Lörincz, Emöke; André, Luc; Maingault, Laurent; Jacolin, David; Verger, L.; Gros d'Aillon, Eric; Major, Peter; Papp, Zoltan; Nemeth, Gabor
2014-05-01
The SPADnet FP7 European project is aimed at a new generation of fully digital, scalable and networked photonic components to enable large area image sensors, with primary target gamma-ray and coincidence detection in (Time-of-Flight) Positron Emission Tomography (PET). SPADnet relies on standard CMOS technology, therefore allowing for MRI compatibility. SPADnet innovates in several areas of PET systems, from optical coupling to single-photon sensor architectures, from intelligent ring networks to reconstruction algorithms. It is built around a natively digital, intelligent SPAD (Single-Photon Avalanche Diode)-based sensor device which comprises an array of 8×16 pixels, each composed of 4 mini-SiPMs with in situ time-to-digital conversion, a multi-ring network to filter, carry, and process data produced by the sensors at 2 Gbps, and a 130 nm CMOS process enabling mass-production of photonic modules that are optically interfaced to scintillator crystals. A few tens of sensor devices are tightly abutted on a single PCB to form a so-called sensor tile, thanks to TSV (Through Silicon Via) connections to their backside (replacing conventional wire bonding). The sensor tile is in turn interfaced to an FPGA-based PCB on its back. The resulting photonic module acts as an autonomous sensing and computing unit, individually detecting gamma photons as well as thermal and Compton events. It determines in real time basic information for each scintillation event, such as exact time of arrival, position and energy, and communicates it to its peers in the field of view. Coincidence detection does therefore occur directly in the ring itself, in a deferred and distributed manner to ensure scalability. The selected true coincidence events are then collected by a snooper module, from which they are transferred to an external reconstruction computer using Gigabit Ethernet.
A spatial data handling system for retrieval of images by unrestricted regions of user interest
NASA Technical Reports Server (NTRS)
Dorfman, Erik; Cromp, Robert F.
1992-01-01
The Intelligent Data Management (IDM) project at NASA/Goddard Space Flight Center has prototyped an Intelligent Information Fusion System (IIFS), which automatically ingests metadata from remote sensor observations into a large catalog which is directly queryable by end-users. The greatest challenge in the implementation of this catalog was supporting spatially-driven searches, where the user has a possibly complex region of interest and wishes to recover those images that overlap all or simply a part of that region. A spatial data management system is described, which is capable of storing and retrieving records of image data regardless of their source. This system was designed and implemented as part of the IIFS catalog. A new data structure, called a hypercylinder, is central to the design. The hypercylinder is specifically tailored for data distributed over the surface of a sphere, such as satellite observations of the Earth or space. Operations on the hypercylinder are regulated by two expert systems. The first governs the ingest of new metadata records, and maintains the efficiency of the data structure as it grows. The second translates, plans, and executes users' spatial queries, performing incremental optimization as partial query results are returned.
NASA Astrophysics Data System (ADS)
Kim, J.; Ryu, Y.; Jiang, C.; Hwang, Y.
2016-12-01
Near surface sensors are able to acquire more reliable and detailed information with higher temporal resolution than satellite observations. Conventional near surface sensors usually work individually, and thus they require considerable manpower from data collection through information extraction and sharing. Recent advances in the Internet of Things (IoT) provide unprecedented opportunities to integrate various low-cost sensors as an intelligent near surface observation system for monitoring ecosystem structure and functions. In this study, we developed a Smart Surface Sensing System (4S), which can automatically collect, transfer, process and analyze data, and then publish time series results on a publicly available website. The system is composed of the micro-computer Raspberry Pi, the micro-controller Arduino, multi-spectral spectrometers made from Light Emitting Diodes (LEDs), visible and near infrared cameras, and an Internet module. All components are connected with each other, and the Raspberry Pi intelligently controls the automatic data production chain. We conducted intensive tests and calibrations in the lab, and then carried out in-situ observations at a rice paddy field and a deciduous broadleaf forest. During the whole growth season, 4S continuously obtained landscape images, spectral reflectance in red, green, blue, and near infrared, the normalized difference vegetation index (NDVI), the fraction of photosynthetically active radiation (fPAR), and leaf area index (LAI). We also compared 4S data with other independent measurements. NDVI obtained from 4S agreed well with the Jaz hyperspectrometer at both diurnal and seasonal scales (R2 = 0.92, RMSE = 0.059), and 4S-derived fPAR and LAI were comparable to LAI-2200 and destructive measurements in both magnitude and seasonal trajectory. We believe that the integrated low-cost near surface sensor could help the research community monitor ecosystem structure and functions more closely and easily through a network system.
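The NDVI values mentioned above are conventionally computed from the red and near-infrared reflectances as NDVI = (NIR - Red) / (NIR + Red); the sketch below assumes that standard formula, with purely illustrative reflectance values.

```python
# The NDVI computation referred to above uses the standard formula
# NDVI = (NIR - Red) / (NIR + Red) on reflectances from the LED spectrometer.
# The reflectance values below are illustrative only.
def ndvi(nir: float, red: float) -> float:
    return (nir - red) / (nir + red)

# Example reflectances for a rice paddy around peak growth (hypothetical).
print(ndvi(nir=0.45, red=0.06))   # dense green canopy -> NDVI near 0.75-0.8
print(ndvi(nir=0.30, red=0.20))   # sparse canopy / soil -> much lower NDVI
```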
Large Efficient Intelligent Heating Relay Station System
NASA Astrophysics Data System (ADS)
Wu, C. Z.; Wei, X. G.; Wu, M. Q.
2017-12-01
The large efficient intelligent heating relay station system is designed to address shortcomings of the existing heating systems in our country, such as low heating efficiency, energy waste, serious pollution, and continued dependence on manual control. In this design, we first improve the existing plate heat exchanger. Second, the ATM89C51 microcontroller is used to control the whole system and realize intelligent control. The detection part uses a PT100 temperature sensor, a pressure sensor, and a turbine flowmeter to measure the heating temperature, the liquid flow at the user end, and the hydraulic pressure in real time; the feedback signals are returned to the microcontroller, which adjusts the heating for users so that the whole system becomes more efficient, intelligent, and energy-saving.
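As a rough sketch of the sensor-feedback adjustment described above, the loop below reads a (simulated) PT100 supply temperature, compares it with a setpoint, and nudges the heat-exchanger valve. The toy plant model, gain, and setpoint are assumptions, not values from the paper.

```python
# Minimal sketch of the feedback adjustment described above: compare the
# PT100 supply temperature with a setpoint and nudge the heat-exchanger valve.
# The plant model, gain, and setpoint are illustrative assumptions.
def simulate_supply_temperature(valve_opening: float) -> float:
    """Toy plant: more valve opening -> hotter secondary-side water."""
    return 30.0 + 50.0 * valve_opening

def control_step(valve: float, t_setpoint: float = 60.0, gain: float = 0.02) -> float:
    """One proportional control step; returns the new valve opening (0..1)."""
    temperature = simulate_supply_temperature(valve)   # degrees C at the user side
    error = t_setpoint - temperature
    return min(1.0, max(0.0, valve + gain * error))

valve = 0.2
for _ in range(20):
    valve = control_step(valve)
print(f"valve settles near {valve:.2f}, "
      f"temperature {simulate_supply_temperature(valve):.1f} C")
```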
Intelligent Wireless Sensor Networks for System Health Monitoring
NASA Technical Reports Server (NTRS)
Alena, Rick
2011-01-01
Wireless sensor networks (WSN) based on the IEEE 802.15.4 Personal Area Network (PAN) standard are finding increasing use in the home automation and emerging smart energy markets. The network and application layers, based on the ZigBee 2007 Standard, provide a convenient framework for component-based software that supports customer solutions from multiple vendors. WSNs provide the inherent fault tolerance required for aerospace applications. The Discovery and Systems Health Group at NASA Ames Research Center has been developing WSN technology for use aboard aircraft and spacecraft for System Health Monitoring of structures and life support systems using funding from the NASA Engineering and Safety Center and Exploration Technology Development and Demonstration Program. This technology provides key advantages for low-power, low-cost ancillary sensing systems particularly across pressure interfaces and in areas where it is difficult to run wires. Intelligence for sensor networks could be defined as the capability of forming dynamic sensor networks, allowing high-level application software to identify and address any sensor that joined the network without the use of any centralized database defining the sensors characteristics. The IEEE 1451 Standard defines methods for the management of intelligent sensor systems and the IEEE 1451.4 section defines Transducer Electronic Datasheets (TEDS), which contain key information regarding the sensor characteristics such as name, description, serial number, calibration information and user information such as location within a vehicle. By locating the TEDS information on the wireless sensor itself and enabling access to this information base from the application software, the application can identify the sensor unambiguously and interpret and present the sensor data stream without reference to any other information. The application software is able to read the status of each sensor module, responding in real-time to changes of PAN configuration, providing the appropriate response for maintaining overall sensor system function, even when sensor modules fail or the WSN is reconfigured. The session will present the architecture and technical feasibility of creating fault-tolerant WSNs for aerospace applications based on our application of the technology to a Structural Health Monitoring testbed. The interim results of WSN development and testing including our software architecture for intelligent sensor management will be discussed in the context of the specific tradeoffs required for effective use. Initial certification measurement techniques and test results gauging WSN susceptibility to Radio Frequency interference are introduced as key challenges for technology adoption. A candidate Developmental and Flight Instrumentation implementation using intelligent sensor networks for wind tunnel and flight tests is developed as a guide to understanding key aspects of the aerospace vehicle design, test and operations life cycle.
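A minimal sketch of TEDS-style self-description is shown below, using the fields the abstract lists (name, description, serial number, calibration, and location); the data structure and joining logic are illustrative assumptions, not the NASA implementation.

```python
# Illustrative sketch of TEDS-style self-description (fields as listed in the
# abstract: name, description, serial number, calibration and location). The
# dataclass and joining logic are assumptions, not the NASA code.
from dataclasses import dataclass

@dataclass
class Teds:
    name: str
    description: str
    serial_number: str
    calibration_gain: float      # engineering units per raw count
    calibration_offset: float
    location: str                # e.g. position within the vehicle

registry: dict[str, Teds] = {}

def on_node_joined(teds: Teds) -> None:
    """Called when a wireless node joins the PAN; no central database needed."""
    registry[teds.serial_number] = teds
    print(f"discovered {teds.name} ({teds.location})")

def to_engineering_units(serial_number: str, raw: int) -> float:
    t = registry[serial_number]
    return t.calibration_gain * raw + t.calibration_offset

on_node_joined(Teds("strain-01", "wing strain gauge", "SN1234",
                    calibration_gain=0.005, calibration_offset=-1.2,
                    location="left wing root"))
print(to_engineering_units("SN1234", raw=1000))
```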
Intelligence, mapping, and geospatial exploitation system (IMAGES)
NASA Astrophysics Data System (ADS)
Moellman, Dennis E.; Cain, Joel M.
1998-08-01
This paper provides further detail to one facet of the battlespace visualization concept described in last year's paper Battlespace Situation Awareness for Force XXI. It focuses on the National Imagery and Mapping Agency (NIMA) goal to 'provide customers seamless access to tailorable imagery, imagery intelligence, and geospatial information.' This paper describes the Intelligence, Mapping, and Geospatial Exploitation System (IMAGES), an exploitation element capable of CONUS baseplant operations or field deployment to provide NIMA geospatial information collaboratively into a reconnaissance, surveillance, and target acquisition (RSTA) environment through the United States Imagery and Geospatial Information System (USIGS). In a baseplant CONUS setting IMAGES could be used to produce foundation data to support mission planning. In the field it could be directly associated with a tactical sensor receiver or ground station (e.g. UAV or UGV) to provide near real-time and mission specific RSTA to support mission execution. This paper provides the IMAGES functional level design; describes the technologies, their interactions and interdependencies; and presents a notional operational scenario to illustrate the system flexibility. Using as a system backbone an intelligent software agent technology, called the Open Agent Architecture™ (OAA™), IMAGES combines multimodal data entry, natural language understanding, and perceptual and evidential reasoning for system management. Configured to be DII COE compliant, it would utilize, to the extent possible, COTS applications software for data management, processing, fusion, exploitation, and reporting. It would also be modular, scalable, and reconfigurable. This paper describes how the OAA™ achieves data synchronization and enables the necessary level of information to be rapidly available to various command echelons for making informed decisions. The reasoning component will provide for the best information to be developed in the timeline available and it will also provide statistical pedigree data. This pedigree data provides both uncertainties associated with the information and an audit trail cataloging the raw data sources and the processing/exploitation applied to derive the final product. Collaboration provides for a close union between the information producer(s)/exploiter(s) and the information user(s) as well as between local and remote producer(s)/exploiter(s). From a military operational perspective, IMAGES is a step toward further uniting NIMA with its customers and further blurring the dividing line between operational command and control (C2) and its supporting intelligence activities. IMAGES also provides a foundation for reachback to remote data sources, data stores, application software, and computational resources for achieving 'just-in-time' information delivery -- all of which is transparent to the analyst or operator employing the system.
Integrating Sensor-Collected Intelligence
2008-11-01
collecting, processing, data storage and fusion, and the dissemination of information collected by Intelligence, Surveillance, and Reconnaissance (ISR...Grid – Bandwidth Expansion (GIG-BE) program) to provide the capability to transfer data from sensors to accessible storage and satellite and airborne...based ISR is much more fragile. There was a purposeful drawdown of these systems following the Cold War and modernization programs were planned to
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high profile debate with regard to data storage and its growth have become strategic task in the world of networking. It mainly depends on the sensor nodes called producers, base stations, and also the consumers (users and sensor nodes) to retrieve and use the data. The main concern dealt here is to find an optimal data storage position in wireless sensor networks. The works that have been carried out earlier did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swam intelligence approach is used to choose suitable positions for a storage node. Thus, hybrid particle swarm optimization algorithm has been used to find the suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized to solve clustering problem using fuzzy-C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than the earlier approaches.
A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires.
Garcia-Pozuelo, Daniel; Olatunbosun, Oluremi; Yunta, Jorge; Yang, Xiaoguang; Diaz, Vicente
2017-02-10
The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors which provide information about vehicle dynamics. Up to now, the commercial intelligent tires only provide information about inflation pressure and their contribution to stability control systems is currently very limited. Nowadays one of the major problems for intelligent tire development is how to embed feasible and low cost sensors to obtain reliable information such as inflation pressure, vertical load or rolling speed. These parameters provide key information for vehicle dynamics characterization. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out in order to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic.
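A minimal fuzzy-logic sketch in the spirit of the estimation described above appears below: a strain-amplitude feature is fuzzified, a small rule base maps it to load classes, and a weighted average defuzzifies the result. The membership functions, rules, and load values are illustrative assumptions, not the authors' calibrated system.

```python
# Minimal sketch of a fuzzy-logic estimate of vertical load from a strain
# feature. Membership functions, rule base and load values are illustrative
# assumptions, not the authors' calibrated system.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_load(strain_amplitude_microstrain: float) -> float:
    x = strain_amplitude_microstrain
    # Fuzzify the strain amplitude.
    low = tri(x, 0, 100, 250)
    medium = tri(x, 150, 300, 450)
    high = tri(x, 350, 550, 800)
    # One rule per fuzzy set: LOW -> light load, MEDIUM -> nominal, HIGH -> heavy.
    loads_newton = {"light": 2000.0, "nominal": 4000.0, "heavy": 6000.0}
    weights = {"light": low, "nominal": medium, "heavy": high}
    total = sum(weights.values())
    if total == 0.0:
        return loads_newton["nominal"]
    # Weighted-average defuzzification.
    return sum(weights[k] * loads_newton[k] for k in loads_newton) / total

print(estimate_load(120.0))   # mostly "light" -> close to 2000 N
print(estimate_load(400.0))   # between nominal and heavy
```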
A Wireless and Batteryless Intelligent Carbon Monoxide Sensor.
Chen, Chen-Chia; Sung, Gang-Neng; Chen, Wen-Ching; Kuo, Chih-Ting; Chue, Jin-Ju; Wu, Chieh-Ming; Huang, Chun-Ming
2016-09-23
Carbon monoxide (CO) poisoning from natural gas water heaters is a common household accident in Taiwan. We propose a wireless and batteryless intelligent CO sensor for improving the safety of operating natural gas water heaters. A micro-hydropower generator supplies power to a CO sensor without battery (COSWOB) (2.5 W at a flow rate of 4.2 L/min), and the power consumption of the COSWOB is only ~13 mW. The COSWOB monitors the CO concentration in ambient conditions around natural gas water heaters and transmits it to an intelligent gateway. When the CO level reaches a dangerous level, the COSWOB alarm sounds loudly. Meanwhile, the intelligent gateway also sends a trigger to activate Wi-Fi alarms and sends notifications to the mobile device through the Internet. Our strategy can warn people indoors and outdoors, thereby reducing CO poisoning accidents. We also believe that our technique not only can be used for home security but also can be used in industrial applications (for example, to monitor leak occurrence in a pipeline).
A Wireless and Batteryless Intelligent Carbon Monoxide Sensor
Chen, Chen-Chia; Sung, Gang-Neng; Chen, Wen-Ching; Kuo, Chih-Ting; Chue, Jin-Ju; Wu, Chieh-Ming; Huang, Chun-Ming
2016-01-01
Carbon monoxide (CO) poisoning from natural gas water heaters is a common household accident in Taiwan. We propose a wireless and batteryless intelligent CO sensor for improving the safety of operating natural gas water heaters. A micro-hydropower generator supplies power to a CO sensor without battery (COSWOB) (2.5 W at a flow rate of 4.2 L/min), and the power consumption of the COSWOB is only ~13 mW. The COSWOB monitors the CO concentration in ambient conditions around natural gas water heaters and transmits it to an intelligent gateway. When the CO level reaches a dangerous level, the COSWOB alarm sounds loudly. Meanwhile, the intelligent gateway also sends a trigger to activate Wi-Fi alarms and sends notifications to the mobile device through the Internet. Our strategy can warn people indoors and outdoors, thereby reducing CO poisoning accidents. We also believe that our technique not only can be used for home security but also can be used in industrial applications (for example, to monitor leak occurrence in a pipeline). PMID:27669255
A Novel Strain-Based Method to Estimate Tire Conditions Using Fuzzy Logic for Intelligent Tires
Garcia-Pozuelo, Daniel; Olatunbosun, Oluremi; Yunta, Jorge; Yang, Xiaoguang; Diaz, Vicente
2017-01-01
The so-called intelligent tires are one of the most promising research fields for automotive engineers. These tires are equipped with sensors which provide information about vehicle dynamics. Up to now, the commercial intelligent tires only provide information about inflation pressure and their contribution to stability control systems is currently very limited. Nowadays one of the major problems for intelligent tire development is how to embed feasible and low cost sensors to obtain reliable information such as inflation pressure, vertical load or rolling speed. These parameters provide key information for vehicle dynamics characterization. In this paper, we propose a novel algorithm based on fuzzy logic to estimate the mentioned parameters by means of a single strain-based system. Experimental tests have been carried out in order to prove the suitability and durability of the proposed on-board strain sensor system, as well as its low cost advantages, and the accuracy of the obtained estimations by means of fuzzy logic. PMID:28208631
Knowledge Flow Mesh and Its Dynamics: A Decision Support Environment
2008-06-01
paper was the ability of the United States military to achieve dominance through information superiority. The use of intelligent sensors and... Intelligence Agency, National Security Agency, Defense Intelligence Agency, and individual Service intelligence agencies). In fact, these edge entities would... intelligence , design, choice, and implementation. 6. Support variety of decision processes and styles. 7. DSS should be adaptable and flexible. 8. DSS
Improved obstacle avoidance and navigation for an autonomous ground vehicle
NASA Astrophysics Data System (ADS)
Giri, Binod; Cho, Hyunsu; Williams, Benjamin C.; Tann, Hokchhay; Shakya, Bicky; Bharam, Vishal; Ahlgren, David J.
2015-01-01
This paper presents improvements made to the intelligence algorithms employed on Q, an autonomous ground vehicle, for the 2014 Intelligent Ground Vehicle Competition (IGVC). In 2012, the IGVC committee combined the formerly separate autonomous and navigation challenges into a single AUT-NAV challenge. In this new challenge, the vehicle is required to navigate through a grassy obstacle course and stay within the course boundaries (a lane of two white painted lines) that guide it toward a given GPS waypoint. Once the vehicle reaches this waypoint, it enters an open course where it is required to navigate to another GPS waypoint while avoiding obstacles. After reaching the final waypoint, the vehicle is required to traverse another obstacle course before completing the run. Q uses modular parallel software architecture in which image processing, navigation, and sensor control algorithms run concurrently. A tuned navigation algorithm allows Q to smoothly maneuver through obstacle fields. For the 2014 competition, most revisions occurred in the vision system, which detects white lines and informs the navigation component. Barrel obstacles of various colors presented a new challenge for image processing: the previous color plane extraction algorithm would not suffice. To overcome this difficulty, laser range sensor data were overlaid on visual data. Q also participates in the Joint Architecture for Unmanned Systems (JAUS) challenge at IGVC. For 2014, significant updates were implemented: the JAUS component accepted a greater variety of messages and showed better compliance to the JAUS technical standard. With these improvements, Q secured second place in the JAUS competition.
Remote observations of reentering spacecraft including the space shuttle orbiter
NASA Astrophysics Data System (ADS)
Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, Jay H.; Gibson, David M.
Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.
Remote Observations of Reentering Spacecraft Including the Space Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, jay H.; Gibson, David
2013-01-01
Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.
An Intelligent Surveillance Platform for Large Metropolitan Areas with Dense Sensor Deployment
Fernández, Jorge; Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M.; Carro, Belén; Sánchez-Esguevillas, Antonio; Alonso-López, Jesus A.; Smilansky, Zeev
2013-01-01
This paper presents an intelligent surveillance platform based on the usage of large numbers of inexpensive sensors designed and developed inside the European Eureka Celtic project HuSIMS. With the aim of maximizing the number of deployable units while keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is based on the usage of inexpensive visual sensors which apply efficient motion detection and tracking algorithms to transform the video signal into a set of motion parameters. In order to automate the analysis of the myriad of data streams generated by the visual sensors, the platform's control center includes an alarm detection engine which comprises three components applying three different Artificial Intelligence strategies in parallel. These strategies are generic, domain-independent approaches which are able to operate in several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The architecture is completed with a versatile communication network which facilitates data collection from the visual sensors and alarm and video stream distribution towards the emergency teams. The resulting surveillance system is extremely suitable for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage. PMID:23748169
Adaptive neural network/expert system that learns fault diagnosis for different structures
NASA Astrophysics Data System (ADS)
Simon, Solomon H.
1992-08-01
Corporations need better real-time monitoring and control systems to improve productivity by watching quality and increasing production flexibility. The innovative technology to achieve this goal is evolving in the form of artificial intelligence and neural networks applied to sensor processing, fusion, and interpretation. By using these advanced AI techniques, we can leverage existing systems and add value to conventional techniques. Neural networks and knowledge-based expert systems can be combined into intelligent sensor systems which provide real-time monitoring, control, evaluation, and fault diagnosis for production systems. Neural network-based intelligent sensor systems are more reliable because they can provide continuous, non-destructive monitoring and inspection. Use of neural networks can result in sensor fusion and the ability to model highly non-linear systems. Improved models can provide a foundation for more accurate performance parameters and predictions. We discuss a research software/hardware prototype which integrates neural networks, expert systems, and sensor technologies and which can adapt across a variety of structures to perform fault diagnosis. The flexibility and adaptability of the prototype in learning two structures is presented. Potential applications are discussed.
Using multiple sensors for printed circuit board insertion
NASA Technical Reports Server (NTRS)
Sood, Deepak; Repko, Michael C.; Kelley, Robert B.
1989-01-01
As more and more activities are performed in space, there will be a greater demand placed on the information handling capacity of people who are to direct and accomplish these tasks. A promising alternative to full-time human involvement is the use of semi-autonomous, intelligent robot systems. To automate tasks such as assembly, disassembly, repair and maintenance, the issues presented by environmental uncertainties need to be addressed. These uncertainties are introduced by variations in the computed position of the robot at different locations in its work envelope, variations in part positioning, and tolerances of part dimensions. As a result, the robot system may not be able to accomplish the desired task without the help of sensor feedback. Measurements on the environment allow real time corrections to be made to the process. A design and implementation of an intelligent robot system which inserts printed circuit boards into a card cage are presented. Intelligent behavior is accomplished by coupling the task execution sequence with information derived from three different sensors: an overhead three-dimensional vision system, a fingertip infrared sensor, and a six degree of freedom wrist-mounted force/torque sensor.
Study on robot motion control for intelligent welding processes based on the laser tracking sensor
NASA Astrophysics Data System (ADS)
Zhang, Bin; Wang, Qian; Tang, Chen; Wang, Ju
2017-06-01
A robot motion control method is presented for intelligent welding processes of complex spatial free-form curve seams based on the laser tracking sensor. First, calculate the tip position of the welding torch according to the velocity of the torch and the seam trajectory detected by the sensor. Then, search the optimal pose of the torch under constraints using genetic algorithms. As a result, the intersection point of the weld seam and the laser plane of the sensor is within the detectable range of the sensor. Meanwhile, the angle between the axis of the welding torch and the tangent of the weld seam meets the requirements. The feasibility of the control method is proved by simulation.
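The pose-selection step described above can be sketched as a constrained search: candidate torch orientations are penalized when the seam/laser-plane intersection would leave the sensor's detectable window or when the torch-to-tangent angle deviates from its requirement. In the sketch below a plain random search stands in for the genetic algorithm, and the geometry, limits, and target angle are illustrative assumptions.

```python
# Sketch of the pose-selection step described above. Candidate torch poses are
# scored against two constraints from the abstract: the seam/laser-plane
# intersection must stay within the sensor's detectable window, and the angle
# between the torch axis and the seam tangent must stay near a required value.
# A plain random search stands in for the genetic algorithm; all numbers are
# illustrative assumptions.
import random

random.seed(0)

TARGET_ANGLE_DEG = 90.0           # required torch axis vs. seam tangent
SENSOR_WINDOW_MM = (20.0, 60.0)   # look-ahead range where the laser line is usable

def cost(work_angle_deg: float, lookahead_mm: float) -> float:
    # Deviation from the required torch-to-tangent angle.
    angle_error = abs(work_angle_deg - TARGET_ANGLE_DEG)
    lo, hi = SENSOR_WINDOW_MM
    # Hard penalty if the laser stripe would fall outside the sensor window.
    window_penalty = 0.0 if lo <= lookahead_mm <= hi else 100.0
    return angle_error + window_penalty

best = min(
    ((random.uniform(60, 120), random.uniform(10, 80)) for _ in range(500)),
    key=lambda c: cost(*c),
)
print(f"work angle {best[0]:.1f} deg, look-ahead {best[1]:.1f} mm")
```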
Ferreira, Pedro M.; Gomes, João M.; Martins, Igor A. C.; Ruano, António E.
2012-01-01
Accurate measurements of global solar radiation and atmospheric temperature, as well as the availability of the predictions of their evolution over time, are important for different areas of applications, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, light-weight and portable sensor was developed, using artificial neural network models as the time-series predictor mechanisms. These have been identified with the aid of a procedure based on the multi-objective genetic algorithm. As cloudiness is the most significant factor affecting the solar radiation reaching a particular location on the Earth surface, it has great impact on the performance of predictive solar radiation models for that location. This work also represents one step towards the improvement of such models by using ground-to-sky hemispherical colour digital images as a means to estimate cloudiness by the fraction of visible sky corresponding to clouds and to clear sky. The implementation of predictive models in the prototype has been validated and the system is able to function reliably, providing measurements and four-hour forecasts of cloudiness, solar radiation and air temperature. PMID:23202230
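One common way to estimate cloudiness from a ground-to-sky RGB image is a red-to-blue ratio threshold (clear sky is strongly blue, cloud is nearly grey); the sketch below assumes that approach, which may differ from the authors' method, and uses an illustrative threshold and synthetic image.

```python
# Cloud-fraction estimate from a hemispherical RGB sky image via a red/blue
# ratio threshold. This is an assumed, commonly used heuristic, not
# necessarily the authors' method; threshold and masking are illustrative.
import numpy as np

def cloud_fraction(rgb: np.ndarray, sky_mask: np.ndarray,
                   threshold: float = 0.7) -> float:
    """rgb: HxWx3 floats in [0, 1]; sky_mask: HxW bool, True inside the
    hemispherical field of view. Returns the fraction of sky pixels classed
    as cloud."""
    red = rgb[..., 0].astype(float)
    blue = rgb[..., 2].astype(float) + 1e-6
    cloudy = (red / blue > threshold) & sky_mask
    return cloudy.sum() / max(sky_mask.sum(), 1)

# Tiny synthetic example: left half clear sky (bluish), right half cloud (grey).
img = np.zeros((4, 4, 3))
img[:, :2] = [0.2, 0.4, 0.9]   # clear sky
img[:, 2:] = [0.8, 0.8, 0.8]   # cloud
print(cloud_fraction(img, sky_mask=np.ones((4, 4), dtype=bool)))  # -> 0.5
```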
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor tasking and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies this approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of SOs in space is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data; DRL methods have, for example, been applied to image processing for the autonomous car application, where a 256x256 RGB image has 196608 values (256*256*3=196608), which is very high dimensional, and deep learning approaches routinely take images like this as inputs. Therefore, when applied to the whole catalog the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to, for the first time, solve the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system
NASA Astrophysics Data System (ADS)
Schilling, Hendrik; Lenz, Andreas; Gross, Wolfgang; Perpeet, Dominik; Wuttke, Sebastian; Middelmann, Wolfgang
2013-10-01
Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks such as classification and detecting objects of military relevance, such as camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real-time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixel across track, leading to a ground sampling distance below 1 m at typical altitudes. The push broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.
Improved chemical identification from sensor arrays using intelligent algorithms
NASA Astrophysics Data System (ADS)
Roppel, Thaddeus A.; Wilson, Denise M.
2001-02-01
Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
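The "best-match" fill-in mentioned above can be sketched as follows: when adjunct diagnostics flag a channel as faulted, its value is replaced from the library response whose healthy channels are closest to the current reading. The response library and fault flags below are illustrative.

```python
# Sketch of a "best-match" fill-in for faulted sensor channels: the faulted
# value is replaced from the library pattern whose healthy-channel responses
# are closest to the current reading. Library values and fault flags are
# illustrative only.
import numpy as np

# Library of previously recorded array responses (rows: patterns, cols: sensors).
library = np.array([
    [0.8, 0.2, 0.5, 0.9],   # e.g. response to analyte A
    [0.1, 0.7, 0.6, 0.3],   # response to analyte B
    [0.4, 0.4, 0.9, 0.1],   # response to analyte C
])

def fill_faulted(reading: np.ndarray, faulted: np.ndarray) -> np.ndarray:
    """Replace faulted channels using the best-matching library pattern."""
    healthy = ~faulted
    # Distance to each library pattern, computed over healthy channels only.
    dists = np.linalg.norm(library[:, healthy] - reading[healthy], axis=1)
    best = library[np.argmin(dists)]
    repaired = reading.copy()
    repaired[faulted] = best[faulted]
    return repaired

reading = np.array([0.82, 0.19, 0.0, 0.88])        # channel 2 stuck low
faulted = np.array([False, False, True, False])
print(fill_faulted(reading, faulted))               # channel 2 -> 0.5 (analyte A)
```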
Multi Sensor Fusion Using Fitness Adaptive Differential Evolution
NASA Astrophysics Data System (ADS)
Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam
The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results for the optimal allocation of sensors. The performance of the proposed approach is compared with an evolutionary algorithm coordination generalized particle model (C-GPM).
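A minimal differential-evolution loop in the spirit of FiADE is sketched below, with the mutation scale factor adapted from each target vector's relative fitness; the specific adaptation rule and test function are simplifying assumptions, not the published FiADE formulas.

```python
# Minimal differential-evolution loop with a fitness-adaptive scale factor,
# in the spirit of FiADE. The adaptation rule and test function here are
# simplifying assumptions, not the published FiADE formulas.
import numpy as np

rng = np.random.default_rng(2)

def sphere(x: np.ndarray) -> float:          # toy objective to minimize
    return float(np.sum(x**2))

dim, pop_size, generations, cr = 5, 30, 200, 0.9
pop = rng.uniform(-5, 5, size=(pop_size, dim))
fitness = np.array([sphere(p) for p in pop])

for _ in range(generations):
    f_min, f_max = fitness.min(), fitness.max()
    for i in range(pop_size):
        # Fitness-adaptive scale factor: worse individuals explore more.
        rel = (fitness[i] - f_min) / (f_max - f_min + 1e-12)
        F = 0.3 + 0.5 * rel
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < cr
        cross[rng.integers(dim)] = True       # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if (f_trial := sphere(trial)) <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial

print(f"best objective after DE: {fitness.min():.2e}")
```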
AGSM Intelligent Devices/Smart Sensors Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
This project provides development and qualification of Smart Sensors capable of self-diagnosis and assessment of their capability/readiness to support operations. These sensors will provide pressure and temperature measurements to use in ground systems.
Multi-Source Sensor Fusion for Small Unmanned Aircraft Systems Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Cook, Brandon; Cohen, Kelly
2017-01-01
As the applications for using small Unmanned Aircraft Systems (sUAS) beyond visual line of sight (BVLOS) continue to grow in the coming years, it is imperative that intelligent sensor fusion techniques be explored. In BVLOS scenarios the vehicle position must be accurately tracked over time to ensure that no two vehicles collide with one another, that no vehicle crashes into surrounding structures, and to identify off-nominal scenarios. Therefore, in this study an intelligent systems approach is used to estimate the position of sUAS given a variety of sensor platforms, including GPS, radar, and on-board detection hardware. Common research challenges include asynchronous sensor rates and sensor reliability. To address these challenges, techniques such as Maximum a Posteriori estimation and a fuzzy-logic-based sensor confidence determination are used.
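A toy illustration of confidence-weighted fusion in the spirit described above, with made-up fuzzy-style membership functions over measurement age and innovation; the paper's actual fuzzy rules and Maximum a Posteriori formulation are not reproduced here.

    import numpy as np

    def confidence(age_s, innovation_m):
        """Toy fuzzy-style confidence in [0, 1]: fresh, consistent
        measurements score high (illustrative membership functions only)."""
        fresh = max(0.0, 1.0 - age_s / 2.0)               # stale after ~2 s
        consistent = max(0.0, 1.0 - innovation_m / 50.0)  # 50 m gate
        return min(fresh, consistent)                      # fuzzy AND (min)

    def fuse(positions, ages, innovations):
        w = np.array([confidence(a, d) for a, d in zip(ages, innovations)])
        if w.sum() == 0:
            return None                                     # no trustworthy sensor
        return (w[:, None] * np.asarray(positions)).sum(axis=0) / w.sum()

    # GPS, radar, and on-board detection reports of the same sUAS (x, y in m).
    est = fuse([(100.0, 200.0), (103.0, 198.0), (110.0, 190.0)],
               ages=[0.1, 0.5, 1.8], innovations=[2.0, 4.0, 12.0])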
NASA Astrophysics Data System (ADS)
Thomas, Paul A.; Marshall, Gillian; Faulkner, David; Kent, Philip; Page, Scott; Islip, Simon; Oldfield, James; Breckon, Toby P.; Kundegorski, Mikolaj E.; Clark, David J.; Styles, Tim
2016-05-01
Currently, most land Intelligence, Surveillance and Reconnaissance (ISR) assets (e.g., EO/IR cameras) are simply data collectors. Understanding, decision making and sensor control are performed by the human operators, involving high cognitive load. Any automation in the system has traditionally involved bespoke design of centralised systems that are highly specific for the assets/targets/environment under consideration, resulting in complex, non-flexible systems that exhibit poor interoperability. We address a concept of Autonomous Sensor Modules (ASMs) for land ISR, where these modules have the ability to make low-level decisions on their own in order to fulfil a higher-level objective, and plug in, with the minimum of preconfiguration, to a High Level Decision Making Module (HLDMM) through a middleware integration layer. The dual requisites of autonomy and interoperability create challenges around information fusion and asset management in an autonomous hierarchical system, which are addressed in this work. This paper presents the results of a demonstration system, known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT), which was shown in realistic base protection scenarios with live sensors and targets. The SAPIENT system performed sensor cueing, intelligent fusion, sensor tasking, target hand-off and compensation for compromised sensors, without human control, and enabled rapid integration of ISR assets at the time of system deployment, rather than at design-time. Potential benefits include rapid interoperability for coalition operations, situation understanding with low operator cognitive burden, and autonomous sensor management in heterogeneous sensor systems.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its effective characteristics such as non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource constrained in terms of limited processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system that takes sensor node constraints into account is highly desirable. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any other scheme has done before. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
NASA Astrophysics Data System (ADS)
Kim, Gi Young
The problem we investigate deals with an Image Intelligence (IMINT) sensor allocation schedule for High Altitude Long Endurance UAVs in a dynamic, Anti-Access Area Denial (A2AD) environment. The objective is to maximize the Situational Awareness (SA) of decision makers. The value of SA can be improved in two different ways. First, if a sensor allocated to an Area of Interest (AOI) detects target activity, the SA value is increased. Second, the SA value increases if an AOI is monitored for a certain period of time, regardless of target detections. These values are functions of the sensor allocation time, sensor type and mode. Relatively few studies in the archival literature have been devoted to an analytic, detailed explanation of the target detection process and of AOI monitoring value dynamics. These two values are the fundamental criteria used to choose the most judicious sensor allocation schedule. This research presents mathematical expressions for target detection processes and shows the monitoring value dynamics. Furthermore, the dynamics of target detection are the result of combined processes between belligerent behavior (target activity) and friendly behavior (sensor allocation). We investigate these combined processes and derive mathematical expressions for simplified cases. These closed-form mathematical models can be used as Measures of Effectiveness (MOEs), i.e., measures of target activity detection, to evaluate sensor allocation schedules. We also verify these models with discrete event simulations, which can also be used to describe more complex systems. We introduce several methodologies to achieve a judicious sensor allocation schedule focusing on the AOI monitoring value. The first methodology is a discrete-time integer programming model, which provides an optimal solution but is impractical for real-world scenarios due to its computation time. Thus, it is necessary to trade off the quality of the solution against computation time. The Myopic Greedy Procedure (MGP) is a heuristic that chooses the largest immediate unit-time return at each decision epoch. This reduces computation time significantly, but the quality of the solution may be only 95% of optimal (for small-size problems). Another alternative is a multi-start random constructive Hybrid Myopic Greedy Procedure (H-MGP), which incorporates stochastic variation in choosing an action at each stage and repeats this a predetermined number of times (roughly 99.3% of optimal with 1000 repetitions). Finally, the One Stage Look Ahead (OSLA) procedure considers all the 'top choices' at each stage for a temporary time horizon and chooses the best action (roughly 98.8% of optimal with no repetition). Using the OSLA procedure, we can obtain improved solutions within a reasonable computation time. Other important issues discussed in this research are methodologies for the development of input parameters for real-world applications.
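A bare-bones sketch of the Myopic Greedy Procedure's decision rule, assuming a hypothetical unit-time value function in place of the dissertation's SA value model; the AOI names, priorities and toy value model below are illustrative only.

    def myopic_greedy_schedule(horizon, aois, sensors, unit_value):
        """At every decision epoch, allocate each sensor to the AOI with the
        largest immediate unit-time return (MGP skeleton; unit_value(sensor,
        aoi, t) stands in for the paper's SA value model)."""
        schedule = []
        for t in range(horizon):
            epoch = {s: max(aois, key=lambda a: unit_value(s, a, t)) for s in sensors}
            schedule.append(epoch)
        return schedule

    # Toy value model: static AOI priorities, with the EO sensor slightly
    # better suited to AOI-2 so the two sensors are not always co-located.
    priority = {"AOI-1": 3.0, "AOI-2": 1.5, "AOI-3": 2.0}
    def toy_value(sensor, aoi, t):
        return priority[aoi] + (2.0 if (sensor == "EO" and aoi == "AOI-2") else 0.0)

    plan = myopic_greedy_schedule(horizon=4, aois=list(priority),
                                  sensors=["EO", "SAR"], unit_value=toy_value)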
SCORPION II persistent surveillance system with universal gateway
NASA Astrophysics Data System (ADS)
Coster, Michael; Chambers, Jonathan; Brunck, Albert
2009-05-01
This paper addresses improvements and benefits derived from the next generation Northrop Grumman SCORPION II family of persistent surveillance and target recognition systems produced by the Xetron campus in Cincinnati, Ohio. SCORPION II reduces the size, weight, and cost of all SCORPION components in a flexible, field programmable system that is easier to conceal, backward compatible, and enables integration of over forty Unattended Ground Sensor (UGS) and camera types from a variety of manufacturers, with a modular approach to supporting multiple Line of Sight (LOS) and Beyond Line of Sight (BLOS) communications interfaces. Since 1998 Northrop Grumman has been integrating best in class sensors with its proven universal modular Gateway to provide encrypted data exfiltration to Common Operational Picture (COP) systems and remote sensor command and control. In addition to being fed to COP systems, SCORPION and SCORPION II data can be directly processed using a common sensor status graphical user interface (GUI) that allows for viewing and analysis of images and sensor data from up to seven hundred SCORPION system Gateways on single or multiple displays. This GUI enables a large amount of sensor data and imagery to be used for actionable intelligence as well as remote sensor command and control by a minimum number of analysts.
Integrated intelligent sensor for the textile industry
NASA Astrophysics Data System (ADS)
Peltie, Philippe; David, Dominique
1996-08-01
A new sensor has been developed for pantyhose inspection. Unlike a first complete inspection machine devoted to post-manufacturing control of the whole panty, this sensor will be directly integrated on currently existing manufacturing machines. It combines the advantages of miniaturization with the aim of designing an intelligent, compact and very inexpensive product that can be integrated without requiring any modification of the host machines. The sensor part was designed for close-range acquisition, and various solutions have been explored to maintain adequate depth of field. The illumination source will be integrated in the device. The processing part will include correction facilities and electronic processing. Finally, high-level information will be output in order to interface directly with the manufacturing machine's automation controller.
Vehicle following controller design for autonomous intelligent vehicles
NASA Technical Reports Server (NTRS)
Chien, C. C.; Lai, M. C.; Mayr, R.
1994-01-01
A new vehicle following controller is proposed for autonomous intelligent vehicles. The proposed vehicle following controller not only provides smooth transient maneuvers for unavoidable nonzero initial conditions but also guarantees the asymptotic platoon stability without the availability of feedforward information. Furthermore, the achieved asymptotic platoon stability is shown to be robust to sensor delays and an upper bound for the allowable sensor delays is also provided in this paper.
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
Data storage and its growth have become a strategic task in the world of networking and are the subject of a high-profile debate. Storage mainly depends on the sensor nodes called producers, on base stations, and on the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is utilized, with the clustering problem solved using the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182
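The paper's hybrid particle swarm optimization and fuzzy C-means clustering details are not reproduced here; the sketch below is only a plain PSO minimizing a data-rate-weighted transmission-distance cost for a single storage node, with positions, rates and all PSO constants made up for illustration.

    import numpy as np

    def pso_storage_position(producers, consumers, rates, iters=200, n=30, seed=1):
        """Plain PSO minimizing a data-rate-weighted distance cost for one
        storage node (sketch; the paper's hybrid PSO adds further steps)."""
        rng = np.random.default_rng(seed)
        pts = np.vstack([producers, consumers])
        w = np.asarray(rates, dtype=float)
        cost = lambda p: float(np.sum(w * np.linalg.norm(pts - p, axis=1)))
        x = rng.uniform(pts.min(0), pts.max(0), size=(n, 2))
        v = np.zeros_like(x)
        pbest, pbest_c = x.copy(), np.array([cost(p) for p in x])
        g = pbest[pbest_c.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n, 1))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = x + v
            c = np.array([cost(p) for p in x])
            better = c < pbest_c
            pbest[better], pbest_c[better] = x[better], c[better]
            g = pbest[pbest_c.argmin()].copy()
        return g, pbest_c.min()

    pos, c = pso_storage_position(producers=[(0, 0), (10, 2)],
                                  consumers=[(5, 8)], rates=[4, 4, 1])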
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
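As a hedged illustration of the integration step described above (not the spatially averaging weightings studied in the paper), the following assumes simple pure-bending kinematics, with surface strain equal to (h/2) times curvature and clamped-end boundary conditions, and integrates point strain readings twice with the trapezoidal rule; the beam dimensions and strain values are made up.

    import numpy as np

    def tip_displacement(x, strain, half_thickness):
        """Estimate cantilever tip deflection from point strain measurements by
        integrating curvature twice (trapezoidal rule, w(0) = w'(0) = 0).
        Assumes surface strain = (h/2) * curvature, i.e., pure bending."""
        curvature = np.asarray(strain) / half_thickness
        dx = np.diff(x)
        slope = np.concatenate(([0.0], np.cumsum(0.5 * dx * (curvature[1:] + curvature[:-1]))))
        defl = np.concatenate(([0.0], np.cumsum(0.5 * dx * (slope[1:] + slope[:-1]))))
        return defl[-1]

    # Five point sensors along a 0.5 m beam, 2 mm half-thickness (illustrative).
    x = np.linspace(0.0, 0.5, 5)
    strain = np.array([800e-6, 600e-6, 400e-6, 200e-6, 0.0])  # linear, as for a tip load
    print(tip_displacement(x, strain, half_thickness=2e-3))   # ~0.033 m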
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Kurtkaya, Mehmet; Duyar, Ahmet
1994-01-01
This paper presents an application of a fault detection and diagnosis scheme for the sensor faults of a helicopter engine. The scheme utilizes a model-based approach with real time identification and hypothesis testing which can provide early detection, isolation, and diagnosis of failures. It is an integral part of a proposed intelligent control system with health monitoring capabilities. The intelligent control system will allow for accommodation of faults, reduce maintenance cost, and increase system availability. The scheme compares the measured outputs of the engine with the expected outputs of an engine whose sensor suite is functioning normally. If the differences between the real and expected outputs exceed threshold values, a fault is detected. The isolation of sensor failures is accomplished through a fault parameter isolation technique where parameters which model the faulty process are calculated on-line with a real-time multivariable parameter estimation algorithm. The fault parameters and their patterns can then be analyzed for diagnostic and accommodation purposes. The scheme is applied to the detection and diagnosis of sensor faults of a T700 turboshaft engine. Sensor failures are induced in a T700 nonlinear performance simulation and data obtained are used with the scheme to detect, isolate, and estimate the magnitude of the faults.
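A minimal sketch of the detection step only, i.e., thresholding the residuals between measured and model-expected outputs (the isolation step, which relies on on-line multivariable parameter estimation, is not reproduced); the channel names, expected values and thresholds below are illustrative, not T700 data.

    def detect_sensor_faults(measured, expected, thresholds):
        """Flag channels whose residual (measured - expected) exceeds its
        threshold; expected values would come from the engine model."""
        faults = {}
        for ch, y in measured.items():
            r = y - expected[ch]
            if abs(r) > thresholds[ch]:
                faults[ch] = r
        return faults

    # Illustrative engine-style channels (names and numbers are made up).
    measured = {"Ng": 0.97, "Np": 1.00, "T4.5": 815.0, "Q": 0.62}
    expected = {"Ng": 0.96, "Np": 1.00, "T4.5": 780.0, "Q": 0.61}
    thresholds = {"Ng": 0.02, "Np": 0.02, "T4.5": 20.0, "Q": 0.05}
    print(detect_sensor_faults(measured, expected, thresholds))  # -> {'T4.5': 35.0}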
A Universal Intelligent System-on-Chip Based Sensor Interface
Mattoli, Virgilio; Mondini, Alessio; Mazzolai, Barbara; Ferri, Gabriele; Dario, Paolo
2010-01-01
The need for real-time/reliable/low-maintenance distributed monitoring systems, e.g., wireless sensor networks, has been becoming more and more evident in many applications in the environmental, agro-alimentary, medical, and industrial fields. The growing interest in technologies related to sensors is an important indicator of these new needs. The design and the realization of complex and/or distributed monitoring systems is often difficult due to the multitude of different electronic interfaces presented by the sensors available on the market. To address these issues the authors propose the concept of a Universal Intelligent Sensor Interface (UISI), a new low-cost system based on a single commercial chip able to convert a generic transducer into an intelligent sensor with multiple standardized interfaces. The device presented offers a flexible analog and/or digital front-end, able to interface different transducer typologies (such as conditioned, unconditioned, resistive, current output, capacitive and digital transducers). The device also provides enhanced processing and storage capabilities, as well as a configurable multi-standard output interface (including plug-and-play interface based on IEEE 1451.3). In this work the general concept of UISI and the design of reconfigurable hardware are presented, together with experimental test results validating the proposed device. PMID:22163624
Hall, Travis; Lie, Donald Y. C.; Nguyen, Tam Q.; Mayeda, Jill C.; Lie, Paul E.; Lopez, Jerry; Banister, Ron E.
2017-01-01
It has been the dream of many scientists and engineers to realize a non-contact remote sensing system that can perform continuous, accurate and long-term monitoring of human vital signs, as we have seen in many Sci-Fi movies. An intelligent sensor system that can measure and record key vital signs (such as heart rate and respiration rate) remotely and continuously without touching the patients can, for example, be an invaluable tool for physicians who need to make rapid life-and-death decisions. Such a sensor system can also effectively help physicians and patients make better informed decisions when patients' long-term vital signs data are available. Therefore, there have been many research activities on developing a non-contact sensor system that can monitor a patient's vital signs and quickly transmit the information to healthcare professionals. Doppler-based radio-frequency (RF) non-contact vital signs (NCVS) monitoring systems are particularly attractive for long-term vital signs monitoring because no wires, electrodes, wearable devices, or contact-based sensors are involved, so the subjects may not even be aware of the ubiquitous monitoring. In this paper, we provide a brief review of some of the latest developments in NCVS sensors and compare them against a few novel and intelligent phased-array Doppler-based RF NCVS biosensors we have built in our labs. Some of our NCVS sensor tests were performed within a clutter-free anechoic chamber to mitigate environmental clutter, while most tests were conducted within a typical Herman-Miller-type office cubicle setting to mimic a more practical monitoring environment. Additionally, we show measurement data to demonstrate the feasibility of long-term NCVS monitoring. The measured data strongly suggest that our latest phased-array NCVS system should be able to perform long-term vital signs monitoring intelligently and robustly, especially for situations where the subject is sleeping without hectic movements nearby. PMID:29140281
Advanced Ground Systems Maintenance Intelligent Devices/Smart Sensors Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2015-01-01
This project provides development and qualification of Smart Sensors capable of self-diagnosis and assessment of their capability/readiness to support operations. These sensors will provide pressure and temperature measurements for use in ground systems.
Self-powered Real-time Movement Monitoring Sensor Using Triboelectric Nanogenerator Technology.
Jin, Liangmin; Tao, Juan; Bao, Rongrong; Sun, Li; Pan, Caofeng
2017-09-05
The triboelectric nanogenerator (TENG) has great potential in the field of self-powered sensor fabrication. Recently, smart electronic devices and movement monitoring sensors have attracted the attention of scientists because of their applications in the field of artificial intelligence. In this article, a TENG-based self-powered sensor for finger movement monitoring has been designed and analysed. Under finger movements, the TENG undergoes contact and separation, converting the mechanical energy into an electrical signal. A pulsed output current of 7.8 μA is generated by the bending and straightening motions of the artificial finger. The optimal output power is obtained when the external resistance is approximately 30 MΩ. Random motions of the finger are detected by a system with multiple TENG sensors in series. This type of flexible, self-powered sensor has potential applications in artificial intelligence and robot manufacturing.
A heuristic for deriving the optimal number and placement of reconnaissance sensors
NASA Astrophysics Data System (ADS)
Nanda, S.; Weeks, J.; Archer, M.
2008-04-01
A key to mastering asymmetric warfare is the acquisition of accurate intelligence on adversaries and their assets in urban and open battlefields. To achieve this, one needs adequate numbers of tactical sensors placed in locations that optimize coverage, where optimality means either covering a given area of interest with the fewest sensors, or covering the largest possible subsection of an area of interest with a fixed set of sensors. Unfortunately, neither problem is known to admit a polynomial-time algorithm, and therefore the placement of such sensors must rely on intelligent heuristics instead. In this paper, we present a scheme implemented on parallel SIMD processing architectures that yields significantly faster results and is highly scalable with respect to dynamic changes in the area of interest. Furthermore, the solution to the first problem immediately translates to a solution to the latter if and when any sensors are rendered inoperable.
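As an illustration of the kind of intelligent heuristic referred to above (not the SIMD-parallel scheme of the paper), the following is a classic greedy max-coverage sketch for the fixed-budget variant; the candidate locations and coverage sets are toy placeholders.

    def greedy_placement(candidates, coverage, budget):
        """Repeatedly pick the candidate location that covers the most
        not-yet-covered cells (classic greedy max-coverage heuristic).
        coverage maps a candidate to the set of grid cells it can see."""
        covered, chosen = set(), []
        for _ in range(budget):
            best = max(candidates, key=lambda c: len(coverage[c] - covered))
            if not coverage[best] - covered:
                break                      # nothing new can be covered
            chosen.append(best)
            covered |= coverage[best]
        return chosen, covered

    # Toy example: 3 candidate rooftops over a 1-D strip of 8 cells.
    coverage = {"A": {0, 1, 2, 3}, "B": {2, 3, 4, 5}, "C": {5, 6, 7}}
    print(greedy_placement(list(coverage), coverage, budget=2))  # A and C cover 7 cells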
A survey on bio inspired meta heuristic based clustering protocols for wireless sensor networks
NASA Astrophysics Data System (ADS)
Datta, A.; Nandakumar, S.
2017-11-01
Recent studies have shown that utilizing a mobile sink to harvest and carry data from a Wireless Sensor Network (WSN) can improve network operational efficiency as well as maintain uniform energy consumption by the sensor nodes in the network. Due to sink mobility, the path between two sensor nodes continuously changes; this has a profound effect on the operational longevity of the network, and a need arises for a protocol that utilizes minimal resources in maintaining routes between the mobile sink and the sensor nodes. Swarm-intelligence-based techniques inspired by the foraging behavior of ants, termites and honey bees can be artificially simulated and utilized to solve real wireless network problems. The authors present a brief survey of various bio-inspired, swarm-intelligence-based protocols used in routing data in wireless sensor networks, outlining their general principle and operation.
Pan, Leilei; Yang, Simon X
2007-12-01
This paper introduces a new portable intelligent electronic nose system developed especially for measuring and analysing livestock and poultry farm odours. It can be used both in the laboratory and in the field. The sensor array of the proposed electronic nose consists of 14 gas sensors, a humidity sensor, and a temperature sensor. The gas sensors were specifically selected for the main compounds of livestock farm odours. An expert system called "Odour Expert" was developed to support researchers' and farmers' decision making on odour control strategies for livestock and poultry operations. "Odour Expert" utilises several advanced artificial intelligence technologies tailored to livestock and poultry farm odours. It can provide more advanced odour analysis than existing commercially available products. In addition, a ranking of odour generation factors is provided, which refines the focus of odour control research. Field experiments were conducted downwind from the barns on 14 livestock and poultry farms. Experimental results show that the odour strengths predicted by the electronic nose are more consistent than the odour intensities perceived by a human panel. "Odour Expert" is a useful tool for assisting farmers' odour management practices.
New hydrologic instrumentation in the U.S. Geological Survey
Latkovich, V.J.; Shope, W.G.; ,
1991-01-01
New water-level sensing and recording instrumentation is being used by the U.S. Geological Survey for monitoring water levels, stream velocities, and water-quality characteristics. Several of these instruments are briefly described. The Basic Data Recorder (BDR) is an electronic data logger that interfaces to sensor systems through a serial-digital interface standard (SDI-12), which was proposed by the data-logger industry; the Incremental Shaft Encoder is an intelligent water-level sensor, which interfaces to the BDR through the SDI-12; the Pressure Sensor is an intelligent, nonsubmersible pressure sensor, which interfaces to the BDR through the SDI-12 and monitors water levels from 0 to 50 feet; the Ultrasonic Velocity Meter is an intelligent water-velocity sensor, which interfaces to the BDR through the SDI-12 and measures the velocity across a stream up to 500 feet in width; the Collapsible Hand Sampler can be collapsed for insertion through holes in the ice and opened under the ice to collect a water sample; the Lightweight Ice Auger, weighing only 32 pounds, can auger 6- and 8-inch holes through approximately 3.5 feet of ice; and the Ice Chisel has a specially hardened steel blade and a 6-foot-long hickory D-handle.
Assessment of COTS IR image simulation tools for ATR development
NASA Astrophysics Data System (ADS)
Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal
2005-05-01
Following the tendency of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence, e.g., ATR, for aiding them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase the mission efficiency. The EADS ATR approach can be adapted for imagery of visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground-truth information, i.e., scenario conditions like class type and position of targets, is necessary for the optimal adaptation of the ATR method. In the summer of 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) from Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation by simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. After that, the most promising tool was benchmarked according to several criteria, e.g., thermal emission model, sensor model, targets model, non-radiometric image features, etc., resulting in a recommendation. The synthetic image data used for the investigation were generated using the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground truth, and recommendations are given for the use of COTS IR image simulation tools for ATR development.
Research of home energy management system based on technology of PLC and ZigBee
NASA Astrophysics Data System (ADS)
Wei, Qi; Shen, Jiaojiao
2015-12-01
To address the problem of effective energy saving and energy management in the home, this paper designs an intelligent home energy control system based on power line carrier communication and a wireless ZigBee sensor network. The system is built around an ARM controller, with power line carrier communication and a wireless ZigBee sensor network as the terminal communication modes, and realizes centralized, intelligent control of home appliances. Through the combination of these two technologies, their advantages complement each other, providing a feasible plan for the construction of an energy-efficient, intelligent home energy management system.
Introduction to the Special Issue on "State-of-the-Art Sensor Technology in Japan 2015".
Tokumitsu, Masahiro; Ishida, Yoshiteru
2016-08-23
This Special Issue, "State-of-the-Art Sensor Technology in Japan 2015", collected papers on different kinds of sensing technology: fundamental technology for intelligent sensors, information processing for monitoring humans, and information processing for adaptive and survivable sensor systems.[...].
A Comparison of Empirical and Intelligent Methods for Dust Detection Using MODIS Satellite Data
NASA Astrophysics Data System (ADS)
Shahrisvand, M.; Akhoondzadeh, M.
2013-09-01
Nowadays, dust storms are one of the most important natural hazards and are considered a national concern in scientific communities. This paper considers the capabilities of some classical and intelligent methods for dust detection from satellite imagery around the Middle East region. For dust detection, MODIS images are a good candidate due to their suitable spectral and temporal resolution. In this study, physically based and intelligent methods, including decision trees, ANN (Artificial Neural Network) and SVM (Support Vector Machine), have been applied to detect dust storms. Among the mentioned approaches, the SVM method has been implemented for the first time in the domain of dust detection studies. Finally, AOD (Aerosol Optical Depth) images, which are one of the reference standard products of the OMI (Ozone Monitoring Instrument) sensor, have been used to assess the accuracy of all the implemented methods. Since the SVM method can distinguish dust storms over land and ocean simultaneously, its accuracy is better than that of the other applied approaches. In conclusion, this paper shows that SVM can be a powerful tool for the production of dust images with remarkable accuracy in comparison with the AOT (Aerosol Optical Thickness) product of NASA.
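For readers unfamiliar with the SVM step, the following is a minimal sketch of training an RBF-kernel SVM on two per-pixel features; the feature choices, values and labels are synthetic placeholders and not the MODIS bands or training data used in the paper (scikit-learn is assumed to be available).

    import numpy as np
    from sklearn.svm import SVC

    # Each row: illustrative per-pixel features (e.g., a thermal brightness-
    # temperature difference and a visible/NIR reflectance ratio).
    # Labels: 1 = dust, 0 = clear. Values are synthetic placeholders.
    X = np.array([[-2.1, 0.9], [-1.8, 0.8], [-0.2, 0.4],
                  [ 0.5, 0.3], [ 0.8, 0.2], [-2.5, 1.0]])
    y = np.array([1, 1, 0, 0, 0, 1])

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
    print(clf.predict([[-2.0, 0.85], [0.6, 0.25]]))   # -> [1 0]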
Robust Hidden Markov Model based intelligent blood vessel detection of fundus images.
Hassan, Mehdi; Amin, Muhammad; Murtza, Iqbal; Khan, Asifullah; Chaudhry, Asmatullah
2017-11-01
In this paper, we consider the challenging problem of detecting retinal vessel networks. Precise detection of retinal vessel networks is vital for accurate eye disease diagnosis. Most blood vessel tracking techniques may not properly track vessels in the presence of occlusion. Owing to limitations in sensor resolution or in the acquisition of fundus images, part of a vessel may be occluded. In this scenario, it becomes a challenging task to accurately trace these vital vessels. For this purpose, we have proposed a new robust and intelligent retinal vessel detection technique based on a Hidden Markov Model. The proposed model is able to successfully track vessels in the presence of occlusion. The effectiveness of the proposed technique is evaluated on the publicly available standard DRIVE dataset of fundus images. The experiments show that the proposed technique not only outperforms other state-of-the-art retinal blood vessel segmentation methodologies, but is also capable of accurate occlusion handling in retinal vessel networks. The proposed technique offers better average classification accuracy, sensitivity, specificity, and area under the curve (AUC) of 95.7%, 81.0%, 97.0%, and 90.0% respectively, which shows its usefulness. Copyright © 2017 Elsevier B.V. All rights reserved.
Intelligence Fusion Modeling. A Proposed Approach.
1983-09-16
...based techniques developed by artificial intelligence researchers. This paper describes the application of these techniques in the modeling of an ... intelligence requirements, although the methods presented are applicable. We treat PIR/IR as given. ... Items from the PIR/IR/HVT decomposition are received from the CMDS. Formatted tactical intelligence reports are received from sensors of like types.
Bluetooth-based distributed measurement system
NASA Astrophysics Data System (ADS)
Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng
2007-07-01
A novel distributed wireless measurement system, which consists of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instruments, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are done so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed with a measurement flow chart for the distributed measurement system based on Bluetooth technology. The advantages and disadvantages of the system are analyzed at the end of the paper. The measurement system has successfully been used in the Daqing oilfield, China, for the measurement of parameters such as temperature, flow rate and oil pressure at an electromotor-pump unit.
NASA Astrophysics Data System (ADS)
Rahman, Husna Abdul; Harun, Sulaiman Wadi; Arof, Hamzah; Irawati, Ninik; Musirin, Ismail; Ibrahim, Fatimah; Ahmad, Harith
2014-05-01
An enhanced dental cavity diameter measurement mechanism using an intensity-modulated fiber optic displacement sensor (FODS) scanning and imaging system, fuzzy logic as well as a single-layer perceptron (SLP) neural network, is presented. The SLP network was employed for the classification of the reflected signals, which were obtained from the surfaces of teeth samples and captured using FODS. Two features were used for the classification of the reflected signals with one of them being the output of a fuzzy logic. The test results showed that the combined fuzzy logic and SLP network methodology contributed to a 100% classification accuracy of the network. The high-classification accuracy significantly demonstrates the suitability of the proposed features and classification using SLP networks for classifying the reflected signals from teeth surfaces, enabling the sensor to accurately measure small diameters of tooth cavity of up to 0.6 mm. The method remains simple enough to allow its easy integration in existing dental restoration support systems.
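A minimal sketch of a single-layer perceptron of the kind described above, trained with the classic perceptron rule on two features, one of which stands in for the fuzzy-logic output; the feature values, labels and constants are illustrative, not the paper's data.

    import numpy as np

    def train_slp(X, y, lr=0.1, epochs=50):
        """Single-layer perceptron with a bias term and step activation."""
        Xb = np.hstack([X, np.ones((len(X), 1))])       # append bias input
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                pred = 1 if xi @ w > 0 else 0
                w += lr * (yi - pred) * xi               # perceptron rule
        return w

    def predict(w, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return (Xb @ w > 0).astype(int)

    # Feature 1: normalized reflected intensity; feature 2: fuzzy-logic score.
    X = np.array([[0.2, 0.1], [0.3, 0.2], [0.7, 0.8], [0.9, 0.9]])
    y = np.array([0, 0, 1, 1])                           # 0 = sound, 1 = cavity
    w = train_slp(X, y)
    print(predict(w, X))                                 # -> [0 0 1 1]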
Improving CAR Navigation with a Vision-Based System
NASA Astrophysics Data System (ADS)
Kim, H.; Choi, K.; Lee, I.
2015-08-01
The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems or autonomous vehicles. Since current systems mostly depend on a GPS and map-matching technique, they show poor and unreliable performance in areas where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on sensor fusion of a GPS and in-vehicle sensors. We employ a single photo resection process to derive the position and attitude of the camera and thus those of the car. The image georeferencing results are combined with other sensory data under a sensor fusion framework for more accurate estimation of the positions using an extended Kalman filter. The proposed system estimated the positions with an accuracy of 15 m even though GPS signals were not available at all during the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet highly accurate and reliable navigation systems required for intelligent or autonomous vehicles.
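The paper uses an extended Kalman filter; the sketch below is only a linear, constant-velocity Kalman filter showing how a vision-derived (resection) position can bridge a GPS outage, with all matrices, noise levels and measurements chosen for illustration.

    import numpy as np

    dt = 1.0
    F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
    H = np.hstack([np.eye(2), np.zeros((2, 2))])      # we observe position only
    Q = 0.5 * np.eye(4)                               # process noise (illustrative)
    R_gps, R_vis = 25.0 * np.eye(2), 100.0 * np.eye(2)

    x, P = np.zeros(4), 100.0 * np.eye(4)

    def kf_step(x, P, z, R):
        # Predict with the constant-velocity model, then correct with z.
        x, P = F @ x, F @ P @ F.T + Q
        if z is not None:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P

    # GPS fix, then a GPS outage bridged by the vision-based position.
    x, P = kf_step(x, P, np.array([10.0, 5.0]), R_gps)
    x, P = kf_step(x, P, np.array([11.2, 5.4]), R_vis)
    print(x[:2])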
Human grasping database for activities of daily living with depth, color and kinematic data streams.
Saudabayev, Artur; Rysbek, Zhanibek; Khassenova, Raykhan; Varol, Huseyin Atakan
2018-05-29
This paper presents a grasping database collected from multiple human subjects for activities of daily living in unstructured environments. The main strength of this database is the use of three different sensing modalities: color images from a head-mounted action camera, distance data from a depth sensor on the dominant arm and upper body kinematic data acquired from an inertial motion capture suit. 3826 grasps were identified in the data collected during 9-hours of experiments. The grasps were grouped according to a hierarchical taxonomy into 35 different grasp types. The database contains information related to each grasp and associated sensor data acquired from the three sensor modalities. We also provide our data annotation software written in Matlab as an open-source tool. The size of the database is 172 GB. We believe this database can be used as a stepping stone to develop big data and machine learning techniques for grasping and manipulation with potential applications in rehabilitation robotics and intelligent automation.
NASA Astrophysics Data System (ADS)
Parhad, Ashutosh
Intelligent transportation systems use in-pavement inductive loop sensors to collect real-time traffic data. This method is very expensive in terms of installation and maintenance. Our research is focused on developing advanced algorithms capable of generating high amounts of energy that can charge a battery. This electromechanical energy conversion is an effective way of energy scavenging that makes use of piezoelectric sensors. The power generated is sufficient to run the vehicle detection module that has several sensors embedded together. To achieve these goals, we have developed a simulation module using software such as LabVIEW and Multisim. The simulation module recreates a practical scenario that takes into consideration vehicle weight, speed, wheel width and traffic frequency.
Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network.
Fernandez, Susel; Hadfi, Rafik; Ito, Takayuki; Marsa-Maestre, Ivan; Velasco, Juan R
2016-08-15
Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important information sources in these kinds of systems is sensors. Sensors can be within vehicles or part of the infrastructure, such as bridges, roads or traffic signs. Sensors can provide information related to weather conditions and the traffic situation, which is useful to improve the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. PMID:27537878
Intelligent fiber optic sensor for solution concentration examination
NASA Astrophysics Data System (ADS)
Borecki, Michal; Kruszewski, Jerzy
2003-09-01
This paper presents the working principles of an intelligent fiber-optic intensity sensor used for the examination of solution concentration. The sensor head is the end of a large-core polymer optical fiber. The head works on a reflected-intensity basis: when the head is submerged in a solution, the reflected signal level depends on Fresnel reflection and on reflection from suspended matter. The sensor head is mounted on a lift. For detection purposes, the signal is measured as the head submerges, remains submerged, emerges and has fully emerged. In this way the viscosity, turbidity and refractive index all affect the measured signal. The signal coming from the head is processed electrically in an opto-electronic interface and then fed to a neural network. The novelty of the presented sensor is the implementation of a neural network that works in generalization mode. The sensor resolution depends on the precision of the opto-electronic signal conversion and on the accuracy of the neural network learning. Therefore, the number and quality of points used for the learning process are very important. An example application of the sensor to the examination of liquid soap concentration in water is presented in the paper.
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Chen, Alexander Y. K.
1991-01-01
Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in a real-time, image-processing, vision-based capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning. The ARS currently has 18 degrees of freedom made up of two articulated arms, one movable robot head, two charge-coupled device (CCD) cameras for producing stereoscopic views, an articulated cylindrical-type lower body, and an optional mobile base. A functional prototype is demonstrated.
ERIC Educational Resources Information Center
Kanagarajan, Sujith; Ramakrishnan, Sivakumar
2018-01-01
For the past few years, the Ubiquitous Learning Environment (ULE) has been becoming a mobile, sensor-based, technology-equipped environment that suits the requirements of the modern-world education discipline. Ambient Intelligence (AmI) makes the ULE much smarter through the support of optimization and intelligent techniques. Various efforts have been so far…
Study on intelligent processing system of man-machine interactive garment frame model
NASA Astrophysics Data System (ADS)
Chen, Shuwang; Yin, Xiaowei; Chang, Ruijiang; Pan, Peiyun; Wang, Xuedi; Shi, Shuze; Wei, Zhongqian
2018-05-01
A man-machine interactive garment frame model intelligent processing system is studied in this paper. The system consists of several sensor devices, a voice processing module, mechanical parts and a data collection device. The sensor devices collect information on the environmental changes brought about by a body approaching the clothes frame model; the data collection device gathers the information sensed by the sensor devices; the voice processing module performs speaker-independent speech recognition to achieve human-machine interaction; and the mechanical moving parts produce the corresponding mechanical responses to the information processed by the data collection device. There is a one-way connection between the sensor devices and the data collection device, a two-way connection between the data collection device and the voice processing module, and a one-way connection between the data collection device and the mechanical moving parts. The intelligent processing system can judge whether it needs to interact with the customer, realizing man-machine interaction in place of the current rigid frame model.
NASA Astrophysics Data System (ADS)
Konstantinidis, A.; Anaxagoras, T.; Esposito, M.; Allinson, N.; Speller, R.
2012-03-01
X-ray diffraction studies are used to identify specific materials. Several laboratory-based x-ray diffraction studies have been made for breast cancer diagnosis. Ideally, a large-area, low-noise, linear and wide-dynamic-range digital x-ray detector is required to perform x-ray diffraction measurements. Recently, digital detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been used in x-ray diffraction studies. Two APS detectors, namely Vanilla and the Large Area Sensor (LAS), were developed by the Multidimensional Integrated Intelligent Imaging (MI-3) consortium to cover a range of scientific applications including x-ray diffraction. The MI-3 Plus consortium developed a novel large-area APS, named the Dynamically Adjustable Medical Imaging Technology (DynAMITe) sensor, to combine the key characteristics of Vanilla and LAS with a number of extra features. The active area (12.8 × 13.1 cm²) of DynAMITe enables angle-dispersive x-ray diffraction (ADXRD). The current study demonstrates the feasibility of using DynAMITe for breast cancer diagnosis by identifying six breast-equivalent plastics. Further work will be done to optimize the system in order to perform ADXRD for identification of suspicious areas of breast tissue following a conventional mammogram taken with the same sensor.
Appearance-based multimodal human tracking and identification for healthcare in the digital home.
Yang, Mau-Tsuen; Huang, Shen-Yen
2014-08-05
There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home's entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare. PMID:25098207
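A minimal sketch of the track-based majority voting used for identification, over hypothetical per-frame classifier outputs for one track; the vote threshold and labels are illustrative.

    from collections import Counter

    def track_identity(frame_labels, min_votes=3):
        """Assign a track the identity that wins the majority of per-frame
        votes; return None if no identity is observed often enough."""
        votes = Counter(l for l in frame_labels if l is not None)
        if not votes:
            return None
        label, count = votes.most_common(1)[0]
        return label if count >= min_votes else None

    # Per-frame outputs of the face/appearance/silhouette classifiers for one track.
    print(track_identity(["mom", "mom", None, "son", "mom", "mom"]))  # -> "mom"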
NASA Astrophysics Data System (ADS)
De Leon, Marlene M.; Estuar, Maria Regina E.; Lim, Hadrian Paulo; Victorino, John Noel C.; Co, Jerelyn; Saddi, Ivan Lester; Paelmo, Sharlene Mae; Dela Cruz, Bon Lemuel
2017-09-01
Environment- and agriculture-related applications have been gaining ground for the past several years and have been the context for research in ubiquitous and pervasive computing. This study is part of a bigger study that uses artificial intelligence to develop models to detect, monitor, and forecast the spread of Fusarium oxysporum cubense TR4 (FOC TR4) on Cavendish bananas cultivated in the Philippines. To implement an Intelligent Farming system, 1) wireless sensor nodes (WSNs) are deployed in Philippine banana plantations to collect soil parameter data considered to affect the health of Cavendish bananas, 2) a custom-built smartphone application is used for collecting, storing, and transmitting soil data, plant images and plant status data to cloud storage, and 3) a custom-built web application is used to load and display the results of physico-chemical analysis of the soil, the analysis of data models, and the geographic locations of the plants being monitored. This study discusses the issues, considerations, and solutions implemented in the development of an asynchronous communication channel to ensure that all data collected by the WSNs and smartphone applications are transmitted with a high degree of accuracy and reliability. From a design standpoint, standard API documentation on data-type usage is required to avoid inconsistencies in parameter passing. From a technical standpoint, there is a need to include error-handling mechanisms, especially for delays in data transmission, as well as a generalized method of parsing through multidimensional arrays of data. Strategies are presented in the paper.
Using generic tool kits to build intelligent systems
NASA Technical Reports Server (NTRS)
Miller, David J.
1994-01-01
The Intelligent Systems and Robots Center at Sandia National Laboratories is developing technologies for the automation of processes associated with environmental remediation and information-driven manufacturing. These technologies, which focus on automated planning and programming and sensor-based and model-based control, are used to build intelligent systems which are able to generate plans of action, program the necessary devices, and use sensors to react to changes in the environment. By automating tasks through the use of programmable devices tied to computer models which are augmented by sensing, requirements for faster, safer, and cheaper systems are being satisfied. However, because of the need for rapid, cost-effective prototyping and multi-laboratory teaming, it is also necessary to define a consistent approach to the construction of controllers for such systems. As a result, the Generic Intelligent System Controller (GISC) concept has been developed. This concept promotes the philosophy of producing generic tool kits which can be used and reused to build intelligent control systems.
Smart sensing surveillance system
NASA Astrophysics Data System (ADS)
Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen
2010-04-01
An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design element for raising awareness levels, improving the performance of the system, and adapting to changing scenarios and environments. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from nearly all directions. These distributed sensor nodes can work cooperatively to send immediate security information when new objects appear. When new objects are detected, the S4 smartly selects an available node with a Pan-Tilt-Zoom (PTZ) Electro-Optical/Infrared (EO/IR) camera to track the objects and capture associated imagery. The S4 provides advanced on-board digital image processing capabilities to detect and track the specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detect) wireless mesh network using Ultra-wideband (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The Service-Oriented Architecture of the S4 enables remote applications to interact with the S4 network and use specific presentation methods. In addition, the S4 is compliant with Open Geospatial Consortium Sensor Web Enablement (OGC-SWE) standards to efficiently discover, access, use, and control heterogeneous sensors and their metadata. These S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. The S4 system is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Intelligence Control System for Landfills Based on Wireless Sensor Network
NASA Astrophysics Data System (ADS)
Zhang, Qian; Huang, Chuan; Gong, Jian
2018-06-01
This paper puts forward an intelligent system for controlling landfill gas (LFG) so that it is exhausted actively and in a controlled manner. The system, organized as a wireless sensor network, is developed and supervised by remote applications in the control room rather than by manual work. An automatic valve controller, driven by the embedded sensor units, is installed in each tube; the air pressure and LFG concentration are measured to decide the level of the valve switch. The paper also proposes a modified algorithm to solve the transmission problem, so that the system maintains high efficiency and a long service life.
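A minimal sketch of the kind of threshold rule an embedded node could use to set the valve switch level from the measured pressure and gas concentration; the thresholds and levels below are assumptions for illustration, not values from the paper.

    def valve_level(pressure_kpa, ch4_percent, p_open=2.0, ch4_open=45.0):
        # Open the valve wider as gauge pressure or methane concentration rises.
        if pressure_kpa >= p_open or ch4_percent >= ch4_open:
            return "open"
        if pressure_kpa >= 0.5 * p_open or ch4_percent >= 0.5 * ch4_open:
            return "half-open"
        return "closed"

    print(valve_level(1.2, 30.0))  # -> "half-open"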
Extended Logic Intelligent Processing System for a Sensor Fusion Processor Hardware
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Thomas, Tyson; Li, Wei-Te; Daud, Taher; Fabunmi, James
2000-01-01
The paper presents the hardware implementation and initial tests of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) is described, which combines rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor signals in compact low power VLSI. The ELIPS concept is being developed to demonstrate interceptor functionality, which particularly underlines the high-speed and low-power requirements. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Processing speeds of microseconds have been demonstrated using our test hardware.
A Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors from airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query, and process such big data because of its data- and computing-intensive nature. In this paper, a Hadoop-based framework is proposed to manage and process the big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be fetched directly from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox, and MapReduce, these remote sensing images can be processed directly and in parallel in a scalable computing environment. The experiment results show that the proposed framework can efficiently manage and process such big remote sensing data.
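The tile-level parallelism described above can be sketched without Hadoop: split a scene into tiles, map an image operator over them in parallel, and reduce the per-tile results. The operator below is only a placeholder for an Orfeo-style processing step, and the local process pool merely stands in for MapReduce tasks over HDFS.

    from multiprocessing import Pool

    def process_tile(tile_path):
        # Placeholder for an image operator applied to one tile (e.g. an Orfeo
        # application run on a locally fetched block); returns a per-tile result.
        return tile_path, 0.0

    def process_scene(tile_paths, workers=4):
        # "Map" the operator over tiles in parallel, then "reduce" into one dict.
        with Pool(workers) as pool:
            return dict(pool.map(process_tile, tile_paths))

    if __name__ == "__main__":
        print(process_scene(["tile_000.tif", "tile_001.tif"]))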
Conception d'un capteur intelligent pour la détection des vapeurs de styrène dans l'industrie
NASA Astrophysics Data System (ADS)
Agbossou, Kodjo; Agbebavi, T. James; Koffi, Demagna; Elhiri, Mohammed
1994-10-01
The techniques of measurement of toxic gases are nowadays based on semiconductor-type sensors. The modelling and the electronic processing of their signals can be used to improve the accuracy and the efficiency of the measurement. In this paper, an intelligent system using a semiconductor sensor has been designed for the detection of styrene vapors. A set of environmental-parameter sensors, such as temperature, pressure, and humidity, is added to the basic sensor and allows precise detection of styrene vapors in air. A microcontroller and a communication interface, which are included in the control system and in the data processing system, provide the local intelligence. The linearization routines of the different sensors are stored in the memory of the microcontroller. The system made of the sensors, the amplification circuits, the microcontroller, and the communication network between the smart sensor and the computer is analysed. A laboratory test of the device is presented, and the accuracies and efficiencies of the different sensors are given. (French abstract, translated:) Reliable techniques for quantifying polluting gases are today based on the use of chemical-receptor detectors and on semiconductor sensors. Modelling and digital processing of the resulting signals are important for efficient and precise measurement in a given environment. In this article, an intelligent sensor using a semiconductor-type gas detector has been built for the detection of styrene vapors. A set of detectors of environmental parameters, such as temperature, pressure, and humidity, added to the styrene sensor, makes it possible to measure styrene vapors in air with good control. The local data control and management system consists of a microcontroller and a communication interface. The microcontroller holds in its memory all the linearization functions of the different sensors. This assembly of sensors, signal-conditioning circuits, microcontroller, and communication interface is called an "intelligent sensor". The communication network between the intelligent sensor and the microcomputer is analysed in terms of signal processing. A laboratory application example is presented, and the sensitivities and accuracies of the different sensors are given.
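A minimal sketch of the stored linearization routine: a small calibration table kept in the microcontroller's memory maps raw readings to concentration by piecewise-linear interpolation. The calibration points are invented for illustration and are not the paper's values.

    import bisect

    CAL_RAW = [100, 300, 600, 900]        # raw ADC counts (hypothetical)
    CAL_PPM = [0.0, 10.0, 35.0, 80.0]     # styrene concentration in ppm (hypothetical)

    def linearize(raw):
        # Clamp outside the table, otherwise interpolate between neighbouring points.
        if raw <= CAL_RAW[0]:
            return CAL_PPM[0]
        if raw >= CAL_RAW[-1]:
            return CAL_PPM[-1]
        i = bisect.bisect_right(CAL_RAW, raw)
        x0, x1 = CAL_RAW[i - 1], CAL_RAW[i]
        y0, y1 = CAL_PPM[i - 1], CAL_PPM[i]
        return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)

    print(linearize(450))  # -> 22.5 ppm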
Design of intelligent composites with life-cycle health management capabilities
NASA Astrophysics Data System (ADS)
Rosania, Colleen L.; Larrosa, Cecilia C.; Chang, Fu-Kuo
2015-03-01
Use of carbon fiber reinforced polymers (CFRPs) presents challenges because of their complex manufacturing processes and different damage mechanics compared with legacy metal materials. New monitoring methods for manufacturing, quality verification, damage estimation, and prognosis are needed to use CFRPs safely and efficiently. This work evaluates the development of intelligent composite materials using integrated piezoelectric sensors to monitor the material during cure and throughout service life. These sensors are used to propagate ultrasonic waves through the structure for health monitoring. During manufacturing, data is collected at different stages during the cure cycle, detecting the changing material properties during cure and verifying quality and degree of cure. The same sensors can then be used with previously developed techniques to perform damage detection, such as impact detection and matrix crack density estimation. Real-time damage estimation can be combined with prognostic models to predict future propagation of damage in the material. In this work, experimental results will be presented from composite coupons with embedded piezoelectric sensors. Cure monitoring and damage detection results derived from analysis of the ultrasonic sensor signal will be shown. Signal parameters sensitive to the different stimuli, in both the time and frequency domains, will be explored for this analysis. From these results, use of the same sensor networks from manufacturing throughout the life of the composite material will demonstrate the full life-cycle monitoring capability of these intelligent materials.
USAF Space Sensing Cryogenic Considerations
NASA Astrophysics Data System (ADS)
Roush, F.
2010-04-01
Infrared (IR) space sensing missions of the future depend upon low mass components and highly capable imaging technologies. Limitations in visible imaging due to the earth's shadow drive the use of IR surveillance methods for a wide variety of Intelligence, Surveillance, and Reconnaissance (ISR) and Ballistic Missile Defense (BMD) applications, and almost certainly for Space Situational Awareness (SSA) and Operationally Responsive Space (ORS) missions. Utilization of IR sensors greatly expands and improves mission capabilities including target and target behavioral discrimination. Background IR emissions and the electronic noise inherently present in Focal Plane Arrays (FPAs) and surveillance optical-bench designs prevent their use unless they are cooled to cryogenic temperatures. This paper describes the role of cryogenic coolers as an enabling technology for generic ISR and BMD missions and provides ISR and BMD mission and requirement planners with a brief glimpse of this critical technology implementation potential. The interaction between cryogenic refrigeration component performance and the IR sensor optics and FPA can be seen as not only mission enabling but also as mission performance enhancing when the refrigeration system is considered as part of an overall optimization problem.
Utilization of extended bayesian networks in decision making under uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Eeckhout, Edward M; Leishman, Deborah A; Gibson, William L
2009-01-01
A Bayesian network tool (called IKE, for Integrated Knowledge Engine) has been developed to assess the probability of undesirable events. The tool allows indications and observables from sensors and/or intelligence to feed directly into hypotheses of interest, thus allowing one to quantify the probability and uncertainty of these events resulting from very disparate evidence. For example, the probability that a facility is processing nuclear fuel or assembling a weapon can be assessed by examining the processes required, establishing the observables that should be present, then assembling information from intelligence, sensors and other information sources related to the observables. IKE also has the capability to determine tasking plans, that is, prioritize which observable should be collected next to most quickly ascertain the 'true' state and drive the probability toward 'zero' or 'one.' This optimization capability is called 'evidence marshaling.' One example to be discussed is a denied facility monitoring situation; there is concern that certain process(es) are being executed at the site (due to some intelligence or other data). We will show how additional pieces of evidence will then ascertain with some degree of certainty the likelihood of this process(es) as each piece of evidence is obtained. This example shows how both intelligence and sensor data can be incorporated into the analysis. A second example involves real-time perimeter security. For this demonstration we used seismic, acoustic, and optical sensors linked back to IKE. We show how these sensors identified and assessed the likelihood of 'intruder' versus friendly vehicles.
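A single-node sketch of the evidence accumulation described above: one Bayes update per observable for a binary hypothesis. A tool such as IKE propagates evidence through a full network; the priors and likelihoods here are invented purely to show how each observation moves the probability toward zero or one.

    def update(prior, likelihood_if_true, likelihood_if_false):
        # Posterior probability of the hypothesis after one piece of evidence.
        num = likelihood_if_true * prior
        return num / (num + likelihood_if_false * (1.0 - prior))

    p = 0.2                      # prior belief that the process is underway
    p = update(p, 0.8, 0.1)      # a sensor observable consistent with the process
    p = update(p, 0.7, 0.3)      # a corroborating intelligence report
    print(round(p, 3))           # ~0.824: belief rises as evidence accumulates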
Wireless Sensor Network Based Subsurface Contaminant Plume Monitoring
2012-04-16
Sensor Network (WSN) to monitor contaminant plume movement in naturally heterogeneous subsurface formations to advance the sensor networking based...time to assess the source and predict future plume behavior. This proof-of-concept research aimed at demonstrating the use of an intelligent Wireless
NASA Astrophysics Data System (ADS)
Hoefflinger, Bernd
Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
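As a rough illustration of the log-converting, eye-like pixel response mentioned above, the sketch below compresses several decades of luminance into a bounded output with constant contrast sensitivity; the luminance limits are assumptions for illustration, not parameters of any particular sensor.

    import math

    def log_pixel_response(luminance, l_min=1e-2, l_max=1e4):
        # Normalised logarithmic response: equal luminance ratios give equal output steps.
        l = min(max(luminance, l_min), l_max)
        return (math.log10(l) - math.log10(l_min)) / (math.log10(l_max) - math.log10(l_min))

    # A 10x luminance step produces the same output increment anywhere in the range.
    print(round(log_pixel_response(10.0) - log_pixel_response(1.0), 3))
    print(round(log_pixel_response(1000.0) - log_pixel_response(100.0), 3))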
Simulation of Smart Home Activity Datasets
Synnott, Jonathan; Nugent, Chris; Jeffers, Paul
2015-01-01
A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation. PMID:26087371
NASA Astrophysics Data System (ADS)
Imaki, Masaharu; Kameyama, Shumpei; Ishimura, Eitaro; Nakaji, Masaharu; Yoshinaga, Hideo; Hirano, Yoshihito
2017-03-01
We developed a line-scanning time-of-flight (TOF) laser sensor for an intelligent transport system (ITS), which combines wide field-of-view (FOV) receiving optics of 30 deg and a high-speed microelectromechanical system (MEMS) scanner of 0.9 ms/line in a simple sensor configuration. The newly developed high-aspect-ratio photodiode realizes a scanless, wide-FOV receiver. The sinusoidal-wave intensity modulation method is used for the TOF measurement. This enables noise reduction in the trans-impedance amplifier by applying the LC-resonant method. Vehicle detection and axle counting, which are important functions in ITS, are also demonstrated.
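The range in a sinusoidal intensity-modulation TOF scheme follows from the measured phase shift as d = c·Δφ/(4π·f_mod); the sketch below applies this relation with an assumed 10 MHz modulation frequency, which is not a figure taken from the paper.

    import math

    C = 299_792_458.0  # speed of light in m/s

    def tof_range(phase_shift_rad, f_mod_hz=10e6):
        # Distance corresponding to the phase shift of the returned modulation envelope.
        return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

    print(round(tof_range(math.pi / 2), 2))  # quarter-cycle shift at 10 MHz -> ~3.75 m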
Bluetooth-based sensor networks for remotely monitoring the physiological signals of a patient.
Zhang, Ying; Xiao, Hannan
2009-11-01
Integrating intelligent medical microsensors into a wireless communication network makes it possible to remotely collect physiological signals of a patient, release the patient from being tethered to monitoring instrumentation, and facilitate the patient's early hospital discharge. This can further improve quality of life by providing continuous observation without disrupting the patient's normal life, significantly reducing the risk of infection and decreasing costs for both the hospital and the patient. This paper discusses the implementation issues and describes the overall system architecture of our Bluetooth sensor network for patient monitoring and the corresponding heart-activity sensors. It also presents our approach to developing the intelligent physiological sensor nodes, involving the integration of Bluetooth radio technology, hardware and software organization, and our solutions for onboard signal processing.
1990-12-01
data rate to the electronics would be much lower on the average and the data much "richer" in information. Intelligent use of...system bottleneck, a high data rate should be provided by I/O systems. 2. machines with intelligent storage management specially designed for logic...management information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics, and propagation, and electronic reliability/maintainability and compatibility.
NASA Technical Reports Server (NTRS)
Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry
1989-01-01
This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events, and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system. The artificial intelligence approach is useful because it provides (1) the use of human experts' knowledge of sensor behavior and faulty engine conditions in interpreting data; (2) the use of engine design knowledge and physical sensor locations in establishing relationships among the events of multiple sensors; (3) the use of stored analysis of past data of faulty engine conditions; and (4) the use of knowledge-based reasoning in distinguishing sensor failure from actual faults. The neural network approach appears promising because neural nets (1) can be trained on extremely noisy data and produce classifications which are more robust under noisy conditions than other classification techniques; (2) avoid the necessity of noise removal by digital filtering and therefore avoid the need to make assumptions about frequency bands or other signal characteristics of anomalous behavior; (3) can, in effect, generate their own feature detectors based on the characteristics of the sensor data used in training; and (4) are inherently parallel and therefore are potentially implementable in special-purpose parallel hardware.
Monovision techniques for telerobots
NASA Technical Reports Server (NTRS)
Goode, P. W.; Carnils, K.
1987-01-01
The primary task of the vision sensor in a telerobotic system is to provide information about the position of the system's effector relative to objects of interest in its environment. The subtasks required to perform the primary task include image segmentation, object recognition, and object location and orientation in some coordinate system. The accomplishment of the vision task requires the appropriate processing tools and the system methodology to effectively apply the tools to the subtasks. The functional structure of the telerobotic vision system used in the Langley Research Center's Intelligent Systems Research Laboratory is discussed as well as two monovision techniques for accomplishing the vision subtasks.
Integrated piezoelectric actuators in deep drawing tools
NASA Astrophysics Data System (ADS)
Neugebauer, R.; Mainda, P.; Drossel, W.-G.; Kerschner, M.; Wolf, K.
2011-04-01
The production of car body panels suffers from process fluctuations; a produced panel can therefore be within tolerance or defective. To reduce the error rate, an intelligent deep drawing tool was developed at the Fraunhofer Institute for Machine Tools and Forming Technology IWU in cooperation with Audi and Volkswagen. Mechatronic components in a closed-loop control are the main differentiating factor between an intelligent and a conventional deep drawing tool. In combination with sensors for process monitoring, the intelligent tool uses piezoelectric actuators to actuate the deep drawing process. By enabling the use of sensors and actuators at the die, the forming tool is transformed into a smart structure. The interface between sensors and actuators is realized with a closed-loop control. This research presents experimental results with the piezoelectric actuators. For the analysis, a production-oriented forming tool meeting all automotive requirements was used. The actuators employed are monolithic multilayer actuators from the piezo injector system. In order to achieve the required force, the actuators are combined in a cluster. The cluster is redundant and economical. In addition to the detailed assembly structures, this research highlights intensive analysis with the intelligent deep drawing tool.
Trust metrics in information fusion
NASA Astrophysics Data System (ADS)
Blasch, Erik
2014-05-01
Trust is an important concept for machine intelligence and is not consistent across many applications. In this paper, we seek to understand trust from a variety of factors: humans, sensors, communications, intelligence processing algorithms and human-machine displays of information. In modeling the various aspects of trust, we provide an example from machine intelligence that supports the various attributes of measuring trust such as sensor accuracy, communication timeliness, machine processing confidence, and display throughput to convey the various attributes that support user acceptance of machine intelligence results. The example used is fusing video and text whereby an analyst needs trust information in the identified imagery track. We use the proportional conflict redistribution rule as an information fusion technique that handles conflicting data from trusted and mistrusted sources. The discussion of the many forms of trust explored in the paper seeks to provide a systems-level design perspective for information fusion trust quantification.
Nature-Inspired Acoustic Sensor Projects
1999-08-24
m). The pager motors are worn on the wrists. ... Yago (Yale Autonomous Go-Cart) is used for autonomous vehicle navigation ... a proximity sensor determined the presence of close-by objects missed by the sonars. Yago operated autonomously by avoiding obstacles. Problems being
Intelligent optical fiber sensor system for measurement of gas concentration
NASA Astrophysics Data System (ADS)
Pan, Jingming; Yin, Zongmin
1991-08-01
A measuring, controlling, and alarming system for the concentration of a gas or transparent liquid is described. In this system, a Fabry-Perot etalon with an optical fiber is used as the sensor, a charge-coupled device (CCD) is used as the photoelectric converter, and a single-chip microcomputer (8031) along with an interface circuit is used to measure the interference ring signal. The system has such features as real-time and on-line operation, continuous dynamic handling, and intelligent control.
Intelligent Integrated System Health Management
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2012-01-01
Intelligent Integrated System Health Management (ISHM) is the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system (Management: storage, distribution, sharing, maintenance, processing, reasoning, and presentation). Presentation discusses: (1) ISHM Capability Development. (1a) ISHM Knowledge Model. (1b) Standards for ISHM Implementation. (1c) ISHM Domain Models (ISHM-DM's). (1d) Intelligent Sensors and Components. (2) ISHM in Systems Design, Engineering, and Integration. (3) Intelligent Control for ISHM-Enabled Systems
Some recent advances of intelligent health monitoring systems for civil infrastructures in HIT
NASA Astrophysics Data System (ADS)
Ou, Jinping
2005-06-01
Intelligent health monitoring systems are increasingly becoming a technique for ensuring the health and safety of civil infrastructures, and an important approach for studying the damage accumulation and even disaster-evolution characteristics of civil infrastructures; they attract strong research and development interest from scientists and engineers, since a great number of civil infrastructures are planned and built each year in mainland China. This paper reviews some recent advances in the research, development, and implementation of intelligent health monitoring systems for civil infrastructures in mainland China, especially at the Harbin Institute of Technology (HIT), P.R. China. The main contents include smart sensors such as optical fiber Bragg grating (OFBG) and polyvinylidene fluoride (PVDF) sensors, fatigue life gauges, self-sensing mortar and carbon fiber reinforced polymer (CFRP), wireless sensor networks, and their implementation in practical infrastructures such as offshore platform structures, hydraulic engineering structures, large-span bridges, and large-space structures. Finally, related research projects supported by the national foundation agencies of China are briefly introduced.
Optical fiber strain sensor for application in intelligent intruder detection systems
NASA Astrophysics Data System (ADS)
Stańczyk, Tomasz; Tenderenda, Tadeusz; Szostkiewicz, Lukasz; Bienkowska, Beata; Kunicki, Daniel; Murawski, Michal; Mergo, Pawel; Nasilowski, Tomasz
2017-10-01
Today's technology makes it possible to create highly effective Intruder Detection Systems (IDS) that are able to detect the presence of an intruder within a defined area. In such systems the best performance can be achieved by combining different detection techniques in one system. One group of devices that can be applied in an IDS are devices based on Fiber Optic Sensors (FOS). FOS benefit from the numerous advantages of optical fibers, such as small size, light weight, and high sensitivity. In this work we present a novel Microstructured Optical Fiber (MOF) characterized by increased strain sensitivity, dedicated to distributed acoustic sensing for intelligent intruder detection systems. By designing the MOF with large air holes in close proximity to the fiber core, we increased the effective refractive index sensitivity to longitudinal strain. The presented fiber can be easily integrated into a floor system in order to detect any movement in the investigated area. We believe that sensors based on the presented MOF, due to its numerous advantages, can find application in intelligent IDS.
Catalogue Creation for Space Situational Awareness with Optical Sensors
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.; Bessell, T.; Rutten, M.; Gordon, N.; Moretti, N.; Morreale, B.
2016-09-01
In order to safeguard the continued use of space-based technologies, effective monitoring and tracking of man-made resident space objects (RSOs) is paramount. The diverse characteristics, behaviours and trajectories of RSOs make space surveillance a challenging application of the discipline that is tracking and surveillance. When surveillance systems are faced with non-canonical scenarios, it is common for human operators to intervene while researchers adapt and extend traditional tracking techniques in search of a solution. A complementary strategy for improving the robustness of space surveillance systems is to place greater emphasis on the anticipation of uncertainty. Namely, give the system the intelligence necessary to autonomously react to unforeseen events and to intelligently and appropriately act on tenuous information rather than discard it. In this paper we build from our 2015 campaign and describe the progression of a low-cost intelligent space surveillance system capable of autonomously cataloguing and maintaining track of RSOs. It currently exploits robotic electro-optical sensors, high-fidelity state-estimation and propagation as well as constrained initial orbit determination (IOD) to intelligently and adaptively manage its sensors in order to maintain an accurate catalogue of RSOs. In a step towards fully autonomous cataloguing, the system has been tasked with maintaining surveillance of a portion of the geosynchronous (GEO) belt. Using a combination of survey and track-refinement modes, the system is capable of maintaining a track of known RSOs and initiating tracks on previously unknown objects. Uniquely, due to the use of high-fidelity representations of a target's state uncertainty, as few as two images of previously unknown RSOs may be used to subsequently initiate autonomous search and reacquisition. To achieve this capability, particularly within the congested environment of the GEO-belt, we use a constrained admissible region (CAR) to generate a plausible estimate of the unknown RSO's state probability density function and disambiguate measurements using a particle-based joint probability data association (JPDA) method. Additionally, the use of alternative CAR generation methods, incorporating catalogue-based priors, is explored and tested. We also present the findings of two field trials of an experimental system that incorporates these techniques. The results demonstrate that such a system is capable of autonomously searching for an RSO that was briefly observed days prior in a GEO-survey and discriminating it from the measurements of other previously catalogued RSOs.
A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge
2016-07-29
Science Foundation (NSF), Department of Defense (DOD), National Institute of Standards and Technology (NIST), Intelligence Community (IC) Introduction...multiple Federal agencies: • Intelligent big data sensors that act autonomously and are programmable via the network for increased flexibility, and... intelligence for scientific discovery enabled by rapid extreme-scale data analysis, capable of understanding and making sense of results and thereby
The United States Army Functional Concept for Intelligence, 2016-2028
2010-10-13
Intelligence improvement strategies historically addressed the changing operational environment by creating sensors and analytical systems designed to locate...hierarchical centrally- directed combat formations and predict their actions in high-intensity conflict. These strategies assumed that intelligence...4) U.S. operations can be derailed over time through a strategy of exhaustion. (5) U.S. forces distributed over wide areas can be
Sensory grammars for sensor networks
Aloimonos, Yiannis
2009-01-01
One of the major goals of Ambient Intelligence and Smart Environments is to interpret human activity sensed by a variety of sensors. In order to develop useful technologies and a subsequent industry around smart environments, we need to proceed in a principled manner. This paper suggests that human activity can be expressed in a language. This is a special language with its own phonemes, its own morphemes (words) and its own syntax and it can be learned using machine learning techniques applied to gargantuan amounts of data collected by sensor networks. Developing such languages will create bridges between Ambient Intelligence and other disciplines. It will also provide a hierarchical structure that can lead to a successful industry. PMID:21897837
Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.
Herrero, David; Martínez, Humberto
2011-01-01
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated by radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. Besides, the proposed approach is compared with a probabilistic technique showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
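A minimal sketch of the two ingredients described above: inverting a log-distance path-loss model to get a rough range from RSSI, and grading that range with a fuzzy membership function. The reference RSSI at 1 m, the path-loss exponent, and the membership breakpoints are illustrative assumptions, not values from the paper.

    def rssi_to_distance(rssi_dbm, rssi_1m=-45.0, n=2.5):
        # Invert the log-distance path-loss model (very noisy in practice).
        return 10 ** ((rssi_1m - rssi_dbm) / (10.0 * n))

    def triangular(x, a, b, c):
        # Triangular fuzzy membership grade of x in a set such as "near".
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    d = rssi_to_distance(-60.0)
    print(round(d, 2), round(triangular(d, 0.0, 2.0, 6.0), 2))  # range and its "near" grade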
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In previous work by the author, effective persistent and pervasive sensing for recognition and tracking of battlefield targets was shown to be achieved, using intelligent algorithms implemented by distributed mobile agents over a composite system of unmanned aerial vehicles (UAVs) for persistence and a wireless network of unattended ground sensors for pervasive coverage of the mission environment. While simulated performance results for the supervised algorithms of the composite system are shown to provide satisfactory target recognition over relatively brief periods of system operation, this performance can degrade by as much as 50% as target dynamics in the environment evolve beyond the period of system operation in which the training data are representative. To overcome this limitation, this paper applies the distributed approach using mobile agents to the network of ground-based wireless sensors alone, without the UAV subsystem, to provide persistent as well as pervasive sensing for target recognition and tracking. The supervised algorithms used in the earlier work are supplanted by unsupervised routines, including competitive-learning neural networks (CLNNs) and new versions of support vector machines (SVMs) for characterization of an unknown target environment. To capture the same physical phenomena from battlefield targets as the composite system, the suite of ground-based sensors can be expanded to include imaging and video capabilities. The spatial density of deployed sensor nodes is increased to allow more precise ground-based location and tracking of detected targets by active nodes. The "swarm" mobile agents enabling WSN intelligence are organized in three processing stages: detection, recognition and sustained tracking of ground targets. Features formed from the compressed sensor data are down-selected according to an information-theoretic algorithm that reduces redundancy within the feature set, reducing the dimension of samples used in the target recognition and tracking routines. Target tracking is based on simplified versions of Kalman filtration. Accuracy of recognition and tracking of implemented versions of the proposed suite of unsupervised algorithms is somewhat degraded from the ideal. Target recognition and tracking by supervised routines and by unsupervised SVM and CLNN routines in the ground-based WSN is evaluated in simulations using published system values and sensor data from vehicular targets in ground-surveillance scenarios. Results are compared with previously published performance for the system of the ground-based sensor network (GSN) and UAV swarm.
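The feature down-selection step described above can be sketched with a simple redundancy filter: keep a feature only if it is not highly correlated with the features already kept. A correlation threshold stands in here for the paper's information-theoretic criterion, so this is an illustrative simplification rather than the implemented algorithm.

    import numpy as np

    def downselect(features, max_corr=0.9):
        # features: array of shape (n_samples, n_features); returns kept column indices.
        corr = np.abs(np.corrcoef(features, rowvar=False))
        kept = []
        for j in range(features.shape[1]):
            if all(corr[j, k] < max_corr for k in kept):
                kept.append(j)
        return kept

    X = np.random.default_rng(0).normal(size=(200, 5))
    X = np.column_stack([X, X[:, 0] * 2.0 + 0.01])   # add a redundant copy of column 0
    print(downselect(X))                             # the redundant column is dropped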
Latest Sensors and Data Acquisition Development Efforts at KSC
NASA Technical Reports Server (NTRS)
Perotti, Jose M.
2002-01-01
This viewgraph presentation summarizes the characteristics required on sensors by consumers desiring access to space, a long term plan developed at KSC (Kennedy Space Center) to identify promising technologies for NASA's own future sensor needs, and the characteristics of several smart sensors already developed. Also addressed are the computer hardware and architecture used to operate sensors, and generic testing capabilities. Consumers desire sensors which are lightweight, inexpensive, intelligent, and easy to use.
Multipurpose active pixel sensor (APS)-based microtracker
NASA Astrophysics Data System (ADS)
Eisenman, Allan R.; Liebe, Carl C.; Zhu, David Q.
1998-12-01
A new, photon-sensitive, imaging array, the active pixel sensor (APS) has emerged as a competitor to the CCD imager for use in star and target trackers. The Jet Propulsion Laboratory (JPL) has undertaken a program to develop a new generation, highly integrated, APS-based, multipurpose tracker: the Programmable Intelligent Microtracker (PIM). The supporting hardware used in the PIM has been carefully selected to enhance the inherent advantages of the APS. Adequate computation power is included to perform star identification, star tracking, attitude determination, space docking, feature tracking, descent imaging for landing control, and target tracking capabilities. Its first version uses a JPL developed 256 X 256-pixel APS and an advanced 32-bit RISC microcontroller. By taking advantage of the unique features of the APS/microcontroller combination, the microtracker will achieve about an order-of-magnitude reduction in mass and power consumption compared to present state-of-the-art star trackers. It will also add the advantage of programmability to enable it to perform a variety of star, other celestial body, and target tracking tasks. The PIM is already proving the usefulness of its design concept for space applications. It is demonstrating the effectiveness of taking such an integrated approach in building a new generation of high performance, general purpose, tracking instruments to be applied to a large variety of future space missions.
High-efficient Unmanned Aircraft System Operations for Ecosystem Assessment
NASA Astrophysics Data System (ADS)
Xu, H.; Zhang, H.
2016-02-01
Diverse national and international agencies support the idea that incorporating Unmanned Aircraft Systems (UAS) into ecosystem assessment will improve operational efficiency and accuracy. In this paper, a UAS will be designed to monitor the Gulf of Mexico's coastal ecosystems intelligently and routinely. UAS onboard sensors will capture information that can be utilized to detect and geo-locate areas affected by invasive grasses. Moreover, the practical ecosystem will be better assessed by analyzing the collected information. Compared with human-based or satellite-based surveillance, the proposed strategy is more efficient and accurate, and eliminates limitations and risks associated with human factors. State-of-the-art UAS onboard sensors (e.g., high-resolution electro-optical camera, night vision camera, thermal sensor, etc.) will be used for monitoring coastal ecosystems. Once a potential ecosystem risk is detected, the onboard GPS data will be used to geo-locate and store the exact coordinates of the affected area. Moreover, the UAS sensors will be used to observe and record the daily evolution of coastal ecosystems. Further, benefiting from the data collected by the UAS, an intelligent big data processing scheme will be created to assess the ecosystem evolution effectively. Meanwhile, a cost-efficient intelligent autonomous navigation strategy will be implemented in the UAS, in order to guarantee that the UAS can fly over designated areas and collect significant data in a safe and effective way. Furthermore, the proposed UAS-based ecosystem surveillance and assessment methodologies can be utilized for natural resource conservation. Flying a UAS with multiple state-of-the-art sensors will allow the actual state of high-importance natural resources to be monitored and reported frequently. Using the collected data, the ecosystem conservation strategy can be performed effectively and intelligently.
NASA Astrophysics Data System (ADS)
Helt, Paul J.; DuBois, Dennis
2012-06-01
The scope and success of Department of Defense (DoD), Intelligence Community (IC), and other United States Government (USG) Unattended Sensor efforts clearly requires a focused, expanded effort at integration and best practices development. Discussions with key stakeholders indicate strong support for the standup of an Unattended Sensors Community of Interest (USCOI) to improve visibility across programs and foster greater mission partnerships. The USCOI will advance understanding of legal and privacy issues, data sharing, Intelligence Surveillance and Reconnaissance (ISR) architecture integration standards, technical specifications, and other issues of common interest (power management, concealment, etc.); promoting opportunities for cost avoidance and improved effectiveness. Launching the USCOI concept provides a stakeholders' forum to identify, discuss, and facilitate Doctrine, Organization, Training, Material, Leadership, Personnel and Facilities (DOTMLPF) resolutions to issues related to the acquisition, development, and employment of unattended sensors across the DoD, IC, and broader USG, as appropriate. USCOI also provides a structured forum where appropriately cleared law enforcement can share operational and technical experiences and expertise on integration of unattended sensor data in existing/planned collection and analysis architectures. The USCOI concept can help facilitate extensive independent discussion across the community and enhance subject matter expert and leadership perspectives. Many senior leaders believe DoD and the IC have a great deal to learn from law enforcement use of unattended sensors and data integration. The USCOI will broaden awareness and open dialogue among users and developers across the interagency, law enforcement, and academia.
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques; the processed results are converted into voltage levels through a MAX232 level converter and passed serially to the microcontroller unit, while an ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. C-language code is predefined in the microcontroller unit, and the robot connected to it is controlled according to this code. The robot's several motors are activated through motor drivers, which are simply switches that turn each motor on or off according to the control given by the microcontroller unit.
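A minimal sketch of one step of the mode-switched control loop described above: in automatic mode the ultrasonic range decides whether the chair stops or keeps following, while in manual mode the keypad command is passed straight to the motor drivers. The command names and the stop distance are illustrative assumptions, not the project's firmware.

    def control_step(mode, keypad_cmd, distance_cm, stop_cm=40):
        # Returns the command forwarded to the motor drivers for this cycle.
        if mode == "automatic":
            return "stop" if distance_cm < stop_cm else "follow"
        return keypad_cmd  # manual mode: e.g. "forward", "left", "right", "stop"

    print(control_step("automatic", None, 25))   # obstacle close -> "stop"
    print(control_step("manual", "left", 120))   # keypad drives the chair -> "left"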
ULTRA-LOW POWER CO2 SENSOR FOR INTELLIGENT BUILDING CONTROL - PHASE I
The proposed EPA SBIR Phase I program will create a novel ultra-low power and low-cost microfabricated CO2 sensor. The initial developments of sensor technology will serve the very large Demand Controlled Ventilation market that has been identified by KWJ and its...
Applying Sensor Networks to Evaluate Air Pollutant Emissions from Fugitive and Area Sources
This is a presentation to be given at Duke University's Wireless Intelligent Sensor Network workshop on June 5, 2013. The presentation discusses the evaluation of a low cost carbon monoxide sensor network applied at a recent forest fire study and also evaluated against a referen...
Research of infrared laser based pavement imaging and crack detection
NASA Astrophysics Data System (ADS)
Hong, Hanyu; Wang, Shu; Zhang, Xiuhua; Jing, Genqiang
2013-08-01
In practical applications, road crack detection is seriously affected by many factors, such as shadows, road signs, oil stains, and high-frequency noise. Because of these factors, current crack detection methods cannot distinguish cracks in complex scenes. In order to solve this problem, a novel method based on infrared laser pavement imaging is proposed. First, a single-sensor laser pavement imaging system is adopted to obtain pavement images; a high-power laser line projector is used to suppress various shadows. Second, a crack extraction algorithm that intelligently merges multiple features is proposed to extract crack information. In this step, the non-negative feature and the contrast feature are used to extract the basic crack information, and circular projection based on a linearity feature is applied to enhance the crack area and eliminate noise. A series of experiments has been performed to test the proposed method, showing that the proposed automatic extraction method is effective and advanced.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
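A minimal PyTorch sketch of a convolutional classifier for shadow/non-shadow image patches, in the spirit of the method described above; the layer sizes, patch size, and two-class head are assumptions for illustration and do not reproduce the paper's architecture.

    import torch
    import torch.nn as nn

    class ShadowNet(nn.Module):
        # Two conv/pool stages followed by a linear head over 32 x 32 RGB patches.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, 2)  # shadow vs. non-shadow logits

        def forward(self, x):            # x: (N, 3, 32, 32)
            return self.classifier(self.features(x).flatten(1))

    logits = ShadowNet()(torch.randn(4, 3, 32, 32))
    print(logits.shape)                  # torch.Size([4, 2])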
Hopfield neural network and optical fiber sensor as intelligent heart rate monitor
NASA Astrophysics Data System (ADS)
Mutter, Kussay Nugamesh
2018-01-01
This paper presents the design and fabrication of an intelligent fiber-optic sensor used for examining and monitoring heart rate activity. The use of fiber sensors as heart rate sensors is widely studied in the literature; however, the use of smart sensors based on Hopfield neural networks is very limited. In this work, the sensor consists of three fibers without cladding, each about 1 cm long, fed by laser light at a wavelength of 1550 nm. The sensing portions are mounted with a micro-sensitive diaphragm to transfer the pulse pressure on the left radial wrist. The modulated light intensity is detected by three photodetectors whose outputs feed the Hopfield neural network algorithm. The latter is a single-layer auto-associative memory structure with identical input and output layers. The prior training weights for the standard recorded normal heart rate signals are stored in the network memory. The sensor heads work on a reflected-intensity basis. The novelty here is that the sensor uses pulse pressure and a Hopfield neural network in an integrated approach. The results showed reliable heart rate measurement and counting with a plausible error rate.
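A minimal sketch of the Hopfield step in the monitor described above: reference heart-rate patterns are stored as Hebbian weights, and a noisy measured pattern is relaxed toward the nearest stored reference. The ±1 encoding of the pulse waveform is an assumption made for the sketch.

    import numpy as np

    def train_hopfield(patterns):
        # Hebbian auto-associative weights for patterns given as rows of +/-1 values.
        p = np.asarray(patterns, dtype=float)
        w = p.T @ p / p.shape[1]
        np.fill_diagonal(w, 0.0)
        return w

    def recall(w, probe, steps=10):
        # Synchronous updates pull a noisy probe toward the closest stored pattern.
        s = np.sign(np.asarray(probe, dtype=float))
        for _ in range(steps):
            s = np.sign(w @ s)
            s[s == 0] = 1.0
        return s

    stored = [[1, -1, 1, -1, 1, -1, 1, -1], [1, 1, 1, 1, -1, -1, -1, -1]]
    w = train_hopfield(stored)
    print(recall(w, [1, -1, 1, -1, 1, -1, -1, -1]))  # recovers the first stored pattern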
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
... Alternafuels, Inc., Intelligent Medical Imaging, Inc., and Optimark Data Systems, Inc.; Order of Suspension of... accurate information concerning the securities of Intelligent Medical Imaging, Inc. because it has not..., 1999. The Commission is of the opinion that the public interest and the protection of investors require...
De La Iglesia, Daniel H; Villarrubia, Gabriel; De Paz, Juan F; Bajo, Javier
2017-10-31
The use of electric bikes (e-bikes) has grown in popularity, especially in large cities where overcrowding and traffic congestion are common. This paper proposes an intelligent engine management system for e-bikes which uses the information collected from sensors to optimize battery energy and time. The intelligent engine management system consists of a built-in network of sensors in the e-bike, which is used for multi-sensor data fusion; the collected data are analysed and fused, and on the basis of this information the system can provide the user with optimal and personalized assistance. The user is given recommendations related to battery consumption, sensors, and other parameters associated with the route travelled, such as duration, speed, or variation in altitude. To provide a user with these recommendations, artificial neural networks are used to estimate speed and consumption for each of the segments of a route. These estimates are incorporated into evolutionary algorithms in order to perform the optimization. A comparative analysis of the results obtained has been conducted for when routes were travelled with and without the optimization system. From the experiments, it is evident that the use of an engine management system results in significant energy and time savings. Moreover, user satisfaction increases as the level of assistance adapts to user behavior and the characteristics of the route.
Design of an intelligent instrument for large direct-current measurement
NASA Astrophysics Data System (ADS)
Zhang, Rong; Zhang, Gang; Zhang, Zhipeng
2000-05-01
The principle and structure of an intelligent instrument for large direct-current measurement are presented in this paper. The instrument is of the reflective type and detects the signal by employing a high-direct-current sensor. The single-chip microcomputer of this system provides powerful control and processing functions and greatly improves the degree of intelligence. The measured value can be displayed and printed automatically or manually.
Cryosphere Sensor Webs With The Autonomous Sciencecraft Experiment
NASA Astrophysics Data System (ADS)
Scharenbroich, L.; Doggett, T.; Kratz, T.; Castano, R.; Chien, S.; Davies, A. G.; Tran, D.; Mazzoni, D.
2006-12-01
Autonomous sensor-webs are being deployed as part of the Autonomous Sciencecraft Experiment [1], whereby observations using the Hyperion instrument [2] on-board Earth Observing-1 (EO-1) are triggered by either ground sensors or by near-real-time analysis of data from other space-based sensors. In the realm of cryosphere monitoring, one sensor-web has been set up pairing EO-1 with a sensor buoy [3] deployed in Sparkling Lake, one of several lakes in northern Wisconsin monitored by University of Wisconsin's Trout Lake Station. A Support Vector Machine (SVM) classifier was trained on historical thermistor chain data with manually recorded ice-in and ice-out times and used to trigger Hyperion observations of the Trout Lake area during spring thaw and winter freeze in 2005. A second sensor-web is being developed using near-real time sea ice data products, based on Department of Defense meteorological satellites, available from the National Snow and Ice Data Center (NSIDC) [4]. Once operational, this sensor web will trigger Hyperion observations of pre-defined targets in the Arctic and Antarctic where regional resolution data shows sea ice formation or break up. [1] Chien et al. (2005), An autonomous earth-observing sensor-web, IEEE Intelligent Systems, [2] Pearlman et al. (2003), Hyperion, a space-based imaging spectrometer, IEEE Trans. Geosci. Rem. Sens., 41(6), [3] Kratz, T. et al. (in press) Toward a Global Lake Ecological Observatory Network, Proceedings of the Karelian Institute, [4] Cavalieri et al. (1999) Near real-time DMSP SSM/I daily polar gridded sea ice concentrations, National Snow and Ice Data Center. Digital Media.
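The abstract states that an SVM was trained on historical thermistor-chain data with manually recorded ice-in and ice-out dates, without giving the features used. The sketch below, using scikit-learn, assumes simple daily features (temperatures at two depths plus day of year) and synthetic data purely to illustrate how such a classifier could trigger an observation request.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative daily feature vectors: [surface temp, deep temp, day of year].
# Labels: 1 = ice-covered, 0 = open water (assumed encoding, not from the paper).
rng = np.random.default_rng(0)
X_ice = np.column_stack([rng.normal(0.5, 0.5, 50), rng.normal(3.5, 0.3, 50),
                         rng.integers(1, 90, 50)])
X_open = np.column_stack([rng.normal(12.0, 4.0, 50), rng.normal(8.0, 2.0, 50),
                          rng.integers(120, 300, 50)])
X = np.vstack([X_ice, X_open])
y = np.array([1] * 50 + [0] * 50)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# A change in the predicted class between consecutive days would be the
# trigger condition for requesting a Hyperion observation.
today, yesterday = [2.0, 4.0, 330], [9.0, 7.5, 329]
trigger = clf.predict([today])[0] != clf.predict([yesterday])[0]
print("request observation:", bool(trigger))
```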
Intelligent route surveillance
NASA Astrophysics Data System (ADS)
Schoemaker, Robin; Sandbrink, Rody; van Voorthuijsen, Graeme
2009-05-01
Intelligence on abnormal and suspicious behaviour along roads in operational domains is extremely valuable for countering the IED (Improvised Explosive Device) threat. Local sensor networks at strategic spots can gather data for continuous monitoring of daily vehicle activity. Unattended intelligent ground sensor networks use simple sensing nodes, e.g. seismic, magnetic, radar, or acoustic, or combinations of these in one housing. The nodes deliver rudimentary data at any time to be processed with software that filters out the required information. At TNO (Netherlands Organisation for Applied Scientific Research) research has started on how to equip a sensor network with data analysis software to determine whether behaviour is suspicious or not. Furthermore, the nodes should be expendable, if necessary, and be small in size such that they are hard to detect by adversaries. The network should be self-configuring and self-sustaining and should be reliable, efficient, and effective during operational tasks - especially route surveillance - as well as robust in time and space. If data from these networks are combined with data from other remote sensing devices (e.g. UAVs (Unmanned Aerial Vehicles)/aerostats), an even more accurate assessment of the tactical situation is possible. This paper shall focus on the concepts of operation towards a working intelligent route surveillance (IRS) research demonstrator network for monitoring suspicious behaviour in IED sensitive domains.
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
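The abstract reports that the pan rotation velocity is derived from the robot's translational velocity and the distance to the barcoded surfaces, but does not reproduce the derivation. Under the simple assumed geometry that the camera keeps its line of sight fixed on a point on a surface at perpendicular distance d while the robot translates at speed v, the required pan rate is (v/d)*cos^2(theta), which the sketch below evaluates.

```python
import math

def pan_rate(v, d, theta):
    """
    Angular pan rate (rad/s) needed to keep the optical axis fixed on a point
    on a surface at perpendicular distance d while translating at speed v.
    theta is the current pan angle away from the perpendicular direction.
    Assumed geometry: tan(theta) = v*t/d  =>  d(theta)/dt = (v/d) * cos(theta)**2.
    """
    return (v / d) * math.cos(theta) ** 2

# Example: moving at 0.5 m/s alongside boxes 1.2 m away, looking straight at the surface.
print(pan_rate(0.5, 1.2, 0.0))   # ~0.42 rad/s; decreases as the pan angle grows
```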
A survey of body sensor networks.
Lai, Xiaochen; Liu, Quanli; Wei, Xin; Wang, Wei; Zhou, Guoqiao; Han, Guangyi
2013-04-24
Sensor technology, pervasive computing, and intelligent information processing are widely used in Body Sensor Networks (BSNs), a branch of wireless sensor networks (WSNs). BSNs are playing an increasingly important role in the fields of medical treatment, social welfare and sports, and are changing the way humans use computers. Existing surveys have placed emphasis on the concept and architecture of BSNs, signal acquisition, context-aware sensing, and system technology, whereas this paper focuses on sensors, data fusion, and network communication. We review the current state of BSN research, analyze research hotspots and future development trends, and discuss the major challenges and technical problems currently faced. Typical research projects and practical applications of BSNs are introduced as well. BSNs are progressing toward multi-technology integration and intelligence. Although many problems remain, the future of BSNs is promising: they are profoundly changing human-machine relationships and improving the quality of people's lives.
Flexible hemispheric microarrays of highly pressure-sensitive sensors based on breath figure method.
Wang, Zhihui; Zhang, Ling; Liu, Jin; Jiang, Hao; Li, Chunzhong
2018-05-30
Recently, flexible pressure sensors featuring high sensitivity, broad sensing range and real-time detection have aroused great attention owing to their crucial role in the development of artificial intelligent devices and healthcare systems. Herein, highly sensitive pressure sensors based on hemisphere-microarray flexible substrates are fabricated by inversely templating honeycomb structures derived from a facile and static breath figure process. The interlocked and subtle microstructures greatly improve the sensing characteristics and compressibility of the as-prepared pressure sensor, endowing it with a sensitivity as high as 196 kPa-1 and a wide pressure sensing range (0-100 kPa), as well as other superior performance, including a low detection limit of 0.5 Pa, fast response time (<26 ms) and high reversibility (>10 000 cycles). Based on the outstanding sensing performance, the potential capability of our pressure sensor in capturing physiological information and recognizing speech signals has been demonstrated, indicating promising application in wearable and intelligent electronics.
Hernandez, Wilmar
2005-01-01
In the present paper, robust and optimal multivariable estimation techniques are used to estimate the response of both a wheel speed sensor and an accelerometer placed in a car under performance tests. In this case, the disturbances and noise corrupting the relevant information coming from the sensors' outputs are serious enough that their negative influence on the electrical systems degrades the general performance of the car. In short, this is a safety-related problem that deserves full attention. Therefore, in order to diminish the negative effects of the disturbances and noise on the car's electrical and electromechanical systems, an optimum observer is used. The experimental results show a satisfactory improvement in the signal-to-noise ratio of the relevant signals and demonstrate the importance of fusing several intelligent sensor design techniques when designing the intelligent sensors that today's cars need.
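The paper refers to robust and optimal estimation and an optimum observer without specifying the filter; a standard choice for this kind of noisy wheel-speed signal is a scalar Kalman filter, sketched below with assumed (illustrative) noise covariances. It is not the authors' design.

```python
import numpy as np

def kalman_wheel_speed(z, q=0.01, r=0.25):
    """
    Scalar Kalman filter for a slowly varying wheel-speed signal.
    z: noisy measurements; q: assumed process noise; r: assumed measurement noise.
    """
    x, p = z[0], 1.0                 # initial state estimate and covariance
    out = []
    for zk in z:
        p = p + q                    # predict (random-walk speed model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Illustrative: slowly varying true speed corrupted by sensor noise.
t = np.linspace(0, 10, 500)
true = 20 + 0.5 * np.sin(0.3 * t)
noisy = true + np.random.default_rng(1).normal(0, 0.5, t.size)
est = kalman_wheel_speed(noisy)
print("rms error, raw     :", np.sqrt(np.mean((noisy - true) ** 2)))
print("rms error, filtered:", np.sqrt(np.mean((est - true) ** 2)))
```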
A Survey of Body Sensor Networks
Lai, Xiaochen; Liu, Quanli; Wei, Xin; Wang, Wei; Zhou, Guoqiao; Han, Guangyi
2013-01-01
Sensor technology, pervasive computing, and intelligent information processing are widely used in Body Sensor Networks (BSNs), a branch of wireless sensor networks (WSNs). BSNs are playing an increasingly important role in the fields of medical treatment, social welfare and sports, and are changing the way humans use computers. Existing surveys have placed emphasis on the concept and architecture of BSNs, signal acquisition, context-aware sensing, and system technology, whereas this paper focuses on sensors, data fusion, and network communication. We review the current state of BSN research, analyze research hotspots and future development trends, and discuss the major challenges and technical problems currently faced. Typical research projects and practical applications of BSNs are introduced as well. BSNs are progressing toward multi-technology integration and intelligence. Although many problems remain, the future of BSNs is promising: they are profoundly changing human-machine relationships and improving the quality of people's lives. PMID:23615581
Intelligent Software Agents: Sensor Integration and Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulesz, James J; Lee, Ronald W
2013-01-01
In a post Macondo world the buzzwords are Integrity Management and Incident Response Management. The twin processes are not new but the opportunity to link the two is novel. Intelligent software agents can be used with sensor networks in distributed and centralized computing systems to enhance real-time monitoring of system integrity as well as manage the follow-on incident response to changing, and potentially hazardous, environmental conditions. The software components are embedded at the sensor network nodes in surveillance systems used for monitoring unusual events. When an event occurs, the software agents establish a new concept of operation at the sensing node, post the event status to a blackboard for software agents at other nodes to see, and then react quickly and efficiently to monitor the scale of the event. The technology addresses a current challenge in sensor networks that prevents a rapid and efficient response when a sensor measurement indicates that an event has occurred. By using intelligent software agents - which can be stationary or mobile, interact socially, and adapt to changing situations - the technology offers features that are particularly important when systems need to adapt to active circumstances. For example, when a release is detected, the local software agent collaborates with other agents at the node to exercise the appropriate operation, such as: targeted detection, increased detection frequency, decreased detection frequency for other non-alarming sensors, and determination of environmental conditions so that adjacent nodes can be informed that an event is occurring and when it will arrive. The software agents at the nodes can also post the data in a targeted manner, so that agents at other nodes and the command center can exercise appropriate operations to recalibrate the overall sensor network and associated intelligence systems. The paper describes the concepts and provides examples of real-world implementations including the Threat Detection and Analysis System (TDAS) at the International Port of Memphis and the Biological Warning and Incident Characterization System (BWIC) Environmental Monitoring (EM) Component. Technologies developed for these 24/7 operational systems have applications for improved real-time system integrity awareness as well as provide incident response (as needed) for production and field applications.
NASA Astrophysics Data System (ADS)
Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.
2016-05-01
This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.
Recognition of road information using magnetic polarity for intelligent vehicles
NASA Astrophysics Data System (ADS)
Kim, Young-Min; Kim, Tae-Gon; Lim, Young-Cheol; Kim, Kwang-Heon; Baek, Seung-Hun; Kim, Eui-Sun
2005-12-01
For intelligent vehicle driving using magnetic markers and magnetic sensors, the vehicle can obtain various kinds of road information while moving if a code encoded in the N and S pole directions of the markers is used. If the only aim is to guide the vehicle, control becomes easier the more closely the markers are placed. However, to recognize the pole direction of a marker, it is better that neighboring markers do not interfere with each other. To obtain road information and move the vehicle autonomously, a method of arranging magnetic sensors and an algorithm for recognizing the position of the vehicle with those sensors are proposed. The effectiveness of the methods was verified with computer simulation.
Prediction of Sybil attack on WSN using Bayesian network and swarm intelligence
NASA Astrophysics Data System (ADS)
Muraleedharan, Rajani; Ye, Xiang; Osadciw, Lisa Ann
2008-04-01
Security in wireless sensor networks is typically sacrificed or kept minimal due to limited resources such as memory and battery power. Hence, the sensor nodes are prone to denial-of-service attacks, and detecting the threats is crucial in any application. In this paper, the Sybil attack is analyzed and a novel prediction method combining a Bayesian algorithm and Swarm Intelligence (SI) is proposed. Bayesian Networks (BNs) are used for representing and reasoning about problems by modeling the elements of uncertainty. The decision from the BN is applied to SI, forming a Hybrid Intelligence Scheme (HIS) that re-routes the information and disconnects the malicious nodes from future routes. A performance comparison of prediction using HIS versus the Ant System (AS) helps in prioritizing applications where decisions are time-critical.
2012-08-15
Bragg grating (FBG) sensors within these composite structures allow one to correlate sensor response features to “critical damage events” within the...material. The unique capabilities of this identification strategy are due to the detailed information obtained from the FBG sensors and the...FBG sensors relate to damage states, not merely strain amplitudes. The research objectives of this project were therefore to: demonstrate FBG
Intelligent Data Transfer for Multiple Sensor Networks over a Broad Temperature Range
NASA Technical Reports Server (NTRS)
Krasowski, Michael (Inventor)
2018-01-01
A sensor network may be configured to operate in extreme temperature environments. A sensor may be configured to generate a frequency carrier, and transmit the frequency carrier to a node. The node may be configured to amplitude modulate the frequency carrier, and transmit the amplitude modulated frequency carrier to a receiver.
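The abstract only states that the sensor generates a frequency carrier that a node amplitude-modulates before transmission to a receiver. As a generic illustration (the sample rate, carrier frequency, and modulation depth are arbitrary assumptions), amplitude modulation and crude envelope recovery can be sketched as:

```python
import numpy as np

fs = 100_000                                   # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)

carrier_hz = 10_000                            # sensor-generated carrier (assumed)
sensor_signal = 0.5 * np.sin(2 * np.pi * 200 * t)   # stand-in measurement waveform

carrier = np.cos(2 * np.pi * carrier_hz * t)
modulated = (1.0 + sensor_signal) * carrier    # node applies amplitude modulation

# Envelope detection at the receiver: rectify, then crude low-pass by averaging.
rectified = np.abs(modulated)
kernel = np.ones(50) / 50
recovered = np.convolve(rectified, kernel, mode="same") * (np.pi / 2) - 1.0
```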
Global Test Range: Toward Airborne Sensor Webs
NASA Technical Reports Server (NTRS)
Mace, Thomas H.; Freudinger, Larry; DelFrate, John H.
2008-01-01
This viewgraph presentation reviews the planned global sensor network that will monitor the Earth's climate, and resources using airborne sensor systems. The vision is an intelligent, affordable Earth Observation System. Global Test Range is a lab developing trustworthy services for airborne instruments - a specialized Internet Service Provider. There is discussion of several current and planned missions.
Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min
2012-10-22
A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.
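The exact objective and encoding used by the artificial bee colony (ABC) optimization are not given in the abstract. The following is a generic, minimal ABC minimizer on a toy objective; the colony size, trial limit, bounds, and objective are illustrative assumptions only.

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=20, limit=20, iters=200, seed=0):
    """Minimal artificial bee colony: employed, onlooker, and scout phases."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, (n_food, dim))          # candidate solutions
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def neighbor(i):
        k = rng.integers(n_food - 1)
        k = k + (k >= i)                                 # partner index != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        return np.clip(cand, lo, hi)

    def try_improve(i):
        cand = neighbor(i)
        fc = f(cand)
        if fc < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                          # employed bees
            try_improve(i)
        probs = fit.max() - fit + 1e-12                  # better (lower) fit -> higher prob
        probs /= probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):    # onlooker bees
            try_improve(i)
        worn = np.argmax(trials)
        if trials[worn] > limit:                         # scout replaces a stale source
            foods[worn] = rng.uniform(lo, hi, dim)
            fit[worn] = f(foods[worn])
            trials[worn] = 0
    best = np.argmin(fit)
    return foods[best], fit[best]

# Toy objective standing in for the concentration-rate fitting problem.
x_best, f_best = abc_minimize(lambda x: np.sum((x - 0.3) ** 2), dim=4, bounds=(-1, 1))
print(x_best, f_best)
```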
Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min
2012-01-01
A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students. PMID:23202042
The Design of Artificial Intelligence Robot Based on Fuzzy Logic Controller Algorithm
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Munoto; Hariadi, E.; Muslim, S.
2018-04-01
Artificial Intelligence Robot is a wheeled robot driven by a DC motor that moves along the wall using an ultrasonic sensor as an obstacle detector. This study uses an HC-SR04 ultrasonic sensor to measure the distance between the robot and the wall based on ultrasonic waves. The robot uses a Fuzzy Logic Controller to adjust the speed of the DC motor. When the ultrasonic sensor detects a certain distance, the sensor data is processed on an ATmega8 and then passed to an ATmega16. On the ATmega16, the sensor data is evaluated against the Fuzzy rules to drive the DC motor speed. The program used to adjust the speed of the DC motor is CVAVR (CodeVision AVR). The readable distance of the ultrasonic sensor is 3 cm to 250 cm with a response time of 0.5 s. Testing the robot on walls with a setpoint value of 9 cm to 10 cm produced an average error value of -12% on the L-shaped wall, -8% on the T-shaped wall, -8% on the U-shaped wall, and -1% on the square wall.
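The abstract gives the sensor, setpoint, and error figures but not the rule base. Below is a minimal sketch of the kind of fuzzy mapping from wall-distance error to a motor speed command that such a controller might use; the membership functions, speed levels, and defuzzification choice are illustrative assumptions, not the authors' rules.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_speed(distance_cm, setpoint_cm=9.5):
    """Map wall-distance error (cm) to a motor speed command in [0, 100]."""
    e = distance_cm - setpoint_cm
    # Fuzzify the error: negative (too close), zero (on target), positive (too far).
    mu_neg = tri(e, -6.0, -3.0, 0.0)
    mu_zero = tri(e, -1.5, 0.0, 1.5)
    mu_pos = tri(e, 0.0, 3.0, 6.0)
    # Rule consequents (crisp speed levels): slow / cruise / fast.
    speeds = {"slow": 20.0, "cruise": 50.0, "fast": 80.0}
    num = mu_neg * speeds["slow"] + mu_zero * speeds["cruise"] + mu_pos * speeds["fast"]
    den = mu_neg + mu_zero + mu_pos
    return speeds["cruise"] if den == 0 else num / den   # weighted-average defuzzification

for d in (7.0, 9.5, 12.0):
    print(d, round(fuzzy_speed(d), 1))
```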
Recent progress of flexible and wearable strain sensors for human-motion monitoring
NASA Astrophysics Data System (ADS)
Ge, Gang; Huang, Wei; Shao, Jinjun; Dong, Xiaochen
2018-01-01
With the rapid development of artificial intelligence and inevitably expanding markets, the past two decades have witnessed an urgent demand for flexible and wearable devices, especially flexible strain sensors. Flexible strain sensors, incorporating the merits of stretchability, high sensitivity and skin-mountability, are emerging as an extremely attractive domain by virtue of their promising applications in artificial intelligence, human-machine systems and health-care devices. In this review, we concentrate on the transduction mechanisms and building blocks of flexible physical sensors, and subsequently on property optimization in terms of device structures and sensing materials directed toward practical applications. Perspectives on the existing challenges are also highlighted at the end. Project supported by the NNSF of China (Nos. 61525402, 61604071), the Key University Science Research Project of Jiangsu Province (No. 15KJA430006), and the Natural Science Foundation of Jiangsu Province (No. BK20161012).
A Trajectory Generation Approach for Payload Directed Flight
NASA Technical Reports Server (NTRS)
Ippolito, Corey A.; Yeh, Yoo-Hsiu
2009-01-01
Presently, flight systems designed to perform payload-centric maneuvers require preconstructed procedures and special hand-tuned guidance modes. To enable intelligent maneuvering via strong coupling between the goals of payload-directed flight and the autopilot functions, there exists a need to rethink traditional autopilot design and function. Research into payload directed flight examines sensor and payload-centric autopilot modes, architectures, and algorithms that provide layers of intelligent guidance, navigation and control for flight vehicles to achieve mission goals related to the payload sensors, taking into account various constraints such as the performance limitations of the aircraft, target tracking and estimation, obstacle avoidance, and constraint satisfaction. Payload directed flight requires a methodology for accurate trajectory planning that lets the system anticipate expected return from a suite of onboard sensors. This paper presents an extension to the existing techniques used in the literature to quickly and accurately plan flight trajectories that predict and optimize the expected return of onboard payload sensors.
Li, Hongqiang; Yang, Haijing; Li, Enbang; Liu, Zhihui; Wei, Kejia
2012-05-21
Measuring body temperature is considerably important to physiological studies as well as clinical investigations. In recent years, numerous observations have been reported and various methods of measurement have been employed. The present paper introduces a novel wearable sensor in intelligent clothing for human body temperature measurement. The objective is the integration of optical fiber Bragg grating (FBG)-based sensors into functional textiles to extend the capabilities of wearable solutions for body temperature monitoring. In addition, the temperature sensitivity is 150 pm/°C, which is almost 15 times higher than that of a bare FBG. This study combines large and small pipes during fabrication to implant the FBG sensors into the fabric. The law of energy conservation of the human body is considered in determining heat transfer between the body and its clothing. The mathematical model of heat transmission between the body and the clothed FBG sensors is studied, and a steady-state thermal analysis is presented. The simulation results show the capability of the material to correct to the actual body temperature. Based on the skin temperature obtained by the weighted-average method, this paper presents a five-point weighted-coefficients model for the intelligent clothing using both sides of the chest, the armpits, and the upper back. The weighted coefficients of 0.0826 for the left chest, 0.3706 for the left armpit, 0.3706 for the right armpit, 0.0936 for the upper back, and 0.0826 for the right chest were obtained using Cramer's Rule. Using these weighting coefficients, the deviation of the experimental result was ± 0.18 °C, which supports its use for clinical armpit temperature monitoring. Moreover, in special cases when several FBG sensors are broken, the weighted coefficients of the other sensors can be adjusted to still obtain an accurate body temperature.
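Using the five weighted coefficients reported above, the body-temperature estimate is the weighted average of the five FBG-sensed skin temperatures. A minimal sketch (the temperature readings are made up for illustration) is:

```python
# Weighted five-point skin-temperature model using the coefficients from the abstract.
WEIGHTS = {
    "left_chest": 0.0826,
    "left_armpit": 0.3706,
    "right_armpit": 0.3706,
    "upper_back": 0.0936,
    "right_chest": 0.0826,
}

def body_temperature(readings_c):
    """Weighted average of the five FBG skin-temperature readings (degrees C)."""
    return sum(WEIGHTS[site] * t for site, t in readings_c.items())

# Illustrative readings only; not data from the paper.
readings = {"left_chest": 36.1, "left_armpit": 36.8, "right_armpit": 36.9,
            "upper_back": 36.3, "right_chest": 36.0}
print(round(body_temperature(readings), 2))
```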
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
NASA Tech Briefs, December 1988. Volume 12, No. 11
NASA Technical Reports Server (NTRS)
1988-01-01
This month's technical section includes forecasts for 1989 and beyond by NASA experts in the following fields: Integrated Circuits; Communications; Computational Fluid Dynamics; Ceramics; Image Processing; Sensors; Dynamic Power; Superconductivity; Artificial Intelligence; and Flow Cytometry. The quotes provide a brief overview of emerging trends, and describe inventions and innovations being developed by NASA, other government agencies, and private industry that could make a significant impact in coming years. A second bonus feature in this month's issue is the expanded subject index that begins on page 98. The index contains cross-referenced listings for all technical briefs appearing in NASA Tech Briefs during 1988.
2018-01-01
His research designs adaptive systems for online content by integrating research in psychology and education (Annex A: Intelligent Tutoring) ... related scientific activities that include systems engineering, operational research and analysis, synthesis, integration and validation of knowledge ... Panels include the System Analysis and Studies Panel, the SCI Systems Concepts and Integration Panel, and the SET Sensors and Electronics Technology Panel.
Computer-Aided Sensor Development Focused on Security Issues.
Bialas, Andrzej
2016-05-26
The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research.
Computer-Aided Sensor Development Focused on Security Issues
Bialas, Andrzej
2016-01-01
The paper examines intelligent sensor and sensor system development according to the Common Criteria methodology, which is the basic security assurance methodology for IT products and systems. The paper presents how the development process can be supported by software tools, design patterns and knowledge engineering. The automation of this process brings cost-, quality-, and time-related advantages, because the most difficult and most laborious activities are software-supported and the design reusability is growing. The paper includes a short introduction to the Common Criteria methodology and its sensor-related applications. In the experimental section the computer-supported and patterns-based IT security development process is presented using the example of an intelligent methane detection sensor. This process is supported by an ontology-based tool for security modeling and analyses. The verified and justified models are transferred straight to the security target specification representing security requirements for the IT product. The novelty of the paper is to provide a patterns-based and computer-aided methodology for the sensors development with a view to achieving their IT security assurance. The paper summarizes the validation experiment focused on this methodology adapted for the sensors system development, and presents directions of future research. PMID:27240360
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies the coding strategies of computational imaging to overcome the limitation of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by factors of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials in compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
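The dissertation abstract describes multiplexed measurement followed by computational reconstruction but gives no algorithmic details. As a generic illustration of the compressive-sensing idea (measurements y = A x with sparse x recovered by an l1-regularized solver), here is a minimal ISTA sketch with arbitrary problem sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                      # signal length, measurements, sparsity (assumed)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # sparse scene

A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # coded/multiplexed measurement matrix
y = A @ x_true                             # compressive measurements (noise-free here)

# ISTA: iterative shrinkage-thresholding for min ||Ax - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```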
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
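The motion-detection step described here (project the input frame onto a PCA background subspace and threshold the difference from its reconstruction) can be sketched in a few lines of Python; the number of components, the fixed threshold, and the synthetic frames below are illustrative assumptions, and the FPGA pipeline itself is of course not reproduced.

```python
import numpy as np

def fit_background_subspace(frames, n_components=8):
    """PCA background model from a stack of flattened background frames."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Principal directions of the background (the paper diagonalizes via Jacobi).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def motion_mask(frame, mean, basis, thresh=30.0):
    """Reconstruct the frame from the background subspace and threshold the residual."""
    coeffs = basis @ (frame - mean)
    recon = mean + basis.T @ coeffs
    return np.abs(frame - recon) > thresh     # True where the pixel deviates from background

# Illustrative 16x16 synthetic frames (flattened); a real system uses camera images.
rng = np.random.default_rng(0)
bg = rng.normal(100, 5, (50, 256))            # 50 background training frames
mean, basis = fit_background_subspace(bg)
test = bg[0].copy()
test[40:60] += 80                             # a bright moving object
print(motion_mask(test, mean, basis).sum(), "foreground pixels")
```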
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2015-09-01
Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground to ground based optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn² < 10⁻¹³ m⁻²/³). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2009-04-01
The rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major potential of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. The efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad-hoc, hierarchical or hybrid); network communication protocol and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for multi-modality multi-agent data/information fusion with characteristics satisfying the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, as compared to a Fuzzy logic model, strongly supported the validity of the new model and inspired future directions for different levels of fusion and different applications.
Kingfisher: a system for remote sensing image database management
NASA Astrophysics Data System (ADS)
Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.
2003-04-01
At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images to be collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for a remote sensing image database with an innovative search tool based on image similarity. This methodology is quite innovative for this application; many systems exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe the content of remote sensing images. The target database is an archive of images originating from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem to be very encouraging.
NASA Technical Reports Server (NTRS)
1991-01-01
Viewgraphs of briefings presented at the SSTAC/ARTS review of the draft Integrated Technology Plan (ITP) on aerothermodynamics, automation and robotics systems, sensors, and high-temperature superconductivity are included. Topics covered include: aerothermodynamics; aerobraking; aeroassist flight experiment; entry technology for probes and penetrators; automation and robotics; artificial intelligence; NASA telerobotics program; planetary rover program; science sensor technology; direct detector; submillimeter sensors; laser sensors; passive microwave sensing; active microwave sensing; sensor electronics; sensor optics; coolers and cryogenics; and high temperature superconductivity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Viewgraphs of briefings presented at the SSTAC/ARTS review of the draft Integrated Technology Plan (ITP) on aerothermodynamics, automation and robotics systems, sensors, and high-temperature superconductivity are included. Topics covered include: aerothermodynamics; aerobraking; aeroassist flight experiment; entry technology for probes and penetrators; automation and robotics; artificial intelligence; NASA telerobotics program; planetary rover program; science sensor technology; direct detector; submillimeter sensors; laser sensors; passive microwave sensing; active microwave sensing; sensor electronics; sensor optics; coolers and cryogenics; and high temperature superconductivity.
Blanco, Jesús; García, Andrés; Morenas, Javier de Las
2018-06-09
Energy saving has become a major concern for the developed society of our days. This paper presents a Wireless Sensor and Actuator Network (WSAN) designed to provide support to an automatic intelligent system, based on the Internet of Things (IoT), which enables a responsible consumption of energy. The proposed overall system performs efficient energy management of devices, machines and processes, optimizing their operation to achieve a reduction in their overall energy usage at any given time. For this purpose, relevant data is collected from intelligent sensors, which are installed at the required locations, as well as from the energy market through the Internet. This information is analysed to provide knowledge about energy utilization and to improve efficiency. The system takes autonomous decisions automatically, based on the available information and the specific requirements in each case. The proposed system has been implemented and tested in a food factory. Results show a great optimization of energy efficiency and substantial improvements in energy and cost savings.
The Virtual Environment for Rapid Prototyping of the Intelligent Environment
Bouzouane, Abdenour; Gaboury, Sébastien
2017-01-01
Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175
The Virtual Environment for Rapid Prototyping of the Intelligent Environment.
Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien
2017-11-07
Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants' behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs.
A resonant electromagnetic vibration energy harvester for intelligent wireless sensor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Jing, E-mail: jingqiu@cqu.edu.cn; Wen, Yumei; Li, Ping
Vibration energy harvesting is now receiving more interest as a means for powering intelligent wireless sensor systems. In this paper, a resonant electromagnetic vibration energy harvester (VEH) employing a double cantilever to convert low-frequency vibration energy into electrical energy is presented. The VEH is made up of two cantilever beams, a coil, and magnetic circuits. The electric output performance of the proposed electromagnetic VEH has been investigated. With the enhancement of the number of turns N, the optimum peak power of the electromagnetic VEH increases sharply and the resonance frequency decreases gradually. When the vibration acceleration is 0.5 g, we obtain an optimum output voltage and power of 9.04 V and 50.8 mW, respectively, at a frequency of 14.9 Hz. In summary, the prototype device was successfully developed, and the experimental results exhibit a great enhancement in output power and bandwidth compared with other traditional electromagnetic VEHs. Remarkably, the proposed resonant electromagnetic VEH has great potential for application in intelligent wireless sensor systems.
Development of Intelligent Spray Systems for Nursery Crop Production
USDA-ARS?s Scientific Manuscript database
Two intelligent sprayer prototypes were developed to increase pesticide application efficiency in nursery production. The first prototype was a hydraulic vertical boom system using ultrasonic sensors to detect tree size and volume for liner-sized trees and the second prototype was an air-assisted sp...
Facility Monitoring: A Qualitative Theory for Sensor Fusion
NASA Technical Reports Server (NTRS)
Figueroa, Fernando
2001-01-01
Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviours from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory where each sensor in the system is a HAS. Hence, given the rich qualitative information from each HAS, a paradigm and formal definitions are given so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. This paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is intelligent health management of rocket engine test stands.
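A highly autonomous sensor of the kind described extracts qualitative behaviours (spikes, steps, drift, noise level) from its own data stream. The sketch below shows one simple way such qualitative labels could be computed over a window of samples; the statistics and thresholds are illustrative assumptions, not the paper's method.

```python
import numpy as np

def qualitative_summary(window, spike_sigma=4.0, step_frac=0.5, drift_frac=0.25):
    """Label a window of sensor samples with simple qualitative behaviours."""
    x = np.asarray(window, dtype=float)
    med = np.median(x)
    noise = np.std(np.diff(x)) / np.sqrt(2)          # rough noise-level estimate
    span = np.ptp(x)
    labels = {"noise_level": float(noise)}
    labels["spike"] = bool(np.any(np.abs(x - med) > spike_sigma * max(noise, 1e-9)))
    first, second = x[: len(x) // 2].mean(), x[len(x) // 2 :].mean()
    labels["step_change"] = bool(abs(second - first) > step_frac * max(span, 1e-9))
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]
    labels["drift"] = bool(abs(slope) * len(x) > drift_frac * max(span, 1e-9))
    return labels

# Illustrative stream: steady signal with one spike near the end.
stream = np.concatenate([np.full(90, 5.0), [9.0], np.full(9, 5.0)])
print(qualitative_summary(stream + np.random.default_rng(0).normal(0, 0.02, 100)))
```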
NASA Astrophysics Data System (ADS)
Korotaev, Valery V.; Denisov, Victor M.; Rodrigues, Joel J. P. C.; Serikova, Mariya G.; Timofeev, Andrey V.
2015-05-01
The paper deals with the creation of integrated monitoring systems. They combine fiber-optic classifiers and local sensor networks. These systems allow for the monitoring of complex industrial objects. Together with adjacent natural objects, they form the so-called geotechnical systems. An integrated monitoring system may include one or more spatially continuous fiber-optic classifiers based on optical fiber and one or more arrays of discrete measurement sensors, which are usually combined in sensor networks. Fiber-optic classifiers are already widely used for the control of hazardous extended objects (oil and gas pipelines, railways, high-rise buildings, etc.). To monitor local objects, discrete measurement sensors are generally used (temperature, pressure, inclinometers, strain gauges, accelerometers, sensors measuring the composition of impurities in the air, and many others). However, monitoring complex geotechnical systems requires the simultaneous use of continuous spatially distributed sensors based on fiber-optic cable and connected networks of local discrete sensors. In fact, we are talking about the integration of the two monitoring methods. This combination provides an additional way to create intelligent monitoring systems. The modes of operation of intelligent systems can automatically adapt to changing environmental conditions. For this purpose, context data received from one sensor (e.g., the optical channel) may be used to change the modes of operation of other sensors within the same monitoring system. This work also presents experimental results from a prototype of the integrated monitoring system.
Advanced Sensor and Packaging Technologies for Intelligent Adaptive Engine Controls (Preprint)
2013-05-01
combination of micro-electromechanical systems (MEMS) sensor technology, novel ceramic materials, high-temperature electronics, and advanced harsh...with simultaneous pressure measurements up to 1,000 psi. The combination of a high-temperature, high-pressure-ratio compressor system, and adaptive...
Smart architecture for stable multipoint fiber Bragg grating sensor system
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Tsai, Ning; Zhuang, Yuan-Hong; Huang, Tzu-Jung; Chow, Chi-Wai; Chen, Jing-Heng; Liu, Wen-Fung
2017-12-01
In this work, we propose and investigate an intelligent fiber Bragg grating (FBG)-based sensor system in which the proposed stabilized and wavelength-tunable single-longitudinal-mode erbium-doped fiber laser can improve the sensing accuracy of wavelength-division-multiplexing multiple FBG sensors in a longer fiber transmission distance. Moreover, we also demonstrate the proposed sensor architecture to enhance the FBG capacity for sensing strain and temperature, simultaneously.
Climate Change Mitigation: Can the U.S. Intelligence Community Help?
2013-06-01
satellite sensors to establish the concentration of atmospheric CO2 parts per million (ppm mole fraction) in samples collected at multiple...measurements. Spatial sampling density, the number of sensors or—in the case of satellite imagery the number and resolution of the images—likewise influences...Somewhat paradoxically, sensor accuracy from either remote (satellite) or in situ sensors is an important consideration, but it must also be evaluated
Unattended Ground Sensors for Expeditionary Force 21 Intelligence Collections
2015-06-01
tamper. Size: 3 ½ x 3 ½ x 1 ¾ inches. Wireless RF networked communications. Built-in seismic, acoustic, magnetic, and PIR sensors...that allow digital wireless RF communications from each sensor interfaced into a variety of network architectures to relay critical data to a final
Wide-area littoral discreet observation: success at the tactical edge
NASA Astrophysics Data System (ADS)
Toth, Susan; Hughes, William; Ladas, Andrew
2012-06-01
In June 2011, the United States Army Research Laboratory (ARL) participated in Empire Challenge 2011 (EC-11). EC-11 was United States Joint Forces Command's (USJFCOM) annual live, joint and coalition intelligence, surveillance and reconnaissance (ISR) interoperability demonstration under the sponsorship of the Under Secretary of Defense for Intelligence (USD/I). EC-11 consisted of a series of ISR interoperability events, using a combination of modeling & simulation, laboratory and live-fly events. Wide-area Littoral Discreet Observation (WALDO) was ARL's maritime/littoral capability. WALDO met a USD(I) directive that EC-11 have a maritime component, and WALDO was the primary player in the maritime scenario conducted at Camp Lejeune, North Carolina. The WALDO effort demonstrated the utility of a networked layered sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity in a littoral or riverine environment. In addition to an embedded analytical capability, the sensor array and control infrastructure consisted of the Oriole acoustic sensor, the iScout unattended ground sensor (UGS), the OmniSense UGS, the Compact Radar and the Universal Distributed Management System (UDMS), which included the Proxy Skyraider, an optionally manned aircraft mounting both wide and narrow FOV EO/IR imaging sensors. The capability seeded a littoral area with riverine and unattended sensors in order to demonstrate the utility of a Wide Area Sensor (WAS) capability in a littoral environment focused on maritime surveillance activities. The sensors provided a cue for WAS placement/orbit. A narrow field of view sensor would be used to focus on more discreet activities within the WAS footprint. Additionally, the capability experimented with novel WAS orbits to determine if there are more optimal orbits for WAS collection in a littoral environment. The demonstration objectives for WALDO at EC-11 were:
* Demonstrate a networked, layered, multi-modal sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity
* Assess the utility of a Wide Area Surveillance (WAS) sensor in a littoral environment focused on maritime surveillance activities
* Demonstrate the effectiveness of using UGS sensors to cue WAS sensor tasking
* Employ a narrow field of view full motion video (FMV) sensor package that is collocated with the WAS to conduct more discreet observation of potential items of interest when cued by near-real-time data from UGS or observers
* Couple the ARL Oriole sensor with other modality UGS networks in a ground layer ISR capability, and incorporate data collected from aerial sensors with a GEOINT base layer to form a fused product
* Swarm multiple aerial or naval platforms to prosecute single or multiple targets
* Track fast moving surface vessels in littoral areas
* Disseminate time sensitive, high value data to the users at the tactical edge
In short, we sought to answer the following question: how do you layer, control and display disparate sensors and sensor modalities in such a way as to facilitate appropriate sensor cross-cue, data integration, and analyst control to effectively monitor activity in a littoral (or novel) environment?
Intelligent robot trends and predictions for the .net future
NASA Astrophysics Data System (ADS)
Hall, Ernest L.
2001-10-01
An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent and future technical and economic trends. During the past twenty years the use of industrial robots that are equipped not only with precise motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. Intelligent robot products have been developed in many cases for factory automation and for some hospital and home applications. To reach an even higher degree of applications, the addition of learning may be required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. The adaptive critic is a good model for human learning. In general, the critic may be considered to be the human with the teach pendant, plant manager, line supervisor, quality inspector or the consumer. If the ultimate critic is the consumer, then the quality inspector must model the consumer's decision-making process and use this model in the design and manufacturing operations. Can the adaptive critic be used to advance intelligent robots? Intelligent robots have historically taken decades to be developed and reduced to practice. Methods for speeding this development include technology such as rapid prototyping and product development and government, industry and university cooperation.
Tactile sensor of hardness recognition based on magnetic anomaly detection
NASA Astrophysics Data System (ADS)
Xue, Lingyun; Zhang, Dongfang; Chen, Qingguang; Rao, Huanle; Xu, Ping
2018-03-01
Hardness sensing, as one kind of tactile sensing, plays an important role in intelligent robot applications such as gripping, agricultural harvesting and prosthetic hands. Recently, with the rapid development of high-performance magnetic field sensing technology, a number of magnetic sensors have been developed for intelligent applications. A tunnel magnetoresistance (TMR) element, based on the magnetoresistance principle, serves as the sensitive element for detecting the magnetic field, and it has proven its excellent ability in weak-field detection. In this paper, a new method based on magnetic anomaly detection is proposed to detect hardness in a tactile way. The sensor is composed of an elastic body, a ferrous probe, a TMR element and a permanent magnet. When the elastic body embedded with the ferrous probe touches an object under a given force, the elastic body deforms. Correspondingly, the ferrous probe is displaced and the background magnetic field is distorted. The distorted magnetic field is detected by the TMR element and the output signal is sampled at successive times. The slope of the magnetic signal over the sampling time differs for objects of different hardness. The results indicate that the magnetic anomaly sensor can recognize hardness rapidly, within 150 ms of the tactile moment. The hardness sensor based on the magnetic anomaly detection principle proposed in this paper has the advantages of simple structure, low cost and rapid response, and it shows great application potential in the field of intelligent robots.
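A minimal numerical sketch of the hardness estimation idea described above, assuming the TMR output has been sampled after the tactile moment: the slope of the signal over the sampling time is used as a coarse hardness cue. The function names, thresholds and simulated samples are illustrative, not values from the paper.

```python
# Estimate relative hardness from the slope of a sampled TMR magnetic-anomaly
# signal, assuming harder objects deform the elastic body less and therefore
# produce a smaller signal slope after contact. Thresholds are illustrative.
import numpy as np

def signal_slope(samples, dt):
    """Least-squares slope of the sampled TMR output (volts per second)."""
    t = np.arange(len(samples)) * dt
    slope, _ = np.polyfit(t, samples, 1)
    return slope

def classify_hardness(slope, soft_threshold=2.0, hard_threshold=0.5):
    """Map slope magnitude to a coarse hardness label (illustrative thresholds)."""
    s = abs(slope)
    if s >= soft_threshold:
        return "soft"    # large probe displacement -> rapidly changing field
    if s <= hard_threshold:
        return "hard"    # little displacement -> nearly flat signal
    return "medium"

# Example: 150 ms of samples at 1 kHz after the tactile moment
dt = 1e-3
samples = np.linspace(0.0, 0.12, 150)   # simulated TMR output ramp
print(classify_hardness(signal_slope(samples, dt)))
```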
Pervasive community care platform: Ambient Intelligence leveraging sensor networks and mobile agents
NASA Astrophysics Data System (ADS)
Su, Chuan-Jun; Chiang, Chang-Yu
2014-04-01
Several powerful trends are contributing to an aging of much of the world's population, especially in economically developed countries. To mitigate the negative effects of rapidly ageing populations, societies must act early to plan for the welfare, medical care and residential arrangements of their senior citizens, and for the manpower and associated training needed to execute these plans. This paper describes the development of an Ambient Intelligent Community Care Platform (AICCP), which creates an environment of Ambient Intelligence through the use of sensor network and mobile agent (MA) technologies. The AICCP allows caregivers to quickly and accurately locate their charges; access, update and share critical treatment and wellness data; and automatically archive all records. The AICCP presented in this paper is expected to enable caregivers and communities to offer pervasive, accurate and context-aware care services.
Balancing Information Analysis and Decision Value: A Model to Exploit the Decision Process
2011-12-01
technical intelligence, e.g. signals and sensors (SIGINT and MASINT), imagery (IMINT), as well as human and open source intelligence (HUMINT and OSINT ...Clark 2006). The ability to capture large amounts of data and the plenitude of modern intelligence information sources provides a rich cache of ...many techniques for managing information collected and derived from these sources, the exploitation of intelligence assets for decision-making
Biomimetics in Intelligent Sensor and Actuator Automation Systems
NASA Astrophysics Data System (ADS)
Bruckner, Dietmar; Dietrich, Dietmar; Zucker, Gerhard; Müller, Brit
Intelligent machines are truly an old dream of mankind. With increasing technological development, the requirements for intelligent devices have also increased. However, up to now, artificial intelligence (AI) lacks solutions to the demands of truly intelligent machines that can integrate themselves into daily human environments without difficulty. Current hardware with a processing power of billions of operations per second (but without any model of human-like intelligence) has not substantially advanced the intelligence of machines compared with that of the early AI era. There are great results, of course. Machines are able to find the shortest path between far-apart cities on a map; algorithms let you find information described only by a few key words. But no machine is able to get us a cup of coffee from the kitchen yet.
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
2014-07-02
Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
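A minimal sketch of the per-activity HMM training and recognition step described above, using the hmmlearn library as a stand-in implementation; the random placeholder features stand in for the skeleton-joint features extracted from depth silhouettes.

```python
# Train one Gaussian HMM per activity and recognize a new sequence by picking
# the model with the highest log-likelihood. Feature arrays are placeholders.
import numpy as np
from hmmlearn import hmm

def train_activity_models(training_data, n_states=4):
    """training_data: dict mapping activity name -> (n_frames, n_features) array."""
    models = {}
    for activity, features in training_data.items():
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(features)
        models[activity] = model
    return models

def recognize(models, features):
    """Return the activity whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(features))

rng = np.random.default_rng(0)
data = {"walking": rng.normal(0, 1, (200, 12)),
        "sitting": rng.normal(3, 1, (200, 12))}
models = train_activity_models(data)
print(recognize(models, rng.normal(3, 1, (50, 12))))   # expected: "sitting"
```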
Applications based on restored satellite images
NASA Astrophysics Data System (ADS)
Arbel, D.; Levin, S.; Nir, M.; Bhasteker, I.
2005-08-01
Satellites orbit the earth and obtain imagery of the ground below. The quality of satellite images is affected by the properties of the atmospheric imaging path, which degrade the image by blurring it and reducing its contrast. Applications involving satellite images are many and varied. Imaging systems also differ technologically and in their physical and optical characteristics, such as sensor type, resolution, field of view (FOV), spectral range of the acquiring channels - from the visible to the thermal IR (TIR) - platform (mobilization facilities; aircraft and/or spacecraft), and altitude above the ground surface. It is important to obtain good quality satellite images because of the variety of applications based on them. The better the quality of the recorded image, the more information can be extracted from it. The restoration process is conditioned by gathering extensive data about the atmospheric medium and its characterization. In return, the restorations contribute to the applications based on them, i.e., satellite communication, warfare against long distance missiles, geographical aspects, agricultural aspects, economic aspects, intelligence, security, military, etc. Several ways to use restored Landsat 7 Enhanced Thematic Mapper Plus (ETM+) satellite images are suggested and presented here. In particular, the restoration results are used for a few potential geographical applications such as color classification and mapping (road and street localization) methods.
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest, whether as service robots serving humans or as industrial robots replacing humans. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is used as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency and accuracy of this sensor system for 3D environment sensing and recognition.
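A minimal sketch of the optical triangulation step, under the simplifying assumption that each laser stripe has been calibrated as a plane in the camera frame: a detected stripe pixel is back-projected to a viewing ray and intersected with that plane. The intrinsic matrix and plane parameters are illustrative, not those of the described sensor.

```python
# Intersect a camera ray (through a detected stripe pixel) with a calibrated
# laser stripe plane to recover the 3D point in the camera frame.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # focal lengths / principal point (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, -0.3, 1.0])   # laser plane normal in the camera frame
plane_d = 0.8                          # plane offset: n . X = d (metres)

def triangulate(u, v):
    """Return the 3D point (camera frame) for stripe pixel (u, v)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected viewing ray
    t = plane_d / (plane_n @ ray)                    # ray / plane intersection
    return t * ray

print(triangulate(400.0, 260.0))
```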
Coarse Resolution SAR Imagery to Support Flood Inundation Models in Near Real Time
NASA Astrophysics Data System (ADS)
Di Baldassarre, Giuliano; Schumann, Guy; Brandimarte, Luigia; Bates, Paul
2009-11-01
In recent years, the availability of new emerging data (e.g. remote sensing, intelligent wireless sensors, etc) has led to a sudden shift from a data-sparse to a data-rich environment for hydrological and hydraulic modelling. Furthermore, the increased socioeconomic relevance of river flood studies has motivated the development of complex methodologies for the simulation of the hydraulic behaviour of river systems. In this context, this study aims at assessing the capability of coarse resolution SAR (Synthetic Aperture Radar) imagery to support and quickly validate flood inundation models in near real time. A hydraulic model of a 98km reach of the River Po (Italy), previously calibrated on a high-magnitude flood event with extensive and high quality field data, is tested using a SAR flood image, acquired and processed in near real time, during the June 2008 low-magnitude event. Specifically, the image is an acquisition by the ENVISAT-ASAR sensor in wide swath mode and has been provided through ESA (European Space Agency) Fast Registration system at no cost 24 hours after the acquisition. The study shows that the SAR image enables validation and improvement of the model in a time shorter than the flood travel time. This increases the reliability of model predictions (e.g. water elevation and inundation width along the river reach) and, consequently, assists flood management authorities in undertaking the necessary prevention activities.
Best-next-view algorithm for three-dimensional scene reconstruction using range images
NASA Astrophysics Data System (ADS)
Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.
1995-10-01
The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate which will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams, and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have proven that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at a minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm we developed requires no a priori information to find the best-next-view position.
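A minimal 2D sketch of the best-next-view idea, assuming a grid whose cells are labelled unknown, empty or occupied: rays are cast from each candidate viewpoint and the pose expected to reveal the most unknown cells wins. The grid contents, poses and ray parameters are illustrative.

```python
# Score candidate sensor poses by ray-casting through an occupancy grid and
# counting the unknown cells each pose would reveal before a ray is blocked.
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def ray_gain(grid, origin, direction, step=0.5, max_range=40.0):
    """Number of unknown cells a single ray would reveal before being blocked."""
    gain, pos = 0, np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(int(max_range / step)):
        pos = pos + d * step
        i, j = int(pos[0]), int(pos[1])
        if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
            break
        if grid[i, j] == OCCUPIED:          # ray blocked by a known surface
            break
        if grid[i, j] == UNKNOWN:
            gain += 1
    return gain

def best_next_view(grid, candidates, n_rays=60):
    """Return the candidate (x, y) with the largest total expected gain."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    def total_gain(c):
        return sum(ray_gain(grid, c, (np.cos(a), np.sin(a))) for a in angles)
    return max(candidates, key=total_gain)

grid = np.full((50, 50), UNKNOWN)
grid[20:30, 20:30] = OCCUPIED               # a scanned object
grid[0:10, :] = EMPTY                       # area already observed
print(best_next_view(grid, [(5, 5), (5, 45), (45, 25)]))
```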
NASA Technical Reports Server (NTRS)
Lansaw, John; Schmalzel, John; Figueroa, Jorge
2009-01-01
John C. Stennis Space Center (SSC) provides rocket engine propulsion testing for NASA's space programs. Since the development of the Space Shuttle, every Space Shuttle Main Engine (SSME) has undergone acceptance testing at SSC before going to Kennedy Space Center (KSC) for integration into the Space Shuttle. The SSME is a large cryogenic rocket engine that uses Liquid Hydrogen (LH2) as the fuel. As NASA moves to the new ARES V launch system, the main engines on the new vehicle, as well as the upper stage engine, are currently baselined to be cryogenic rocket engines that will also use LH2. The main rocket engines for the ARES V will be larger than the SSME, while the upper stage engine will be approximately half that size. As a result, significant quantities of hydrogen will be required during the development, testing, and operation of these rocket engines. Better approaches are needed to simplify sensor integration and help reduce life-cycle costs. 1. Smarter sensors. Sensor integration should be a matter of "plug-and-play," making sensors easier to add to a system. Sensors that implement new standards can help address this problem; for example, IEEE STD 1451.4 defines transducer electronic data sheet (TEDS) templates for commonly used sensors such as bridge elements and thermocouples. When a 1451.4 compliant smart sensor is connected to a system that can read the TEDS memory, all information needed to configure the data acquisition system can be uploaded. This reduces the amount of labor required and helps minimize configuration errors. 2. Intelligent sensors. Data received from a sensor must be scaled, linearized, and converted to engineering units. Methods to reduce sensor processing overhead at the application node are needed. Smart sensors using low-cost microprocessors with integral data acquisition and communication support offer the means to add these capabilities. Once a processor is embedded, other features can be added; for example, intelligent sensors can make a health assessment to inform the data acquisition client when sensor performance is suspect. 3. Distributed sample synchronization. Networks of sensors require new ways for synchronizing samples. Standards that address the distributed timing problem (for example, IEEE STD 1588) provide the means to aggregate samples from many distributed smart sensors with sub-microsecond accuracy. 4. Reduction in interconnect. Alternative means are needed to reduce the frequent problems associated with cabling and connectors. Wireless technologies offer the promise of reducing interconnects and simultaneously making it easy to quickly add a sensor to a system.
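A minimal sketch of item 2 above (converting raw sensor counts into engineering units and attaching a simple self-assessment flag): the calibration coefficients, valid range and function names are illustrative assumptions, not values from any NASA system.

```python
# Convert raw counts from a bridge-type pressure sensor into engineering units
# with polynomial calibration coefficients, then flag out-of-range readings.
def counts_to_engineering_units(raw_counts, coeffs=(0.0, 4.8828e-4), units="MPa"):
    """Apply a linear (or higher-order) calibration polynomial to raw ADC counts."""
    value = sum(c * raw_counts ** i for i, c in enumerate(coeffs))
    return value, units

def health_assessment(value, valid_range=(0.0, 30.0)):
    """Flag readings outside the expected physical range as suspect."""
    lo, hi = valid_range
    return "ok" if lo <= value <= hi else "suspect"

raw = 20480                                     # example 16-bit ADC reading
pressure, units = counts_to_engineering_units(raw)
print(f"{pressure:.2f} {units} ({health_assessment(pressure)})")
```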
Real-Time Mapping: Contemporary Challenges and the Internet of Things as the Way Forward
NASA Astrophysics Data System (ADS)
Bęcek, Kazimierz
2016-12-01
The Internet of Things (IoT) is an emerging technology that was conceived in 1999. The key components of the IoT are intelligent sensors, which represent objects of interest. The adjective `intelligent' is used here in the information gathering sense, not the psychological sense. Some 30 billion sensors that `know' the current status of objects they represent are already connected to the Internet. Various studies indicate that the number of installed sensors will reach 212 billion by 2020. Various scenarios of IoT projects show sensors being able to exchange data with the network as well as between themselves. In this contribution, we discuss the possibility of deploying the IoT in cartography for real-time mapping. A real-time map is prepared using data harvested through querying sensors representing geographical objects, and the concept of a virtual sensor for abstract objects, such as a land parcel, is presented. A virtual sensor may exist as a data record in the cloud. Sensors are identified by an Internet Protocol address (IP address), which implies that geographical objects through their sensors would also have an IP address. This contribution is an updated version of a conference paper presented by the author during the International Federation of Surveyors 2014 Congress in Kuala Lumpur. The author hopes that the use of the IoT for real-time mapping will be considered by the mapmaking community.
IVHM Framework for Intelligent Integration for Vehicle Health Management
NASA Technical Reports Server (NTRS)
Paris, Deidre; Trevino, Luis C.; Watson, Michael D.
2005-01-01
Integrated Vehicle Health Management (IVHM) for aerospace vehicles is the process of assessing, preserving, and restoring system functionality across flight and ground systems; it combines these techniques with sensor and communication technologies for spacecraft that can generate responses through detection, diagnosis, and reasoning, and adapt to system faults in support of Integrated Intelligent Vehicle Management (IIVM). These real-time responses allow the IIVM to modify the affected vehicle subsystem(s) prior to a catastrophic event. Furthermore, this framework integrates technologies which can provide a continuous, intelligent, and adaptive health state of a vehicle and use this information to improve safety and reduce the cost of operations. Recent investments in avionics, health management, and controls have been directed towards IIVM. As this concept has matured, it has become clear that IIVM requires the same sensors and processing capabilities as the real-time avionics functions to support diagnosis of subsystem problems. New sensors have been proposed to augment the avionics sensors and support better system monitoring and diagnostics. As the designs have been considered, a synergy has been realized whereby the real-time avionics can utilize sensors proposed for diagnostics and prognostics to make better real-time decisions in response to detected failures. IIVM provides a single system allowing modularity of functions and hardware across the vehicle. The framework that supports IIVM consists of 11 major on-board functions necessary to fully manage a space vehicle while maintaining crew safety and mission objectives. These systems include the following: Guidance and Navigation; Communications and Tracking; Vehicle Monitoring; Information Transport and Integration; Vehicle Diagnostics; Vehicle Prognostics; Vehicle Mission Planning; Automated Repair and Replacement; Vehicle Control; Human Computer Interface; and Onboard Verification and Validation. Furthermore, the presented framework provides complete vehicle management which not only allows for increased crew safety and mission success through new intelligence capabilities, but also yields a mechanism for more efficient vehicle operations.
Comparison of radar and infrared distance sensors for intelligent cruise control systems
NASA Astrophysics Data System (ADS)
Hoess, Alfred; Hosp, Werner; Rauner, Hans
1995-09-01
In this paper, radar and infrared distance sensors are compared regarding technological, environmental, and practical aspects. Different methods for obtaining lateral resolution and covering the required detection range are presented for both sensor technologies. Possible positions for sensor installation on the test vehicle have been evaluated. Experimental results regarding cleaning devices and other environmental problems are presented. Finally, future aspects, e.g. speed-over-ground measurement and further technological steps, are discussed.
Image-based spectroscopy for environmental monitoring
NASA Astrophysics Data System (ADS)
Bachmakov, Eduard; Molina, Carolyn; Wynne, Rosalind
2014-03-01
An image-processing algorithm for use with a nano-featured spectrometer chemical agent detection configuration is presented. The spectrometer chip acquired from Nano-Optic Devices™ can reduce the spectrometer down to the size of a coin. The nanospectrometer chip was aligned with a 635 nm laser source, objective lenses, and a CCD camera. The images from the nanospectrometer chip were collected and compared to reference spectra. Random background noise contributions were isolated and removed from the diffraction pattern image analysis via a threshold filter. Results are provided for the image-based detection of the diffraction pattern produced by the nanospectrometer. The featured PCF spectrometer has the potential to measure optical absorption spectra in order to detect trace amounts of contaminants. MATLAB tools allow for implementation of intelligent, automatic detection of the relevant sub-patterns in the diffraction patterns and subsequent extraction of the parameters using region-detection algorithms such as the generalized Hough transform, which detects specific shapes within the image. This transform is a method for detecting curves by exploiting the duality between points on a curve and parameters of that curve. By employing this image-processing technique, future sensor systems will benefit from new applications such as unsupervised environmental monitoring of air or water quality.
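The paper's pipeline uses MATLAB tools; the sketch below is an analogous Python/OpenCV illustration of the same two steps (a threshold filter to suppress background noise, then a Hough transform to locate sub-patterns), with the circular Hough transform standing in for the generalized Hough transform. The file name and parameter values are assumptions.

```python
# Threshold the diffraction image to remove random background noise, then
# locate circular sub-patterns with a Hough transform.
import cv2
import numpy as np

image = cv2.imread("diffraction_pattern.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("diffraction_pattern.png (illustrative file name)")

# 1. Threshold filter: zero out pixels below the assumed noise floor.
_, filtered = cv2.threshold(image, 40, 255, cv2.THRESH_TOZERO)

# 2. Hough transform: locate circular sub-patterns in the filtered image.
blurred = cv2.medianBlur(filtered, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1.2, 20,
                           param1=80, param2=30, minRadius=5, maxRadius=60)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"pattern at ({x}, {y}), radius {r}")
```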
Enhanced tactical radar correlator (ETRAC): true interoperability of the 1990s
NASA Astrophysics Data System (ADS)
Guillen, Frank J.
1994-10-01
The enhanced tactical radar correlator (ETRAC) system is under development at Westinghouse Electric Corporation for the Army Space Program Office (ASPO). ETRAC is a real-time synthetic aperture radar (SAR) processing system that provides tactical IMINT to the corps commander. It features an open architecture comprised of ruggedized commercial-off-the-shelf (COTS), UNIX based workstations and processors. The architecture features the DoD common SAR processor (CSP), a multisensor computing platform to accommodate a variety of current and future imaging needs. ETRAC's principal functions include: (1) Mission planning and control -- ETRAC provides mission planning and control for the U-2R and ASARS-2 sensor, including capability for auto replanning, retasking, and immediate spot. (2) Image formation -- the image formation processor (IFP) provides the CPU intensive processing capability to produce real-time imagery for all ASARS imaging modes of operation. (3) Image exploitation -- two exploitation workstations are provided for first-phase image exploitation, manipulation, and annotation. Products include INTEL reports, annotated NITF SID imagery, high resolution hard copy prints and targeting data. ETRAC is transportable via two C-130 aircraft, with autonomous drive on/off capability for high mobility. Other autonomous capabilities include rapid setup/tear down, extended stand-alone support, internal environmental control units (ECUs) and power generation. ETRAC's mission is to provide the Army field commander with accurate, reliable, and timely imagery intelligence derived from collections made by the ASARS-2 sensor, located on-board the U-2R aircraft. To accomplish this mission, ETRAC receives video phase history (VPH) directly from the U-2R aircraft and converts it in real time into soft copy imagery for immediate exploitation and dissemination to the tactical users.
Zhang, Jun; Tian, Gui Yun; Marindra, Adi M J; Sunny, Ali Imam; Zhao, Ao Bo
2017-01-29
In recent years, the antenna and sensor communities have witnessed a considerable integration of radio frequency identification (RFID) tag antennas and sensors because of the impetus provided by the internet of things (IoT) and cyber-physical systems (CPS). Such sensors can find potential applications in structural health monitoring (SHM) because of their passive, wireless, simple, compact, and multimodal nature, particularly in large-scale infrastructure over its lifecycle. The big data from these ubiquitous sensors are expected to have a large impact on intelligent monitoring. A remarkable number of scientific papers demonstrate that objects can be remotely tracked and intelligently monitored for their physical, chemical, and mechanical properties and environmental conditions. Most of the work focuses on antenna design, and significant information has been generated to demonstrate feasibility. Further information is needed to gain a deep understanding of passive RFID antenna sensor systems in order to make them reliable and practical. Nevertheless, this information is scattered across the literature. This paper comprehensively summarizes and clearly highlights the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from a system point of view. Future trends are also discussed, and directions for future research and development in the UK are suggested.
Intelligent data processing of an ultrasonic sensor system for pattern recognition improvements
NASA Astrophysics Data System (ADS)
Na, Seung You; Park, Min-Sang; Hwang, Won-Gul; Kee, Chang-Doo
1999-05-01
Though conventional time-of-flight ultrasonic sensor systems are popular due to their low cost and simplicity, their use is rather narrowly restricted to object detection and distance readings. There is a strong need to enlarge the amount of environmental information available to mobile applications in order to provide intelligent autonomy. Wide sectors of such neighboring-object recognition problems can be handled satisfactorily with coarse vision data such as sonar maps instead of accurate laser or optical measurements. For object pattern recognition, ultrasonic sensors have the inherent shortcomings of poor directionality and specularity, which result in low spatial resolution and indistinct object patterns. To resolve these problems, arrays with an increased number of sensor elements have been used for large objects. In this paper we propose a sensor array system with improved recognition capability, using electronic circuits accompanying the sensor array and neuro-fuzzy processing for data fusion. The circuit changes the transmitter output voltages of the array elements in several steps. Relying upon the known sensor characteristics, the set of different return signals from neighboring sensors is manipulated to provide enhanced pattern recognition of the inclination angle, size and shift, as well as the distance, of objects. The results show improved resolution of the measurements for smaller targets.
Networked sensors for the combat forces
NASA Astrophysics Data System (ADS)
Klager, Gene
2004-11-01
Real-time and detailed information is critical to the success of ground combat forces. Current manned reconnaissance, surveillance, and target acquisition (RSTA) capabilities are not sufficient to cover battlefield intelligence gaps, provide Beyond-Line-of-Sight (BLOS) targeting, and the ambush avoidance information necessary for combat forces operating in hostile situations, complex terrain, and conducting military operations in urban terrain. This paper describes a current US Army program developing advanced networked unmanned/unattended sensor systems to survey these gaps and provide the Commander with real-time, pertinent information. Networked Sensors for the Combat Forces plans to develop and demonstrate a new generation of low cost distributed unmanned sensor systems organic to the RSTA Element. Networked unmanned sensors will provide remote monitoring of gaps, will increase a unit's area of coverage, and will provide the commander organic assets to complete his Battlefield Situational Awareness (BSA) picture for direct and indirect fire weapons, early warning, and threat avoidance. Current efforts include developing sensor packages for unmanned ground vehicles, small unmanned aerial vehicles, and unattended ground sensors using advanced sensor technologies. These sensors will be integrated with robust networked communications and Battle Command tools for mission planning, intelligence "reachback", and sensor data management. The network architecture design is based on a model that identifies a three-part modular design: 1) standardized sensor message protocols, 2) Sensor Data Management, and 3) Service Oriented Architecture. This simple model provides maximum flexibility for data exchange, information management and distribution. Products include: Sensor suites optimized for unmanned platforms, stationary and mobile versions of the Sensor Data Management Center, Battle Command planning tools, networked communications, and sensor management software. Details of these products and recent test results will be presented.
Recce NG: from Recce sensor to image intelligence (IMINT)
NASA Astrophysics Data System (ADS)
Larroque, Serge
2001-12-01
Recce NG (Reconnaissance New Generation) is presented as a complete and optimized Tactical Reconnaissance System. Based on a new generation Pod integrating high resolution Dual Band sensors, the system has been designed with the operational lessons learnt from the last Peace Keeping Operations in Bosnia and Kosovo. The technical solutions retained as component modules of a full IMINT acquisition system take advantage of the state of the art in the following key technologies: Advanced Mission Planning System for long range stand-off Manned Recce, Aircraft and/or Pod tasking, operating sophisticated back-up software tools, high resolution 3D geo data and improved/combat proven MMI to reduce planning delays; Mature Dual Band sensor technology to achieve the Day and Night Recce Mission, including advanced automatic operational functions such as azimuth and roll tracking capabilities, low risk in Pod integration and in carrier avionics, controls and displays upgrades, to save time in operational turn over and maintenance; High rate Imagery Down Link, for Real Time or Near Real Time transmission, fully compatible with STANAG 7085 requirements; Advanced IMINT Exploitation Ground Segment, combat proven, NATO interoperable (STANAG 7023), integrating high value software tools for accurate location, improved radiometric image processing and an open link to the C4ISR systems. The choice of an industrial Prime contractor mastering the full system and all the prior listed key products and technologies is mandatory for a successful delivery in terms of low Cost, Risk and Time Schedule.
Multimodal Interaction in Ambient Intelligence Environments Using Speech, Localization and Robotics
ERIC Educational Resources Information Center
Galatas, Georgios
2013-01-01
An Ambient Intelligence Environment is meant to sense and respond to the presence of people, using its embedded technology. In order to effectively sense the activities and intentions of its inhabitants, such an environment needs to utilize information captured from multiple sensors and modalities. By doing so, the interaction becomes more natural…
Enhanced intelligence through optimized TCPED concepts for airborne ISR
NASA Astrophysics Data System (ADS)
Spitzer, M.; Kappes, E.; Böker, D.
2012-06-01
Current multinational operations show an increased demand for high quality actionable intelligence for different operational levels and users. In order to achieve sufficient availability, quality and reliability of information, various ISR assets are orchestrated within operational theatres. Especially airborne Intelligence, Surveillance and Reconnaissance (ISR) assets provide - due to their endurance, non-intrusiveness, robustness, wide spectrum of sensors and flexibility to mission changes - significant intelligence coverage of areas of interest. An efficient and balanced utilization of airborne ISR assets calls for advanced concepts for the entire ISR process framework including the Tasking, Collection, Processing, Exploitation and Dissemination (TCPED). Beyond this, the employment of current visualization concepts, shared information bases and information customer profiles, as well as an adequate combination of ISR sensors with different information age and dynamic (online) retasking process elements provides the optimization of interlinked TCPED processes towards higher process robustness, shorter process duration, more flexibility between ISR missions and, finally, adequate "entry points" for information requirements by operational users and commands. In addition, relevant Trade-offs of distributed and dynamic TCPED processes are examined and future trends are depicted.
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map is comprised of thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be close together in time of acquisition, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates a Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
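A minimal hill-climbing sketch of the local-search idea behind the GMG-COP: choose one image per scene so that image quality is high and acquisition dates of adjacent scenes stay close. The data layout, scoring weights and neighbour relation are illustrative, not the paper's actual formulation.

```python
# Greedy local search: repeatedly change one scene's selected image and keep
# the change if it lowers a combined quality/seasonal-discontinuity cost.
import random

# scene -> list of candidate images (quality in [0, 1], acquisition day-of-year)
candidates = {
    "s1": [(0.9, 120), (0.6, 200)],
    "s2": [(0.7, 130), (0.8, 210)],
    "s3": [(0.5, 125), (0.9, 205)],
}
adjacent = [("s1", "s2"), ("s2", "s3")]

def cost(selection, season_weight=0.01):
    quality_term = sum(1.0 - candidates[s][i][0] for s, i in selection.items())
    season_term = sum(abs(candidates[a][selection[a]][1] -
                          candidates[b][selection[b]][1])
                      for a, b in adjacent)
    return quality_term + season_weight * season_term

def local_search(iterations=1000, seed=0):
    rng = random.Random(seed)
    current = {s: rng.randrange(len(imgs)) for s, imgs in candidates.items()}
    for _ in range(iterations):
        s = rng.choice(list(candidates))
        neighbour = dict(current, **{s: rng.randrange(len(candidates[s]))})
        if cost(neighbour) < cost(current):       # greedy improvement step
            current = neighbour
    return current, cost(current)

print(local_search())
```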
CMOS Image Sensor with a Built-in Lane Detector.
Hsiao, Pei-Yung; Cheng, Hsien-Chein; Huang, Shih-Shinh; Fu, Li-Chen
2009-01-01
This work develops a new current-mode mixed signal Complementary Metal-Oxide-Semiconductor (CMOS) imager, which can capture images and simultaneously produce vehicle lane maps. The adopted lane detection algorithm, which was modified to be compatible with hardware requirements, can achieve a high recognition rate of up to approximately 96% under various weather conditions. Instead of a Personal Computer (PC) based system or an embedded platform equipped with an expensive high-performance Reduced Instruction Set Computer (RISC) or Digital Signal Processor (DSP) chip, the proposed imager, which requires no extra Analog to Digital Converter (ADC) circuits to transform signals, is a compact, lower-cost key-component chip. It is also an innovative component device that can be integrated into intelligent automotive lane departure systems. The chip size is 2,191.4 × 2,389.8 μm, and the package uses a 40-pin Dual In-line Package (DIP). The pixel cell size is 18.45 × 21.8 μm and the photodiode core size is 12.45 × 9.6 μm; the resulting fill factor is 29.7%.
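The chip implements its own current-mode lane detection in hardware; as a purely software analogue, the sketch below shows a conventional Canny-plus-probabilistic-Hough lane detection pass in OpenCV. The file name and thresholds are assumptions.

```python
# Detect lane-marking line segments in a road image: edges, a lower-half
# region of interest, then a probabilistic Hough transform.
import cv2
import numpy as np

frame = cv2.imread("road_frame.png")
if frame is None:
    raise FileNotFoundError("road_frame.png (illustrative file name)")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Keep only the lower half of the image, where lane markings appear.
mask = np.zeros_like(edges)
mask[edges.shape[0] // 2:, :] = 255
edges = cv2.bitwise_and(edges, mask)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("lane_map.png", frame)
```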
The implementation of intelligent home controller
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao
2018-04-01
This paper describes the operation of a smart home terminal controller and its hardware and software design. The controller demonstrates lamp ON-OFF and curtain UP-DOWN control by simulating the lamp and curtain under test. Sensors collect ambient information, such as light, temperature and humidity, and send it to the network. In addition, the smart home can be controlled from PCs. The terminal controller, based on ZigBee technology, is integrated into the smart home system and provides people with a convenient, safe and intelligent household experience.
Lei, Zhouyue; Wang, Quankang; Sun, Shengtong; Zhu, Wencheng; Wu, Peiyi
2017-06-01
In the past two decades, artificial skin-like materials have received increasing research interests for their broad applications in artificial intelligence, wearable devices, and soft robotics. However, profound challenges remain in terms of imitating human skin because of its unique combination of mechanical and sensory properties. In this work, a bioinspired mineral hydrogel is developed to fabricate a novel type of mechanically adaptable ionic skin sensor. Due to its unique viscoelastic properties, the hydrogel-based capacitive sensor is compliant, self-healable, and can sense subtle pressure changes, such as a gentle finger touch, human motion, or even small water droplets. It might not only show great potential in applications such as artificial intelligence, human/machine interactions, personal healthcare, and wearable devices, but also promote the development of next-generation mechanically adaptable intelligent skin-like devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An Informationally Structured Room for Robotic Assistance †
Tsuji, Tokuo; Mozos, Oscar Martinez; Chae, Hyunuk; Pyo, Yoonseok; Kusaka, Kazuya; Hasegawa, Tsutomu; Morooka, Ken'ichi; Kurazume, Ryo
2015-01-01
The application of assistive technologies for elderly people is one of the most promising and interesting scenarios for intelligent technologies in the present and near future. Moreover, the improvement of the quality of life for the elderly is one of the first priorities in modern countries and societies. In this work, we present an informationally structured room that is aimed at supporting the daily life activities of elderly people. This room integrates different sensor modalities in a natural and non-invasive way inside the environment. The information gathered by the sensors is processed and sent to a centralized management system, which makes it available to a service robot assisting the people. One important restriction of our intelligent room is reducing as much as possible any interference with daily activities. Finally, this paper presents several experiments and situations using our intelligent environment in cooperation with our service robot. PMID:25912347
Smart Sensors for Launch Vehicles
NASA Astrophysics Data System (ADS)
Ray, Sabooj; Mathews, Sheeja; Abraham, Sheena; Pradeep, N.; Vinod, P.
2017-12-01
Smart Sensors bring a paradigm shift in the data acquisition mechanism adopted for launch vehicle telemetry system. The sensors integrate signal conditioners, digitizers and communication systems to give digital output from the measurement location. Multiple sensors communicate with a centralized node over a common digital data bus. An in-built microcontroller gives the sensor embedded intelligence to carry out corrective action for sensor inaccuracies. A smart pressure sensor has been realized and flight-proven to increase the reliability as well as simplicity in integration so as to obtain improved data output. Miniaturization is achieved by innovative packaging. This work discusses the construction, working and flight performance of such a sensor.
A synthetic genetic edge detection program.
Tabor, Jeffrey J; Salis, Howard M; Simpson, Zachary Booth; Chevalier, Aaron A; Levskaya, Anselm; Marcotte, Edward M; Voigt, Christopher A; Ellington, Andrew D
2009-06-26
Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks.
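A minimal software sketch of the edge-finding logic the genetic program implements: a cell reports an edge only if it is lit and also receives the diffusible signal produced by nearby dark cells. Here the diffusion is approximated by dilating the dark mask by a few pixels; the radius and the toy image are illustrative.

```python
# Edge map as the AND of "in the light" and "within reach of the signal
# produced by dark cells", mirroring the genetic logic-gate design above.
import numpy as np
from scipy.ndimage import binary_dilation

def edge_map(light_mask, diffusion_radius=2):
    """light_mask: 2D boolean array, True where the image is lit."""
    dark = ~light_mask
    structure = np.ones((2 * diffusion_radius + 1,) * 2, dtype=bool)
    signal = binary_dilation(dark, structure=structure)   # diffusible signal reach
    return light_mask & signal                            # AND gate: light + signal

image = np.zeros((20, 20), dtype=bool)
image[5:15, 5:15] = True                                  # a lit square
print(edge_map(image).astype(int))
```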
Research on the Wireless Sensor Networks Applied in the Battlefield Situation Awareness System
NASA Astrophysics Data System (ADS)
Hua, Guan; Li, Yan-Xiao; Yan, Xiao-Mei
In modern warfare, information is the key to winning. Battlefield situation awareness contributes to grasping and retaining intelligence predominance. Due to their special characteristics, Wireless Sensor Networks (WSNs) have been widely used to realize reconnaissance and surveillance in joint operations and to provide simultaneous, comprehensive and accurate data to multi-echelon commanders and combatant personnel for decision making and rapid response. Military sensors have drawn great attention in ongoing projects, which have satisfied their initial design or research purposes. As the interface of the "Internet of Things", which will have an eye on every corner of the battlespace, WSNs play a necessary role in the integrated situation awareness system. WSNs, radar, infrared and other means work together to acquire situational intelligence for the deployed functional units and enhance combat effectiveness.
The life and death of ATR/sensor fusion and the hope for resurrection
NASA Astrophysics Data System (ADS)
Rogers, Steven K.; Sadowski, Charles; Bauer, Kenneth W.; Oxley, Mark E.; Kabrisky, Matthew; Rogers, Adam; Mott, Stephen D.
2008-04-01
For over half a century, scientists and engineers have worked diligently to advance computational intelligence. One application of interest is how computational intelligence can bring value to our war fighters. Automatic Target Recognition (ATR) and sensor fusion efforts have fallen far short of the desired capabilities. In this article we review the capabilities requested by war fighters. When compared to our current capabilities, it is easy to conclude current Combat Identification (CID) as a Family of Systems (FoS) does a lousy job. The war fighter needed capable, operationalized ATR and sensor fusion systems ten years ago but it did not happen. The article reviews the war fighter needs and the current state of the art. The article then concludes by looking forward to where we are headed to provide the capabilities required.
Research and Development Annual Report, 1992
NASA Technical Reports Server (NTRS)
1993-01-01
Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 42 additional JSC projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.
NASA Astrophysics Data System (ADS)
Cherkasov, Kirill V.; Gavrilova, Irina V.; Chernova, Elena V.; Dokolin, Andrey S.
2018-05-01
The article discusses selected aspects of the development of an intelligent gesture recognition system. The distinctive feature of the system is its intelligence block, which is based entirely on open technologies: the OpenCV library and the Microsoft Cognitive Toolkit (CNTK) platform. The article presents the rationale for the choice of this set of tools, as well as the functional scheme of the system and the hierarchy of its modules. Experiments have shown that the system correctly recognizes about 85% of images received from the sensors. The authors expect that improvement of the system's algorithmic block will increase the accuracy of gesture recognition up to 95%.
Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle
Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou
2012-01-01
This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several necessary functions of autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on the Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.
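A minimal sketch of the Haar-feature traffic sign detection stage described above, assuming a cascade has already been trained offline; the cascade file name and detection parameters are hypothetical, and the SURF-based classification of the detected regions is omitted.

```python
# Run a (hypothetical) trained Haar cascade over a street image and draw the
# detected traffic sign regions.
import cv2

cascade = cv2.CascadeClassifier("traffic_sign_cascade.xml")   # hypothetical trained model
if cascade.empty():
    raise FileNotFoundError("traffic_sign_cascade.xml (illustrative file name)")

frame = cv2.imread("street_scene.png")
if frame is None:
    raise FileNotFoundError("street_scene.png (illustrative file name)")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                      minSize=(24, 24))
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detections.png", frame)
```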
The JSC Research and Development Annual Report 1993
NASA Technical Reports Server (NTRS)
1994-01-01
Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 47 additional projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.
NASA Astrophysics Data System (ADS)
Sadat, Mojtaba T.; Viti, Francesco
2015-02-01
Machine vision is rapidly gaining popularity in the field of Intelligent Transportation Systems. In particular, advantages are foreseen by the exploitation of Aerial Vehicles (AV) in delivering a superior view on traffic phenomena. However, vibration on AVs makes it difficult to extract moving objects on the ground. To partly overcome this issue, image stabilization/registration procedures are adopted to correct and stitch multiple frames taken of the same scene but from different positions, angles, or sensors. In this study, we examine the impact of multiple feature-based techniques for stabilization, and we show that SURF detector outperforms the others in terms of time efficiency and output similarity.
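A minimal sketch of the feature-based stabilization/registration step: detect keypoints in two frames, match them, estimate a homography and warp the current frame onto the reference. The study found SURF to perform best; ORB is used here only because SURF lives in OpenCV's non-free contrib module. File names are assumptions.

```python
# Register one aerial frame to a reference frame by matching local features
# and estimating a RANSAC homography, then warp to cancel platform motion.
import cv2
import numpy as np

ref = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
if ref is None or cur is None:
    raise FileNotFoundError("frame_000.png / frame_001.png (illustrative names)")

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(cur, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

stabilized = cv2.warpPerspective(cur, H, (ref.shape[1], ref.shape[0]))
cv2.imwrite("frame_001_stabilized.png", stabilized)
```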
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper a distributed intelligent system for civil engineering structures on-line measurement, remote monitoring, and data archiving is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously the server performs checks, ordered by the operator, which may in turn result with an alert or a specific action. The structure of whole system is analyzed along with the discussion on possible fields of application and the ways to provide a relevant security during data transport. Finally, a working implementation consisting of a fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and Oracle XE database is presented. The results from database utilized for on-line monitoring of a threshold value of strain for an exemplary area of interest at the engineering structure are presented and discussed.
High Temperature Wireless Communication And Electronics For Harsh Environment Applications
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Neudeck, P. G.; Beheim, G. M.; Ponchak, G. E.; Chen, L.-Y
2007-01-01
In order for future aerospace propulsion systems to meet the increasing requirements for decreased maintenance, improved capability, and increased safety, the inclusion of intelligence into the propulsion system design and operation becomes necessary. These propulsion systems will have to incorporate technology that will monitor propulsion component conditions, analyze the incoming data, and modify operating parameters to optimize propulsion system operations. This implies the development of sensors, actuators, and electronics, with associated packaging, that will be able to operate under the harsh environments present in an engine. However, given the harsh environments inherent in propulsion systems, the development of engine-compatible electronics and sensors is not straightforward. The ability of a sensor system to operate in a given environment often depends as much on the technologies supporting the sensor element as the element itself. If the supporting technology cannot handle the application, then no matter how good the sensor is itself, the sensor system will fail. An example is high temperature environments where supporting technologies are often not capable of operation in engine conditions. Further, for every sensor going into an engine environment, i.e., for every new piece of hardware that improves the in-situ intelligence of the components, communication wires almost always must follow. The communication wires may be within or between parts, or from the engine to the controller. As more hardware is added, more wires, weight, complexity, and potential for unreliability is also introduced. Thus, wireless communication combined with in-situ processing of data would significantly improve the ability to include sensors into high temperature systems and thus lead toward more intelligent engine systems. NASA Glenn Research Center (GRC) is presently leading the development of electronics, communication systems, and sensors capable of prolonged stable operation in harsh 500C environments. This has included world record operation of SiC-based transistor technology (including packaging) that has demonstrated continuous electrical operation at 500C for over 2000 hours. Based on SiC electronics, development of high temperature wireless communication has been on-going. This work has concentrated on maturing the SiC electronic devices for communication purposes as well as the passive components such as resistors and capacitors needed to enable a high temperature wireless system. The objective is to eliminate wires associated with high temperature sensors which add weight to a vehicle and can be a cause of sensor unreliability. This paper discusses the development of SiC based electronics and wireless communications technology for harsh environment applications such as propulsion health management systems and in Venus missions. A brief overview of the future directions in sensor technology is given including maturing of near-room temperature "Lick and Stick" leak sensor technology for possible implementation in the Crew Launch Vehicle program. Then an overview of high temperature electronics and the development of high temperature communication systems is presented. The maturity of related technologies such as sensor and packaging will also be discussed. It is concluded that a significant component of efforts to improve the intelligence of harsh environment operating systems is the development and implementation of high temperature wireless technology
HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica
Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are embedded directly on the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.
[Temperature Measurement with Bluetooth under Android Platform].
Wang, Shuai; Shen, Hao; Luo, Changze
2015-03-01
The goal was to realize real-time transmission and display of temperature data using a smartphone and Bluetooth. An Arduino Uno R3 acquires temperature data from a DS18B20 digital temperature sensor and transmits it through an HC-05 Bluetooth module to an Android smartphone, realizing wireless transmission of the temperature data. An application program written in Java under the Android development environment provides real-time display and storage of the temperature data and plots temperature fluctuation graphs. The temperature sensor was experimentally tested and meets the precision and accuracy requirements of body temperature measurement. This work can serve as a reference for the development of other smartphone-based mobile medical products.
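The phone-side application in the paper is written in Java on Android; as an illustration of the same data path, the sketch below (Python, not from the paper) reads temperature lines forwarded by an HC-05 module bound to a local serial port. The port name, baud rate and one-value-per-line format are assumptions.

```python
# Minimal sketch (not from the paper): reading DS18B20 temperature values
# forwarded by an HC-05 Bluetooth module bound to a serial port.
# Assumptions: the Arduino prints one Celsius reading per line, e.g. "36.50",
# and the HC-05 appears locally as /dev/rfcomm0 at 9600 baud.
import serial  # pyserial

def read_temperatures(port="/dev/rfcomm0", baud=9600, n=10):
    readings = []
    with serial.Serial(port, baud, timeout=2) as link:
        while len(readings) < n:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                readings.append(float(line))  # one Celsius value per line
            except ValueError:
                pass  # skip malformed lines
    return readings

if __name__ == "__main__":
    temps = read_temperatures()
    print("mean temperature: %.2f C" % (sum(temps) / len(temps)))
```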
ARL participation in the C4ISR OTM experiment: integration and performance results
NASA Astrophysics Data System (ADS)
Zong, Lei; O'Brien, Barry J.
2007-04-01
The Command, Control, Communication, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) On-The-Move (OTM) demonstration is an annual showcase of how innovative technologies can help modern troops increase their situational awareness (SA) in battlefield environments. To evaluate the effectiveness these new technologies have on the soldiers' abilities to gather situational information, the demonstration involves United States Army National Guard troops in realistic war game scenarios at an Army Reserve training ground. The Army Research Laboratory (ARL) was invited to participate in the event, with the objective of demonstrating system-level integration of disparate technologies developed for gathering SA information in small unit combat operations. ARL provided expertise in Unattended Ground Sensing (UGS) technology, Unmanned Ground Vehicle (UGV) technology, information processing, and wireless mobile ad hoc communication. The ARL C4ISR system included a system of multimodal sensors (MMS), a trip wire imager, a man-portable robotic vehicle (PackBot), and low-power sensor radios for communication between the ARL system and a hosting platoon vehicle. This paper focuses on the integration effort of bringing the multiple families of sensor assets together into a working system.
What Does Neuroscience and Cognitive Psychology Tell Us about Multiple Intelligence
ERIC Educational Resources Information Center
Bauer, Richard H.
2009-01-01
Studies that have used noninvasive brain imaging techniques to record neocortical activity while individuals were performing cognitive intelligence tests (traditional intelligence) and social intelligence tests were reviewed. In cognitive intelligence tests 16 neocortical areas were active, whereas in social intelligence 10 areas were active.…
Adaptive Sampling in Autonomous Marine Sensor Networks
2006-06-01
Excerpts from the report: a high-performance preamplifier with low noise characteristics is vital to obtaining quality sonar data. The work was supported by research assistantships through the Generic Ocean Array Technology Sonar (GOATS) project, contracts N00014-97-1-0202 and N00014-05-G-0106. The report also covers formation behavior and an AUV intelligent sensor for real-time adaptive sensing, including a logical sonar sensor.
Intelligent self-organization methods for wireless ad hoc sensor networks based on limited resources
NASA Astrophysics Data System (ADS)
Hortos, William S.
2006-05-01
A wireless ad hoc sensor network (WSN) is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support, and sensor nodes communicate with each other only when they are in transmission range. To a greater degree than the terminals found in mobile ad hoc networks (MANETs) for communications, sensor nodes are resource-constrained, with limited computational processing, bandwidth, memory, and power, and are typically unattended once in operation. Consequently, the level of information exchange among nodes needed to support complex adaptive algorithms that establish network connectivity and optimize throughput not only depletes those limited resources and creates high overhead in narrowband communications, but also increases network vulnerability to eavesdropping by malicious nodes. Cooperation among nodes, critical to the mission of sensor networks, can thus be disrupted by an inappropriate choice of the method for self-organization. Recently published contributions to the self-configuration of ad hoc sensor networks, e.g., self-organizing mapping and swarm intelligence techniques, have been based on the adaptive control of the cross-layer interactions found in MANET protocols to achieve one or more performance objectives: connectivity, intrusion resistance, power control, throughput, and delay. However, few studies have examined the performance of these algorithms when implemented with the limited resources of WSNs. In this paper, self-organization algorithms for the initiation, operation, and maintenance of a network topology from a collection of wireless sensor nodes are proposed that improve the performance metrics significant to WSNs. The intelligent algorithm approach emphasizes low computational complexity, energy efficiency, and robust adaptation to change, allowing distributed implementation with the actual limited resources of the cooperative nodes of the network. Extensions of the algorithms from flat topologies to two-tier hierarchies of sensor nodes are presented. Results from a few simulations of the proposed algorithms are compared to the published results of other approaches to sensor network self-organization in common scenarios. The estimated network lifetime and extent under static resource allocations are computed.
Evolution of an Intelligent Information Fusion System
NASA Technical Reports Server (NTRS)
Campbell, William J.; Cromp, Robert F.
1990-01-01
Consideration is given to the hardware and software needed to manage the enormous amount and complexity of data that the next generation of space-borne sensors will provide. An anthology is presented illustrating the evolution of artificial intelligence, science data processing, and management from the 1960s to the near future. Problems and limitations of technologies, data structures, data standards, and conceptual thinking are addressed. The development of an end-to-end Intelligent Information Fusion System that embodies knowledge of the user's domain-specific goals is proposed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.R. (Netrologic, Inc., San Diego, CA)
1988-01-01
Topics presented include integrating neural networks and expert systems, neural networks and signal processing, machine learning, cognition and avionics applications, artificial intelligence and man-machine interface issues, real time expert systems, artificial intelligence, and engineering applications. Also considered are advanced problem solving techniques, combinational optimization for scheduling and resource control, data fusion/sensor fusion, back propagation with momentum, shared weights and recurrency, automatic target recognition, cybernetics, optical neural networks.
Binary video codec for data reduction in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias
2013-02-01
Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, stadium/airport monitoring for security reasons, and many more. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable. So the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing the images and wide communication bandwidth for the transmission of the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need to design other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduces the information amount in the change frame. But if the number of objects in the change frames is higher than a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We propose to implement all three compression techniques, i.e., image coding, change coding and ROI coding, at the VSN and then select the smallest bit stream among the results of the three compression techniques. In this way the compression performance of the BVC will never become worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
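The selection logic described above can be sketched as follows (illustrative Python, not the authors' implementation); a toy run-length coder stands in for the real bi-level codec, and the 16-bits-per-run cost model is an assumption.

```python
# Sketch of the BVC selection idea (not the authors' implementation):
# encode a bi-level frame three ways and keep the smallest bit stream.
# A toy run-length coder stands in for the real bi-level codec.
import numpy as np

def rle_bits(binary_img):
    """Very rough stand-in for a bi-level coder: run-length cost in bits."""
    flat = binary_img.flatten()
    runs = 1 + np.count_nonzero(np.diff(flat))
    return runs * 16  # assume 16 bits per run (toy model)

def change_frame(curr, prev):
    return np.bitwise_xor(curr, prev)          # change coding input

def roi_bits(change):
    """Code only the bounding box of changed pixels plus its coordinates."""
    ys, xs = np.nonzero(change)
    if len(xs) == 0:
        return 64                               # just the "empty" header
    box = change[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return rle_bits(box) + 64                   # payload + box coordinates

def bvc_select(curr, prev):
    candidates = {
        "image":  rle_bits(curr),
        "change": rle_bits(change_frame(curr, prev)),
        "roi":    roi_bits(change_frame(curr, prev)),
    }
    mode = min(candidates, key=candidates.get)
    return mode, candidates[mode]

prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy(); curr[10:20, 10:20] = 1      # one small moving object
print(bvc_select(curr, prev))                   # expect 'roi' or 'change' to win
```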
Analysis of image thresholding segmentation algorithms based on swarm intelligence
NASA Astrophysics Data System (ADS)
Zhang, Yi; Lu, Kai; Gao, Yinghui; Yang, Bo
2013-03-01
Swarm intelligence-based image thresholding segmentation algorithms play an important role in the research field of image segmentation. In this paper, we briefly introduce the theory of four existing image segmentation algorithms based on swarm intelligence: the fish swarm algorithm, artificial bee colony, bacterial foraging algorithm, and particle swarm optimization. Then several image benchmarks are tested in order to show the differences in segmentation accuracy, time consumption, convergence, and robustness to salt-and-pepper noise and Gaussian noise among these four algorithms. Through these comparisons, this paper gives a qualitative analysis of the performance variance of the four algorithms. The conclusions in this paper provide a useful guide for practical image segmentation.
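As a concrete example of one of the four approaches, the sketch below (Python, not from the paper) uses particle swarm optimisation to maximise Otsu's between-class variance for a single threshold; swarm size, inertia, and acceleration coefficients are placeholder values.

```python
# Illustrative sketch: particle swarm optimisation of a single Otsu threshold.
# Not the paper's code; parameter values are placeholders.
import numpy as np

def between_class_variance(hist, t):
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def pso_threshold(image, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    pos = np.random.uniform(1, 255, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(hist, int(t)) for t in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles), np.random.rand(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 255)
        val = np.array([between_class_variance(hist, int(t)) for t in pos])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmax()]
    return int(gbest)

# Bimodal synthetic "image": two intensity populations around 60 and 180.
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 10, 5000)]).clip(0, 255)
print("threshold:", pso_threshold(img))
```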
Abstract: As part of the Petroleum Refinery Risk and Technology Review, New Source Performance Standards rule, US EPA is proposing use of two-week passive sorbent tube fenceline monitoring for benzene. With recent technological advances, low-cost time-resolved sensors may become...
NASA Astrophysics Data System (ADS)
Mohammed, Ali Ibrahim Ali
The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit, and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and spatial resolution approaching the Abbe diffraction limit of a fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse was performing a memory task, in order to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to Parkinson's disease. We found that inhibition of the motor cortex does not alter the exaggerated beta oscillations in the striatum that are associated with parkinsonism. Together, these results demonstrate the potential of developing integrated optogenetic systems to advance our understanding of the principles underlying neural network computation, which would have broad applications from advancing artificial intelligence to disease diagnosis and treatment.
Fast Plasma Instrument for MMS: Data Compression Simulation Results
NASA Technical Reports Server (NTRS)
Barrie, A.; Adrian, Mark L.; Yeh, P.-S.; Winkert, G. E.; Lobell, J. V.; Vinas, A.F.; Simpson, D. J.; Moore, T. E.
2008-01-01
The Magnetospheric Multiscale (MMS) mission will study small-scale reconnection structures and their rapid motions from closely spaced platforms using instruments capable of high angular, energy, and time resolution measurements. To meet these requirements, the Fast Plasma Instrument (FPI) consists of eight (8) identical half-top-hat electron sensors, eight (8) identical ion sensors, and an Instrument Data Processing Unit (IDPU). The sensors (electron or ion) are grouped into pairs whose 6 deg x 180 deg fields-of-view (FOV) are set 90 deg apart. Each sensor is equipped with electrostatic aperture steering that allows it to scan a 45 deg x 180 deg fan about its nominal viewing (0 deg deflection) direction. Each pair of sensors, known as the Dual Electron Spectrometer (DES) and the Dual Ion Spectrometer (DIS), occupies a quadrant on the MMS spacecraft, and the combination of the eight electron/ion sensors, employing aperture steering, images the full sky every 30 ms (electrons) and 150 ms (ions), respectively. As a result, the DES complement of a given spacecraft generates 6.5 Mb/s of electron data while the DIS generates 1.1 Mb/s of ion data, yielding an FPI total data rate of 6.6 Mb/s. The FPI electron/ion data are collected by the IDPU and then transmitted to the Central Instrument Data Processor (CIDP) on the spacecraft for science interest ranking. Only data sequences that contain the greatest amount of temporal/spatial structure will be intelligently down-linked by the spacecraft. Currently, the FPI data rate allocation to the CIDP is 1.5 Mb/s. Consequently, the FPI-IDPU must employ data/image compression to meet this CIDP telemetry allocation. Here, we present simulations of the CCSDS 122.0-B-1 algorithm-based compression of the FPI-DES electron data. The compression analysis is based upon a seed of re-processed Cluster/PEACE electron measurements. Topics to be discussed include: review of the compression algorithm; data quality; data formatting/organization; and implications for data/matrix pruning. To conclude, a presentation of the baselined FPI data compression approach is provided.
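For orientation, the quoted rates already fix the compression requirement; a quick check under the stated 6.6 Mb/s source rate and 1.5 Mb/s allocation:

```python
# Back-of-envelope check of the compression requirement quoted above
# (rates in Mb/s as given in the abstract; simple ratio only).
fpi_total_rate = 6.6      # Mb/s produced by FPI (DES + DIS)
cidp_allocation = 1.5     # Mb/s allocated to FPI on the CIDP link
required_ratio = fpi_total_rate / cidp_allocation
print("required compression ratio: %.1f : 1" % required_ratio)   # about 4.4 : 1
```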
Video Diagnostic for W7-X Stellarator
NASA Astrophysics Data System (ADS)
Sárközi, J.; Grosser, K.; Kocsis, G.; König, R.; Neuner, U.; Molnár, Á.; Petravich, G.; Por, G.; Porempovics, G.; Récsei, S.; Szabó, V.; Szappanos, A.; Zoletnik, S.
2008-03-01
The video diagnostic for W7-X, which is under development, is devoted to observing the plasma and first-wall elements during operation, to warning in case of hot spots and dangerous heat loads, and to giving information about the plasma size, position, and edge structure, the geometry and location of magnetic islands, and the distribution of impurities. The video diagnostic will be mounted on the tangential AEQ ports of the torus, which are not straight, have a length of about 2 m and a typical diameter of 0.1 m, which makes its realization more difficult. The geometry of the 10 tangential views of the AEQ ports gives an almost complete overview of the vessel interior, making this diagnostic indispensable for machine operation. Different concepts for the diagnostic were investigated and finally the following design was selected. As a large heat load is expected on the optical window located at the plasma-facing end of the AEQ port, the port window is protected by a cooled pinhole. An uncooled shutter located behind the pinhole can be closed to prevent window contamination during vessel conditioning discharges (glow discharge cleaning) and from inter-pulse deposition of soft a-C:H layers. The imaging optics and the detection sensor are located behind the port window in the port tube, which will be under atmospheric pressure. To detect the visible radiation distribution, a new camera system called the Event Detection Intelligent Camera (EDICAM) is under development. The system is divided into three major separate components. The Sensor Module contains only the selected CMOS sensor, the analog-to-digital converters, and the minimal electronics necessary for communication with the subsequent camera system module, called the Image Processing and Control Unit (IPCU). Its simple structure makes the Sensor Module suitable for operation despite being exposed to ionizing (neutron, γ) radiation. The IPCU, which can be located far from the Sensor Module and therefore far from the plasma, is designed to perform real-time evaluation of the images, detecting predefined events, managing the sensor read-out and the input triggers, and producing the output triggers generated by the detected events. The IPCU can also be used to reduce the amount of stored data. A standard 10 Gigabit Ethernet fiber-optic connection links the IPCU module to the PC using the GigE Vision communication protocol.
Maritime Domain Awareness: C4I for the 1000 Ship Navy
2009-12-04
Excerpts from the report: functions include directing unit action, providing unit sensed contacts, coordinating unit operations, processing unit information, and releasing images and contact reports; further functions cover intelligence tasking requests, intelligence summaries, release of unit person and vessel incidents, processing of intelligence tasking, and image release. (The remaining text is list-of-figures residue, e.g., Functional Problem Sequence Process Flow.)
Intelligent Agent Architectures: Reactive Planning Testbed
NASA Technical Reports Server (NTRS)
Rosenschein, Stanley J.; Kahn, Philip
1993-01-01
An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents, including complex robotic agents, constructed using various architectural approaches can be subjected.
Intelligent Propulsion System Foundation Technology: Summary of Research
NASA Technical Reports Server (NTRS)
Williams, James C.
2004-01-01
The purpose of this cooperative agreement was to develop a foundation of intelligent propulsion technologies for NASA and industry that will have an impact on safety, noise, emissions and cost. These intelligent engine technologies included sensors, electronics, communications, control logic, actuators, and smart materials and structures. Furthermore this cooperative agreement helped prepare future graduates to develop the revolutionary intelligent propulsion technologies that will be needed to ensure pre-eminence of the U.S. aerospace industry. The program consisted of three primary research areas (and associated work elements at Ohio universities): 1.0 Turbine Engine Prognostics, 2.0 Active Controls for Emissions and Noise Reduction, and 3.0 Active Structural Controls.
Compact SAR and Small Satellite Solutions for Earth Observation
NASA Astrophysics Data System (ADS)
LaRosa, M.; L'Abbate, M.
2016-12-01
Requirements for near and short term mission applications (Observation and Reconnaissance, SIGINT, Early Warning, Meteorology, ...) are increasingly calling for spacecraft operational responsiveness, flexible configuration, and lower cost satellite constellations and flying formations, to improve both the temporal performance of observation systems (revisit, response time) and the remote sensing techniques (distributed sensors, arrays, cooperative sensors). In answer to these users' needs, leading actors in Space Systems for EO are involved in the development of Small and Microsatellite solutions. Thales Alenia Space (TAS) has started the "COMPACT-SAR" project to develop a SAR satellite characterized by low cost and reduced mass while providing, at the same time, high image quality in terms of resolution, swath size, and radiometric performance. Compact SAR will embark an X-band SAR based on a deployable reflector antenna fed by an active phased array feed. This concept allows high performance, providing the capability of electronic beam steering in both the azimuth and elevation planes and improving operational performance over a purely mechanically steered SAR system. The instrument provides both STRIPMAP and SPOTLIGHT modes and, thanks to its very high gain antenna, can also provide a real maritime surveillance mode based on a patented low-PRF radar mode. Further developments are in progress considering missions based on Microsatellite technology, which can provide effective solutions for different user needs, such as operational responsiveness, low cost constellations, distributed observation concepts, and flying formations, and can be conceived for applications in the fields of Observation, Atmosphere sensing, Intelligence, Surveillance, Reconnaissance (ISR), and Signal Intelligence. To satisfy these requirements, the flexibility of small platforms is a key driver, especially new miniaturization technologies able to optimize performance. An overview of new microsatellite (based on the NIMBUS platform) and mission concepts is provided, such as passive SAR for multi-static imaging and tandem operation, a medium swath/medium resolution dual-pol MICROSAR in L-, C- and X-band for multi-application maritime surveillance and land monitoring, and applications for Space Debris monitoring, precision farming, and Atmosphere sensing.
Markov logic network based complex event detection under uncertainty
NASA Astrophysics Data System (ADS)
Lu, Jingyang; Jia, Bin; Chen, Genshe; Chen, Hua-mei; Sullivan, Nichole; Pham, Khanh; Blasch, Erik
2018-05-01
In a cognitive reasoning system, the four-stage Observe-Orient-Decide-Act (OODA) reasoning loop is of interest. The OODA loop is essential for situational awareness, especially in heterogeneous data fusion. Cognitive reasoning for making decisions can take advantage of different formats of information, such as symbolic observations, various real-world sensor readings, or the relationships between intelligent modalities. A Markov Logic Network (MLN) provides a mathematically sound technique for representing and fusing data at multiple levels of abstraction, and across multiple intelligent sensors, to conduct complex decision-making tasks. In this paper, a scenario about vehicle interaction is investigated, in which uncertainty is taken into consideration since no systematic approach can perfectly characterize the complex event scenario. MLNs are applied to the terrestrial domain, where the dynamic features and relationships among vehicles are captured through multiple sensors and information sources, with the data uncertainty taken into account.
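A toy illustration of the MLN idea is sketched below (Python, not the authors' model): the probability of a world is proportional to the exponential of the summed weights of satisfied ground rules. The rule set, weights, and the "dangerous approach" event are invented for illustration.

```python
# Toy illustration of the MLN idea (not the authors' model): the probability of
# a world is proportional to exp(sum of weights of satisfied ground rules).
# Rule names, weights and the "dangerous approach" event are made up here.
import math

evidence = {"close(v1,v2)": True, "accelerating(v1)": True}

# (weight, antecedent atoms, consequent atom): a soft implication
rules = [
    (2.0, ["close(v1,v2)", "accelerating(v1)"], "dangerous(v1,v2)"),
    (0.5, [], "dangerous(v1,v2)"),   # weak prior toward the event
]

def world_score(world):
    score = 0.0
    for w, body, head in rules:
        body_true = all(world[a] for a in body)
        satisfied = (not body_true) or world[head]   # material implication
        if satisfied:
            score += w
    return score

query = "dangerous(v1,v2)"
scores = {}
for value in (True, False):
    world = dict(evidence); world[query] = value
    scores[value] = math.exp(world_score(world))

p_event = scores[True] / (scores[True] + scores[False])
print("P(dangerous approach) = %.3f" % p_event)
```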
NASA Astrophysics Data System (ADS)
Antony, Joby; Mathuria, D. S.; Chaudhary, Anup; Datta, T. S.; Maity, T.
2017-02-01
Cryogenic networks for linear accelerator operations demand a large number of cryogenic sensors, associated instruments, and other control instrumentation to measure, monitor, and control different cryogenic parameters remotely. Here we describe an alternative approach of six types of newly designed integrated intelligent cryogenic instruments, called device servers, in which the complete circuitry for the various sensor-specific front-end analog instrumentation and a common digital back-end HTTP server are built together, yielding a crateless, PLC-free model of controls and data acquisition. These instruments, each sensor-specific, viz. the LHe server, LN2 server, control output server, pressure server, vacuum server, and temperature server, are deployed entirely over LAN for the cryogenic operations of the IUAC linac (Inter University Accelerator Centre linear accelerator), New Delhi. This indigenous design provides salient features such as global connectivity, low cost due to the crateless model, easy signal processing due to the integrated design, reduced cabling, device interconnectivity, etc.
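The "device-server" idea, an instrument with an embedded HTTP back-end serving sensor readings, can be sketched minimally as below (Python); the /temperature endpoint, the JSON payload, and the simulated probe are illustrative, not details of the IUAC design.

```python
# Minimal sketch of the "device-server" idea: an instrument-embedded HTTP
# endpoint that serves the latest sensor reading as JSON. The /temperature
# path and the read_sensor() stub are illustrative, not from the IUAC design.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensor():
    return 4.2 + random.uniform(-0.05, 0.05)   # stand-in for a cryogenic probe

class DeviceServer(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/temperature":
            payload = json.dumps({"temperature_K": round(read_sensor(), 3)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any LAN client can then poll http://<host>:8080/temperature
    HTTPServer(("0.0.0.0", 8080), DeviceServer).serve_forever()
```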
Towards a Bio-inspired Security Framework for Mission-Critical Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Ren, Wei; Song, Jun; Ma, Zhao; Huang, Shiyong
Mission-critical wireless sensor networks (WSNs) have found numerous promising applications in civil and military fields. However, the functionality of WSNs relies extensively on their security capability for detecting and defending against sophisticated adversaries, such as Sybil, wormhole, and mobile adversaries. In this paper, we propose a bio-inspired security framework to provide intelligence-enabled security mechanisms. The scheme is composed of a middleware, multiple agents, and mobile agents. The agents monitor network packets and host activities, make decisions, and launch corresponding responses. The middleware provides an infrastructure for communication between the various agents and for the corresponding mobility. Certain cognitive models and intelligent algorithms, such as the Layered Reference Model of the Brain and Self-Organizing Neural Networks with Competitive Learning, are explored in the context of sensor networks that have resource constraints. The security framework and its implementation are also described in detail.
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things
Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing
2015-01-01
The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004
Implementation of Wireless and Intelligent Sensor Technologies in the Propulsion Test Environment
NASA Technical Reports Server (NTRS)
Solano, Wanda M.; Junell, Justin C.; Shumard, Kenneth
2003-01-01
From the first Saturn V rocket booster (S-II-T) testing in 1966 and the routine Space Shuttle Main Engine (SSME) testing beginning in 1975, to more recent test programs such as the X-33 Aerospike Engine, the Integrated Powerhead Development (IPD) program, and the Hybrid Sounding Rocket (HYSR), Stennis Space Center (SSC) continues to be a premier location for conducting large-scale propulsion testing. Central to each test program is the capability for sensor systems to deliver reliable measurements and high quality data, while also providing a means to monitor the test stand area to the highest degree of safety and sustainability. As part of an on-going effort to enhance the testing capabilities of Stennis Space Center, the Test Technology and Development group is developing and applying a number of wireless and intelligent sensor technologies in ways that are new to the existing test environment.
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Painho, M.
2017-09-01
The paper endeavours to enhance the Sensor Web with crucial geospatial analysis capabilities through integration with Spatial Data Infrastructure. The objective is the development of an automated smart cities intelligence system (SMACiSYS) with sensor-web access (SENSDI) utilizing geomatics for sustainable societies. There has been a need to develop an automated integrated system to categorize events and issue information that reaches users directly. At present, no web-enabled information system exists which can disseminate messages after event evaluation in real time. The research work formalizes the notion of an integrated, independent, generalized, and automated geo-event analysing system making use of geospatial data on a widely used platform. Integrating Sensor Web With Spatial Data Infrastructures (SENSDI) aims to extend SDIs with sensor web enablement, converging geospatial and built infrastructure, and to implement test cases with sensor data and SDI. The other benefit, conversely, is the expansion of spatial data infrastructure to utilize the sensor web, dynamically and in real time, for the smart applications that smarter cities demand nowadays. Hence, SENSDI augments existing smart city platforms utilizing the sensor web and spatial information, achieved by coupling pairs of otherwise disjoint interfaces and APIs formulated by the Open Geospatial Consortium (OGC), keeping the entire platform open access and open source. SENSDI is based on Geonode, QGIS and Java, which bind most of the functionalities of the Internet, the sensor web and, nowadays, the Internet of Things, superseding the Internet of Sensors as well. In a nutshell, the project delivers a generalized, real-time accessible and analysable platform for sensing the environment and mapping the captured information for optimal decision-making and societal benefit.
Some Defence Applications of Civilian Remote Sensing Satellite Images
1993-11-01
This report is on a pilot study to demonstrate some of the capabilities of remote sensing in intelligence gathering. A wide variety of issues, both... colour images. The procedure will be presented in a companion report. Keywords: remote sensing, satellite imagery, image analysis, military applications, military intelligence.
Automatic food detection in egocentric images using artificial intelligence technology
USDA-ARS?s Scientific Manuscript database
Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...
al-Rifaie, Mohammad Majid; Aber, Ahmed; Hemanth, Duraiswamy Jude
2015-12-01
This study proposes an umbrella deployment of a swarm intelligence algorithm, stochastic diffusion search, for medical imaging applications. After summarising the results of some previous works, which show how the algorithm assists in the identification of metastasis in bone scans and microcalcifications on mammographs, the use of the algorithm in assessing CT images of the aorta is demonstrated for the first time, along with its performance in detecting the nasogastric tube in chest X-rays. The swarm intelligence algorithm presented in this study is adapted to address these particular tasks, and its functionality is investigated by running the swarms on sample CT images and X-rays whose status has been determined by senior radiologists. In addition, a hybrid swarm intelligence-learning vector quantisation (LVQ) approach is proposed in the context of magnetic resonance (MR) brain image segmentation. Particle swarm optimisation is used to train the LVQ, which eliminates the iteration-dependent nature of LVQ. The proposed methodology is used to detect the tumour regions in abnormal MR brain images.
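For readers unfamiliar with the algorithm, a minimal stochastic diffusion search sketch is given below (Python, illustrative only): agents hold candidate positions, each tests one randomly chosen micro-feature, and inactive agents recruit hypotheses from active ones. The medical imaging applications use the same test/diffusion mechanics over image positions.

```python
# Minimal stochastic diffusion search sketch (illustrative, not the paper's
# code): agents hold candidate positions in a search space, test a randomly
# chosen micro-feature, and inactive agents recruit hypotheses from active ones.
import random

search_space = "xxxxxxhexlxoxxxxxxhelloxxxxxx"
target = "hello"
n_agents, iters = 30, 50

hypotheses = [random.randrange(len(search_space) - len(target))
              for _ in range(n_agents)]
active = [False] * n_agents

for _ in range(iters):
    # Test phase: each agent checks one randomly chosen character of the target.
    for i, h in enumerate(hypotheses):
        j = random.randrange(len(target))
        active[i] = search_space[h + j] == target[j]
    # Diffusion phase: inactive agents copy an active agent's hypothesis,
    # or re-seed at random if the chosen agent is also inactive.
    for i in range(n_agents):
        if not active[i]:
            k = random.randrange(n_agents)
            hypotheses[i] = (hypotheses[k] if active[k]
                             else random.randrange(len(search_space) - len(target)))

best = max(set(hypotheses), key=hypotheses.count)
print("largest cluster of agents at index", best)   # expect the index of "hello"
```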
Zhang, Jun; Tian, Gui Yun; Marindra, Adi M. J.; Sunny, Ali Imam; Zhao, Ao Bo
2017-01-01
In the past few years, the antenna and sensor communities have witnessed a considerable integration of radio frequency identification (RFID) tag antennas and sensors because of the impetus provided by the internet of things (IoT) and cyber-physical systems (CPS). Such sensors can find potential applications in structural health monitoring (SHM) because of their passive, wireless, simple, compact, and multimodal nature, particularly in large-scale infrastructures during their lifecycle. The big data from these ubiquitous sensors are expected to generate a big impact for intelligent monitoring. A remarkable number of scientific papers demonstrate the possibility that objects can be remotely tracked and intelligently monitored for their physical/chemical/mechanical properties and environmental conditions. Most of the work focuses on antenna design, and significant information has been generated to demonstrate feasibility. Further information is needed to gain a deep understanding of passive RFID antenna sensor systems in order to make them reliable and practical. Nevertheless, this information is scattered across much of the literature. This paper comprehensively summarizes and clearly highlights the challenges and state-of-the-art methods of passive RFID antenna sensors and systems in terms of sensing and communication from a system point of view. Future trends are also discussed, and future research and development directions in the UK are suggested as well. PMID:28146067
Performance Analysis of Cluster Formation in Wireless Sensor Networks.
Montiel, Edgar Romo; Rivero-Angeles, Mario E; Rubino, Gerardo; Molina-Lozano, Heron; Menchaca-Mendez, Rolando; Menchaca-Mendez, Ricardo
2017-12-13
Cluster-based wireless sensor networks have been extensively used in the literature in order to achieve considerable energy consumption reductions. However, two aspects of such systems have been largely overlooked: the transmission probability used during the cluster formation phase and the way in which cluster heads are selected. Both of these issues have an important impact on the performance of the system. For the former, it is common to consider that sensor nodes in a cluster-based Wireless Sensor Network (WSN) use a fixed transmission probability to send control data in order to build the clusters. However, due to the highly variable conditions experienced by these networks, a fixed transmission probability may lead to extra energy consumption. In view of this, three different transmission probability strategies are studied: optimal, fixed and adaptive. In this context, we also investigate cluster head selection schemes; specifically, we consider two intelligent schemes based on the fuzzy C-means and k-medoids algorithms and a random selection with no intelligence. We show that the use of intelligent schemes greatly improves the performance of the system, but their use entails higher complexity and selection delay. The main performance metrics considered in this work are energy consumption, successful transmission probability and cluster formation latency. As an additional feature of this work, we study the effect of errors in the wireless channel and the impact on the performance of the system under the different transmission probability schemes.
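One of the two intelligent schemes studied, k-medoids cluster-head selection, can be sketched as below (Python, not the authors' simulator): the medoid of each cluster, an actual node, becomes the cluster head. Node positions, field size, and the number of clusters are placeholders.

```python
# Illustrative k-medoids cluster-head selection (not the authors' simulator):
# the medoid of each cluster -- an actual node -- becomes the cluster head.
import numpy as np

def kmedoids_cluster_heads(positions, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    medoids = rng.choice(len(positions), size=k, replace=False)
    for _ in range(iters):
        labels = dist[:, medoids].argmin(axis=1)           # assign to nearest head
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[costs.argmin()]        # node minimising intra-cluster cost
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

nodes = np.random.default_rng(1).uniform(0, 100, size=(50, 2))  # 50 nodes, 100x100 m field
heads, membership = kmedoids_cluster_heads(nodes, k=5)
print("cluster heads are nodes", heads.tolist())
```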
Intelligent Predictor of Energy Expenditure with the Use of Patch-Type Sensor Module
Li, Meina; Kwak, Keun-Chang; Kim, Youn-Tae
2012-01-01
This paper is concerned with an intelligent predictor of energy expenditure (EE) using a developed patch-type sensor module for wireless monitoring of heart rate (HR) and movement index (MI). For this purpose, an intelligent predictor is designed by an advanced linguistic model (LM) with interval prediction based on fuzzy granulation that can be realized by context-based fuzzy c-means (CFCM) clustering. The system components consist of a sensor board, the rubber case, and the communication module with built-in analysis algorithm. This sensor is patched onto the user's chest to obtain physiological data in indoor and outdoor environments. The prediction performance was demonstrated by root mean square error (RMSE). The prediction performance was obtained as the number of contexts and clusters increased from 2 to 6, respectively. Thirty participants were recruited from Chosun University to take part in this study. The data sets were recorded during normal walking, brisk walking, slow running, and jogging in an outdoor environment and treadmill running in an indoor environment, respectively. We randomly divided the data set into training (60%) and test data set (40%) in the normalized space during 10 iterations. The training data set is used for model construction, while the test set is used for model validation. The experimental results revealed that the prediction error on treadmill running simulation was improved by about 51% and 12% in comparison to conventional LM for training and checking data set, respectively. PMID:23202166
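The evaluation protocol (ten random 60/40 splits scored by RMSE) can be sketched as below (Python, illustrative only); synthetic data and an ordinary least-squares fit stand in for the recorded HR/MI signals and the CFCM-based linguistic model.

```python
# Sketch of the evaluation protocol described above (illustrative only):
# ten random 60/40 train/test splits with RMSE as the performance measure.
# Synthetic data and a least-squares fit stand in for the real HR/MI data
# and the CFCM-based linguistic model.
import numpy as np

rng = np.random.default_rng(0)
hr = rng.normal(110, 20, 300)                        # heart rate (placeholder data)
mi = rng.normal(1.5, 0.5, 300)                       # movement index (placeholder data)
ee = 0.05 * hr + 2.0 * mi + rng.normal(0, 0.3, 300)  # synthetic energy expenditure

X, y = np.column_stack([hr, mi]), ee
rmses = []
for _ in range(10):                                  # 10 random splits, as in the paper
    idx = rng.permutation(len(y))
    n_train = int(0.6 * len(y))
    train, test = idx[:n_train], idx[n_train:]
    coef, *_ = np.linalg.lstsq(np.column_stack([X[train], np.ones(n_train)]),
                               y[train], rcond=None)
    pred = np.column_stack([X[test], np.ones(len(test))]) @ coef
    rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))

print("mean test RMSE over 10 splits: %.3f" % np.mean(rmses))
```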
Comments on airborne ISR radar utilization
NASA Astrophysics Data System (ADS)
Doerry, A. W.
2016-05-01
A sensor/payload operator for modern multi-sensor multi-mode Intelligence, Surveillance, and Reconnaissance (ISR) platforms is often confronted with a plethora of options in sensors and sensor modes. This often leads an over-worked operator to down-select to favorite sensors and modes; for example a justifiably favorite Full Motion Video (FMV) sensor at the expense of radar modes, even if radar modes can offer unique and advantageous information. At best, sensors might be used in a serial monogamous fashion with some cross-cueing. The challenge is then to increase the utilization of the radar modes in a manner attractive to the sensor/payload operator. We propose that this is best accomplished by combining sensor modes and displays into `super-modes'.
Smart unattended sensor networks with scene understanding capabilities
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2006-05-01
Unattended sensor systems are new technologies that are supposed to provide enhanced situation awareness to military and law enforcement agencies. A network of such sensors cannot be very effective in field conditions if it can only transmit visual information to human operators or alert them to motion. In real field conditions, events may happen in many nodes of a network simultaneously. But the number of control personnel is always limited, and the attention of human operators may simply be drawn to particular network nodes while a more dangerous threat goes unnoticed at the same time in other nodes. Sensor networks would be more effective if equipped with a system that is similar to human vision in its ability to understand visual information. Human vision uses for that a rough but wide peripheral system that tracks motions and regions of interest, a narrow but precise foveal vision that analyzes and recognizes objects in the center of the selected region of interest, and visual intelligence that provides scene and object contexts and resolves ambiguity and uncertainty in the visual information. Biologically inspired Network-Symbolic models convert image information into an 'understandable' Network-Symbolic format, which is similar to relational knowledge models. The equivalent of the interaction between peripheral and foveal systems in the network-symbolic system is achieved via interaction between Visual and Object Buffers and the top-level knowledge system.
Towards an Intelligent Planning Knowledge Base Development Environment
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing steps) from image processing requests made to the JPL Multimission Image Processing Laboratory.
A Study of Lane Detection Algorithm for Personal Vehicle
NASA Astrophysics Data System (ADS)
Kobayashi, Kazuyuki; Watanabe, Kajiro; Ohkubo, Tomoyuki; Kurihara, Yosuke
By the term “personal vehicle”, we mean a simple and lightweight vehicle expected to emerge as a personal ground transportation device. The motorcycle, electric wheelchair, and motor-powered bicycle are examples of personal vehicles and have been developed as useful means of transportation for personal use. Recently, a new type of intelligent personal vehicle called the Segway has been developed, which is controlled and stabilized by on-board intelligent multiple sensors. The demand for such personal vehicles is increasing in order to 1) enhance human mobility, 2) support mobility for elderly persons, and 3) reduce environmental burdens. With the rapidly growing personal vehicle market, the number of accidents caused by human error is also increasing, and these accidents are closely related to drive ability. To enhance or support drive ability as well as to prevent accidents, intelligent assistance is necessary. One of the most important elemental functions for a personal vehicle is robust lane detection. In this paper, we develop a robust lane detection method for personal vehicles in outdoor environments. The proposed lane detection method employs a 360-degree omnidirectional camera and a robust image processing algorithm. In order to detect lanes, a combination of template matching and the Hough transform is employed. The validity of the proposed lane detection algorithm is confirmed with the developed vehicle under various outdoor sunlight conditions.
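The Hough-transform stage of such a pipeline can be sketched as below (Python with OpenCV, illustrative only); the paper couples it with template matching on 360-degree omnidirectional images, which is not reproduced here, and the thresholds and test image are assumptions.

```python
# Sketch of the Hough-transform stage of lane detection (illustrative only;
# the paper combines it with template matching on 360-degree omnidirectional
# images, which is not reproduced here). Assumes OpenCV is available.
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only the lower half of the image, where lane markings are expected.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform returns candidate line segments.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)
    return [] if segments is None else [s[0] for s in segments]

frame = cv2.imread("road.jpg")                 # hypothetical test image
for x1, y1, x2, y2 in detect_lane_segments(frame):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("lanes.jpg", frame)
```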
Biological Weapons -- Still a Relevant Threat
2012-03-22
Excerpts from the report: ...destruction in general, and biological weapons in particular. The IHS Jane's Defence and Security Intelligence & Analysis website notes that a number of... responder capabilities, and intelligence agency inputs. There needs, as well, to be continued research and development of sensor technologies, which... (The remaining text is citation residue.)
NASA Astrophysics Data System (ADS)
Romanosky, Robert R.
2017-05-01
The National Energy Technology Laboratory (NETL), under the Department of Energy (DOE) Fossil Energy (FE) Program, is leading the effort not only to develop near-zero-emission power generation systems, but also to increase the efficiency and availability of current power systems. The overarching goal of the program is to provide clean, affordable power using domestic resources. Highly efficient, low-emission power systems can have extreme conditions of high temperatures up to 1600 °C, high pressures up to 600 psi, high particulate loadings, and corrosive atmospheres that require monitoring. Sensing in these harsh environments can provide key information that directly impacts process control and system reliability. The lack of suitable measurement technology serves as a driver for innovations in harsh environment sensor development. Advancements in sensing using optical fibers are key efforts within NETL's sensor development program, as these approaches offer the potential to survive and provide critical information about these processes. An overview of the sensor development supported by the National Energy Technology Laboratory (NETL) will be given, including research in the areas of sensor materials, designs, and measurement types. New approaches to intelligent sensing, sensor placement, and process control using networked sensors will be discussed, as will novel approaches to fiber device design concurrent with materials development research and development in modified and coated silica and sapphire fiber based sensors. The use of these sensors for both single-point and distributed measurements of temperature, pressure, strain, and a select suite of gases will be addressed. Additional areas of research include novel control architectures and communication frameworks, device integration for distributed sensing, and imaging and other novel approaches to monitoring and controlling advanced processes. The close coupling of the sensor program with process modeling and control will be discussed, with the overarching goal of clean power production.
Design of multi-function sensor detection system in coal mine based on ARM
NASA Astrophysics Data System (ADS)
Ge, Yan-Xiang; Zhang, Quan-Zhu; Deng, Yong-Hong
2017-06-01
In a traditional coal mine sensor, at a specific measurement point the number and type of channels may be greater or smaller than the number of monitoring points, resulting either in a waste of resources or in failure to meet the application requirements. In order to enable the sensor to adapt to the needs of different occasions and to reduce cost, a multi-functional intelligent sensor is designed and realized using multiple sensors and the ARM11 S3C6410 processor, integrating dust, gas, temperature, and humidity sensing functions together, and adding storage, display, voice, picture, data query, alarm, and other new functions.
An open and reconfigurable wireless sensor network for pervasive health monitoring.
Triantafyllidis, A; Koutkias, V; Chouvarda, I; Maglaveras, N
2008-01-01
Sensor networks constitute the backbone for the construction of personalized monitoring systems. Up to now, several sensor networks have been proposed for diverse pervasive healthcare applications, which are however characterized by a significant lack of open architectures, resulting in closed, non-interoperable, and difficult-to-extend solutions. In this context, we propose an open and reconfigurable wireless sensor network (WSN) for pervasive health monitoring, with particular emphasis on its easy extension with additional sensors and functionality by incorporating embedded intelligence mechanisms. We consider a generic WSN architecture comprised of diverse sensor nodes (with communication and processing capabilities) and a mobile base unit (MBU) operating as the gateway between the sensors and the medical personnel, forming in this way a body area network (BAN). The primary focus of this work is on the intra-BAN data communication issues, adopting SensorML as the data representation means, including the encoding of the monitoring patterns and the functionality of the sensor network. In our prototype implementation two sensor nodes are emulated, one for heart rate monitoring and the other for blood glucose observations, while the MBU corresponds to a personal digital assistant (PDA) device. Java 2 Micro Edition (J2ME) is used to implement both the sensor nodes and the MBU components. Intra-BAN wireless communication relies on the Bluetooth protocol. Via an adaptive user interface in the MBU, health professionals may specify the monitoring parameters of the WSN and define the monitoring patterns of interest in terms of rules. This work constitutes an essential step towards the construction of open, extensible, interoperable, and intelligent WSNs for pervasive health monitoring.
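The rule-defined monitoring patterns can be illustrated with a minimal sketch (Python, not the paper's SensorML encoding); the rule fields, thresholds, and actions below are invented for illustration.

```python
# Illustrative sketch of rule-based monitoring on the MBU side (not the
# paper's SensorML encoding): each rule names a sensor, a condition and an
# action, and incoming observations are checked against the active rules.
rules = [
    {"sensor": "heart_rate", "op": "gt", "threshold": 120, "action": "alert physician"},
    {"sensor": "blood_glucose", "op": "lt", "threshold": 60, "action": "alert patient"},
]

def evaluate(observation, rules):
    """observation: dict like {'sensor': 'heart_rate', 'value': 135}"""
    triggered = []
    for rule in rules:
        if rule["sensor"] != observation["sensor"]:
            continue
        value, threshold = observation["value"], rule["threshold"]
        if (rule["op"] == "gt" and value > threshold) or \
           (rule["op"] == "lt" and value < threshold):
            triggered.append(rule["action"])
    return triggered

print(evaluate({"sensor": "heart_rate", "value": 135}, rules))   # ['alert physician']
```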
Development of intelligent robots - Achievements and issues
NASA Astrophysics Data System (ADS)
Nitzan, D.
1985-03-01
A flexible, intelligent robot is regarded as a general purpose machine system that may include effectors, sensors, computers, and auxiliary equipment and, like a human, can perform a variety of tasks under unpredictable conditions. Development of intelligent robots is essential for increasing the growth rate of today's robot population in industry and elsewhere. Robotics research and development topics include manipulation, end effectors, mobility, sensing (noncontact and contact), adaptive control, robot programming languages, and manufacturing process planning. Past achievements and current issues related to each of these topics are described briefly.
Pressure sensor to determine spatial pressure distributions on boundary layer flows
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Piroozan, Parham; Corke, Thomas C.
1997-03-01
The determination of pressures along the surface of a wind tunnel proves difficult with methods that must introduce devices into the flow stream. This paper presents a sensor that is part of the wall. A special interferometric reflection moire technique is developed and used to produce signals that measure pressure in both static and dynamic settings. The sensor developed is an intelligent sensor that combines optics and electronics to analyze the pressure patterns. The sensor provides the input to a control system that is capable of modifying the shape of the wall and preserving the stability of the flow.
Student's Uncertainty Modeling through a Multimodal Sensor-Based Approach
ERIC Educational Resources Information Center
Jraidi, Imene; Frasson, Claude
2013-01-01
Detecting the student internal state during learning is a key construct in educational environment and particularly in Intelligent Tutoring Systems (ITS). Students' uncertainty is of primary interest as it is deeply rooted in the process of knowledge construction. In this paper we propose a new sensor-based multimodal approach to model…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage with cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
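Component (i) can be illustrated with a much-simplified sketch (Python): a genetic algorithm placing a fixed number of sensors on a grid to maximise coverage. The real system uses a two-dimensional GA with cost constraints; the grid size, sensing radius, and GA parameters below are placeholders.

```python
# Simplified sketch of component (i): a genetic algorithm placing a fixed
# number of sensors on a grid to maximise coverage. The real system uses a
# two-dimensional GA with cost constraints; this toy version is illustrative.
import numpy as np

GRID, N_SENSORS, RADIUS = 20, 8, 4
POP, GENS, MUT = 40, 60, 0.1
rng = np.random.default_rng(0)

def coverage(chromosome):
    """Fraction of grid cells within RADIUS of at least one sensor."""
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    covered = np.zeros((GRID, GRID), dtype=bool)
    for sy, sx in chromosome.reshape(-1, 2):
        covered |= (ys - sy) ** 2 + (xs - sx) ** 2 <= RADIUS ** 2
    return covered.mean()

def random_chromosome():
    return rng.integers(0, GRID, size=2 * N_SENSORS)   # (y, x) pairs, flattened

population = [random_chromosome() for _ in range(POP)]
for _ in range(GENS):
    scores = np.array([coverage(c) for c in population])
    order = scores.argsort()[::-1]
    parents = [population[i] for i in order[:POP // 2]]       # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, 2 * N_SENSORS)
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])  # one-point crossover
        mutate = rng.random(child.shape) < MUT
        child[mutate] = rng.integers(0, GRID, mutate.sum())           # point mutation
        children.append(child)
    population = parents + children

best = max(population, key=coverage)
print("best coverage: %.2f" % coverage(best))
```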
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Nguyen, Thu L. N.; Shin, Yoan
2016-01-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if used as an initial guess for an iterative local search in another higher-precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378
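The underlying localisation idea can be sketched as below (Python, illustrative only): recover coordinates from an incomplete squared-distance matrix by minimising the squared error over observed entries, with a few anchor nodes fixed. Plain gradient descent stands in for the paper's Newton-type solver, and all parameters are placeholders.

```python
# Illustrative sketch of distance-based localisation from an incomplete
# squared-distance matrix. Plain gradient descent stands in for the paper's
# Newton-type solver; node counts, observation density and step size are
# placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20, 2
true_pos = rng.uniform(0, 1, (n, dim))
anchors = [0, 1, 2, 3]                                   # nodes with known positions

D2 = ((true_pos[:, None] - true_pos[None, :]) ** 2).sum(-1)   # true squared distances
observed = rng.random((n, n)) < 0.7                      # ~70% of pairs measured
observed = np.triu(observed, 1); observed = observed | observed.T

est = rng.uniform(0, 1, (n, dim))
est[anchors] = true_pos[anchors]                         # anchors stay fixed

for _ in range(3000):                                    # plain gradient descent
    diff = est[:, None] - est[None, :]                   # (n, n, dim)
    err = np.where(observed, (diff ** 2).sum(-1) - D2, 0.0)
    grad = 4 * (err[:, :, None] * diff).sum(axis=1)      # gradient of the squared error
    grad[anchors] = 0.0
    est -= 1e-3 * grad

error = np.sqrt(((est - true_pos) ** 2).sum(-1)).mean()
print("mean position error: %.3f" % error)
```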
Energy-efficient hierarchical processing in the network of wireless intelligent sensors (WISE)
NASA Astrophysics Data System (ADS)
Raskovic, Dejan
Sensor network nodes have benefited from technological advances in the field of wireless communication, processing, and power sources. However, the processing power of microcontrollers is often not sufficient to perform sophisticated processing, while the power requirements of digital signal processing boards or handheld computers are usually too demanding for prolonged system use. We are matching the intrinsic hierarchical nature of many digital signal-processing applications with the natural hierarchy in distributed wireless networks, and building the hierarchical system of wireless intelligent sensors. Our goal is to build a system that will exploit the hierarchical organization to optimize the power consumption and extend battery life for the given time and memory constraints, while providing real-time processing of sensor signals. In addition, we are designing our system to be able to adapt to the current state of the environment, by dynamically changing the algorithm through procedure replacement. This dissertation presents the analysis of hierarchical environment and methods for energy profiling used to evaluate different system design strategies, and to optimize time-effective and energy-efficient processing.
Molecular robots with sensors and intelligence.
Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi
2014-06-17
CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5 year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize on sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. They also introduce novel computational models behind various kinds of molecular computers necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins, including kinesin and microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. They are also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material to disperse various molecular devices in it. They also try to control the gel actuators by DNA signals coming from molecular computers.
Soft Thermal Sensor with Mechanical Adaptability.
Yang, Hui; Qi, Dianpeng; Liu, Zhiyuan; Chandran, Bevita K; Wang, Ting; Yu, Jiancan; Chen, Xiaodong
2016-11-01
A soft thermal sensor with mechanical adaptability is fabricated by the combination of single-wall carbon nanotubes with carboxyl groups and self-healing polymers. This study demonstrates that this soft sensor has excellent thermal response and mechanical adaptability. It shows tremendous promise for improving the service life of soft artificial-intelligence robots and protecting thermally sensitive electronics from the risk of damage by high temperature. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[The application and development of artificial intelligence in medical diagnosis systems].
Chen, Zhencheng; Jiang, Yong; Xu, Mingyu; Wang, Hongyan; Jiang, Dazong
2002-09-01
This paper reviews the development of artificial intelligence in medical practice and medical diagnostic expert systems, and summarizes the application of artificial neural networks. It explains that a source of difficulty in medical diagnostic systems is the co-existence of multiple, potentially inter-related diseases. A further difficulty for image-based expert systems is inherent in high-level vision, which increases the complexity of expert systems for medical imaging. Finally, the prospects for the development of artificial intelligence in medical image expert systems are discussed.
Design and Simulation Test of an Open D-Dot Voltage Sensor
Bai, Yunjie; Wang, Jingang; Wei, Gang; Yang, Yongming
2015-01-01
Nowadays, sensor development focuses on miniaturization and non-contact measurement. According to the D-dot principle, a D-dot voltage sensor with a new structure, called an asymmetric open D-dot voltage sensor, was designed based on the symmetrical differential D-dot sensor; it is easier to install. The electric field distribution of the sensor was analyzed with Ansoft Maxwell and an open D-dot voltage sensor was designed. This open D-dot voltage sensor is characterized by adequate insulating strength and small electric field distortion. Steady-state and transient performance tests under 10 kV showed satisfactory performance of the designed open D-dot voltage sensor. It conforms to the requirements for a smart grid measuring sensor in terms of intelligence, miniaturization, and ease of installation. PMID:26393590
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhou, Zude; Liu, Quan; Xu, Wenjun
2017-02-01
Due to the advantages of being able to function under harsh environmental conditions and serving as a distributed condition information source in a networked monitoring system, the fibre Bragg grating (FBG) sensor network has attracted considerable attention for equipment online condition monitoring. To provide an overall conditional view of the mechanical equipment operation, a networked service-oriented condition monitoring framework based on FBG sensing is proposed, together with an intelligent matching method for supporting monitoring service management. In the novel framework, three classes of progressive service matching approaches, including service-chain knowledge database service matching, multi-objective constrained service matching and workflow-driven human-interactive service matching, are developed and integrated with an enhanced particle swarm optimisation (PSO) algorithm as well as a workflow-driven mechanism. Moreover, the manufacturing domain ontology, FBG sensor network structure and monitoring object are considered to facilitate the automatic matching of condition monitoring services to overcome the limitations of traditional service processing methods. The experimental results demonstrate that FBG monitoring services can be selected intelligently, and the developed condition monitoring system can be re-built rapidly as new equipment joins the framework. The effectiveness of the service matching method is also verified by implementing a prototype system together with its performance analysis.
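The paper's enhanced PSO is not described here in enough detail to reproduce. The following is a plain PSO loop over a generic service-matching cost function, with swarm size and coefficients chosen arbitrarily, to illustrate the kind of optimiser the matching approaches build on.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimisation of a service-matching cost function.
    Only a generic PSO loop; the paper's enhancements (ontology-aware matching,
    workflow-driven interaction) are not reproduced."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 1)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_cost)]
    return g, pbest_cost.min()

# toy cost: weighted sum of two conflicting terms, standing in for a multi-objective match score
print(pso_minimize(lambda p: p.sum() + 1.0 / (0.1 + p.prod()), dim=3))
```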
ELIPS: Toward a Sensor Fusion Processor on a Chip
NASA Technical Reports Server (NTRS)
Daud, Taher; Stoica, Adrian; Tyson, Thomas; Li, Wei-te; Fabunmi, James
1998-01-01
The paper presents the concept and initial tests from the hardware implementation of a low-power, high-speed reconfigurable sensor fusion processor. The Extended Logic Intelligent Processing System (ELIPS) processor is developed to seamlessly combine rule-based systems, fuzzy logic, and neural networks to achieve parallel fusion of sensor data in compact, low-power VLSI. The first demonstration of the ELIPS concept targets interceptor functionality; other applications, mainly in robotics and autonomous systems, are considered for the future. The main assumption behind ELIPS is that fuzzy, rule-based and neural forms of computation can serve as the main primitives of an "intelligent" processor. Thus, in the same way classic processors are designed to optimize the hardware implementation of a set of fundamental operations, ELIPS is developed as an efficient implementation of computational intelligence primitives, and relies on a set of fuzzy set, fuzzy inference and neural modules, built in programmable analog hardware. The hardware programmability allows the processor to reconfigure into different machines, taking the most efficient hardware implementation during each phase of information processing. Following software demonstrations on several interceptor data, three important ELIPS building blocks (a fuzzy set preprocessor, a rule-based fuzzy system and a neural network) have been fabricated in analog VLSI hardware and demonstrated microsecond processing times.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
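A minimal sketch of the idea behind a Lagrangian-relaxed routing metric: the delay constraint is folded into the per-link cost through a multiplier, and an ordinary shortest-path search is run on the relaxed metric. The toy graph, link attributes, and multiplier values are hypothetical illustrations, not the NODQC algorithm itself.

```python
import heapq

def shortest_path(graph, src, dst, weight):
    """Dijkstra over a per-edge weight function on an adjacency-dict graph."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, attrs in graph[u].items():
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def lr_route(graph, src, dst, lam):
    """Route under a relaxed metric: per-link cost plus lam * delay.
    Sweeping lam trades the system-side cost against the end-to-end delay bound."""
    return shortest_path(graph, src, dst, lambda a: a["cost"] + lam * a["delay"])

g = {"s": {"a": {"cost": 1, "delay": 5}, "b": {"cost": 4, "delay": 1}},
     "a": {"t": {"cost": 1, "delay": 5}},
     "b": {"t": {"cost": 4, "delay": 1}},
     "t": {}}
print(lr_route(g, "s", "t", lam=0.0), lr_route(g, "s", "t", lam=2.0))
```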
Sandino, Juan; Pegg, Geoff; Gonzalez, Felipe; Smith, Grant
2018-03-22
The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimentation case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec® camera, orthorectified in Headwall SpectralView®, and processed in the Python programming language using eXtreme Gradient Boosting (XGBoost), Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile enough to be applied to additional datasets taken with different image sensors, and supports the processing of large datasets with freeware tools.
2018-01-01
The environmental and economic impacts of exotic fungal species on natural and plantation forests have been historically catastrophic. Recorded surveillance and control actions are challenging because they are costly, time-consuming, and hazardous in remote areas. Prolonged periods of testing and observation of site-based tests have limitations in verifying the rapid proliferation of exotic pathogens and deterioration rates in hosts. Recent remote sensing approaches have offered fast, broad-scale, and affordable surveys as well as additional indicators that can complement on-ground tests. This paper proposes a framework that consolidates site-based insights and remote sensing capabilities to detect and segment deteriorations by fungal pathogens in natural and plantation forests. This approach is illustrated with an experimentation case of myrtle rust (Austropuccinia psidii) on paperbark tea trees (Melaleuca quinquenervia) in New South Wales (NSW), Australia. The method integrates unmanned aerial vehicles (UAVs), hyperspectral image sensors, and data processing algorithms using machine learning. Imagery is acquired using a Headwall Nano-Hyperspec® camera, orthorectified in Headwall SpectralView®, and processed in the Python programming language using eXtreme Gradient Boosting (XGBoost), Geospatial Data Abstraction Library (GDAL), and Scikit-learn third-party libraries. In total, 11,385 samples were extracted and labelled into five classes: two classes for deterioration status and three classes for background objects. Insights reveal individual detection rates of 95% for healthy trees, 97% for deteriorated trees, and a global multiclass detection rate of 97%. The methodology is versatile enough to be applied to additional datasets taken with different image sensors, and supports the processing of large datasets with freeware tools. PMID:29565822
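A hedged sketch of the classification stage, using the scikit-learn-style XGBoost interface named in the abstract; the file names, array shapes, and hyperparameters below are placeholders, not the authors' actual pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical arrays: rows are labelled hyperspectral pixel samples (band
# reflectances), labels are the five classes described in the abstract
# (two deterioration states plus three background classes).
X = np.load("pixel_spectra.npy")      # shape: (n_samples, n_bands) -- placeholder file
y = np.load("pixel_labels.npy")       # integer class ids 0..4 -- placeholder file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(objective="multi:softprob", n_estimators=200, max_depth=6)
clf.fit(X_tr, y_tr)
print("multiclass accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```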
A novel lightweight Fizeau infrared interferometric imaging system
NASA Astrophysics Data System (ADS)
Hope, Douglas A.; Hart, Michael; Warner, Steve; Durney, Oli; Romeo, Robert
2016-05-01
Aperture synthesis imaging techniques using an interferometer provide a means to achieve imagery with spatial resolution equivalent to a conventional filled aperture telescope at a significantly reduced size, weight and cost, an important implication for air- and space-borne persistent observing platforms. These concepts have been realized in SIRII (Space-based IR-imaging interferometer), a new light-weight, compact SWIR and MWIR imaging interferometer designed for space-based surveillance. The sensor design is configured as a six-element Fizeau interferometer; it is scalable, light-weight, and uses structural components and main optics made of carbon fiber replicated polymer (CFRP) that are easy to fabricate and inexpensive. A three-element prototype of the SIRII imager has been constructed. The optics, detectors, and interferometric signal processing principles draw on experience developed in ground-based astronomical applications designed to yield the highest sensitivity and resolution with cost-effective optical solutions. SIRII is being designed for technical intelligence from geo-stationary orbit. It has an instantaneous 6 x 6 mrad FOV and the ability to rapidly scan a 6 x 6 deg FOV, with a minimal SNR. The interferometric design can be scaled to a larger equivalent filled aperture, while minimizing weight and costs when compared to a filled aperture telescope with equivalent resolution. This scalability allows SIRII to address a range of IR-imaging scenarios.
A system for intelligent teleoperation research
NASA Technical Reports Server (NTRS)
Orlando, N. E.
1983-01-01
The Automation Technology Branch of NASA Langley Research Center is developing a research capability in the field of artificial intelligence, particularly as applicable in teleoperator/robotics development for remote space operations. As a testbed for experimentation in these areas, a system concept has been developed and is being implemented. This system termed DAISIE (Distributed Artificially Intelligent System for Interacting with the Environment), interfaces the key processes of perception, reasoning, and manipulation by linking hardware sensors and manipulators to a modular artificial intelligence (AI) software system in a hierarchical control structure. Verification experiments have been performed: one experiment used a blocksworld database and planner embedded in the DAISIE system to intelligently manipulate a simple physical environment; the other experiment implemented a joint-space collision avoidance algorithm. Continued system development is planned.
Bharucha, Ashok J.; Anand, Vivek; Forlizzi, Jodi; Dew, Mary Amanda; Reynolds, Charles F.; Stevens, Scott; Wactlar, Howard
2009-01-01
The number of older Americans afflicted by Alzheimer disease and related dementias will triple to 13 million persons by 2050, thus greatly increasing healthcare needs. An approach to this emerging crisis is the development and deployment of intelligent assistive technologies that compensate for the specific physical and cognitive deficits of older adults with dementia, and thereby also reduce caregiver burden. The authors conducted an extensive search of the computer science, engineering, and medical databases to review intelligent cognitive devices, physiologic and environmental sensors, and advanced integrated sensor networks that may find future applications in dementia care. Review of the extant literature reveals an overwhelming focus on the physical disability of younger persons with typically nonprogressive anoxic and traumatic brain injuries, with few clinical studies specifically involving persons with dementia. A discussion of the specific capabilities, strengths, and limitations of each technology is followed by an overview of research methodological challenges that must be addressed to achieve measurable progress to meet the healthcare needs of an aging America. PMID:18849532
Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur
2012-01-01
This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.
Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur
2012-01-01
This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650
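A minimal sketch of the rule-based reasoning idea described above: knowledge rules map conditions over current sensor facts to care actions, and the rule set can be extended without touching the inference loop. The rule names and thresholds are invented for illustration, not taken from the paper.

```python
# Each rule maps a condition over current sensor/physiological facts to an action.
# New rules can be appended at run time, mirroring the paper's idea of updating
# knowledge rules without changing the inference mechanism.
rules = [
    {"name": "tachycardia_alert",                       # hypothetical rule
     "when": lambda f: f.get("heart_rate", 0) > 120,
     "then": "notify caregiver: elevated heart rate"},
    {"name": "fall_detected",                           # hypothetical rule
     "when": lambda f: f.get("camera_fall_event", False),
     "then": "dispatch assistance and open video channel"},
]

def infer(facts, rule_set):
    """Fire every rule whose condition holds for the current facts."""
    return [r["then"] for r in rule_set if r["when"](facts)]

print(infer({"heart_rate": 135, "camera_fall_event": False}, rules))
```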
Chen, Yuanfang; Lee, Gyu Myoung; Shu, Lei; Crespi, Noel
2016-02-06
The development of an efficient and cost-effective solution to solve a complex problem (e.g., dynamic detection of toxic gases) is an important research issue in the industrial applications of the Internet of Things (IoT). An industrial intelligent ecosystem enables the collection of massive data from the various devices (e.g., sensor-embedded wireless devices) dynamically collaborating with humans. Effectively collaborative analytics based on the collected massive data from humans and devices is quite essential to improve the efficiency of industrial production/service. In this study, we propose a collaborative sensing intelligence (CSI) framework, combining collaborative intelligence and industrial sensing intelligence. The proposed CSI facilitates the cooperativity of analytics with integrating massive spatio-temporal data from different sources and time points. To deploy the CSI for achieving intelligent and efficient industrial production/service, the key challenges and open issues are discussed, as well.
Chen, Yuanfang; Lee, Gyu Myoung; Shu, Lei; Crespi, Noel
2016-01-01
The development of an efficient and cost-effective solution to solve a complex problem (e.g., dynamic detection of toxic gases) is an important research issue in the industrial applications of the Internet of Things (IoT). An industrial intelligent ecosystem enables the collection of massive data from the various devices (e.g., sensor-embedded wireless devices) dynamically collaborating with humans. Effectively collaborative analytics based on the collected massive data from humans and devices is quite essential to improve the efficiency of industrial production/service. In this study, we propose a collaborative sensing intelligence (CSI) framework, combining collaborative intelligence and industrial sensing intelligence. The proposed CSI facilitates the cooperativity of analytics with integrating massive spatio-temporal data from different sources and time points. To deploy the CSI for achieving intelligent and efficient industrial production/service, the key challenges and open issues are discussed, as well. PMID:26861345
Intelligent Traffic Light Based on PLC Control
NASA Astrophysics Data System (ADS)
Mei, Lin; Zhang, Lijian; Wang, Lingling
2017-11-01
The traditional traffic light system, with its fixed control mode and single control function, is at odds with current traffic conditions and can no longer meet the functional requirements of a flexible traffic control system. This paper researches and develops an intelligent, PLC-controlled traffic light system. It uses a PLC as the control core and a sensor module to receive real-time vehicle information, which is used to select the traffic-light control mode. The control mode is flexible and changeable, and a countdown reminder is also provided to improve the effectiveness of the traffic lights, realizing the goal of intelligent traffic diversion.
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.
2016-01-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial‐based THz image sensors, filter‐free nanowire image sensors and nanostructured‐based multispectral image sensors. This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem capable of compensating for the wide variety of real-world degradations must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front-end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily, and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
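As a rough illustration of representing gray-level data polynomially, the sketch below fits a low-order two-dimensional polynomial surface to a pixel patch by least squares. The paper's full algebraic operator framework is not reproduced; the patch and polynomial order are arbitrary.

```python
import numpy as np

def fit_patch_poly(patch, order=2):
    """Least-squares fit of a 2-D polynomial surface to a gray-level patch,
    returning its coefficients. Illustrates describing local gray-level
    structure by polynomial coefficients; not the paper's operator algebra."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs, ys, z = xs.ravel(), ys.ravel(), patch.ravel().astype(float)
    # design matrix of monomials x^i * y^j with total degree <= order
    cols = [xs**i * ys**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

patch = np.arange(25, dtype=float).reshape(5, 5)   # toy gray-level patch
print(fit_patch_poly(patch))
```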
Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch
Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed
2012-01-01
Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043
Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.
Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim
2012-10-22
Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.
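A hedged sketch of the two training set-ups compared in the abstract, full features versus PCA-reduced features feeding a small neural network, written with scikit-learn; the file names, component count, and network size are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are color-feature vectors extracted from FFB images,
# labels are ripeness categories; shapes and class count are placeholders.
X = np.load("ffb_color_features.npy")   # placeholder file
y = np.load("ffb_ripeness_labels.npy")  # placeholder file

full = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
reduced = make_pipeline(PCA(n_components=5),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))

print("full features :", cross_val_score(full, X, y, cv=5).mean())
print("PCA-reduced   :", cross_val_score(reduced, X, y, cv=5).mean())
```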
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
Yan, Dan; Yang, Yong; Hong, Yingping; Liang, Ting; Yao, Zong; Chen, Xiaoyong; Xiong, Jijun
2018-02-10
Low-cost wireless temperature measurement has significant value in the food industry, logistics, agriculture, portable medical equipment, intelligent wireless health monitoring, and many areas in everyday life. A wireless passive temperature sensor based on PCB (Printed Circuit Board) materials is reported in this paper. The advantages of the sensor include a simple mechanical structure, convenient processing, low cost, and ease of integration. The temperature-sensitive structure of the sensor is a dielectric-loaded resonant cavity, consisting of the PCB substrate. The sensitive structure also integrates a patch antenna for the transmission of temperature signals. The temperature sensing mechanism of the sensor is that the dielectric constant of the PCB substrate changes with temperature, which shifts the resonant frequency of the resonator. The temperature can then be measured by detecting the changes in the sensor's working frequency. The PCB-based wireless passive temperature sensor prototype is prepared through theoretical design, parameter analysis, software simulation, and experimental testing. Both the high- and low-temperature sensing performance of the sensor is tested. The resonant frequency decreases from 2.434 GHz to 2.379 GHz as the temperature increases from -40 °C to 125 °C. The fitting curve shows that the experimental data have good linearity. Three repeated tests proved that the sensor possesses good repeatability. The average sensitivity is 347.45 kHz/°C from measurements repeated three times. This study demonstrates the feasibility of the PCB-based wireless passive sensor, which provides a low-cost temperature sensing solution for everyday life, modern agriculture, thriving intelligent health devices, and so on, and also enriches PCB product lines and applications.
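A quick end-point check of the reported figures: the slope implied by the quoted frequency span and temperature range is

```latex
S \approx \frac{\Delta f}{\Delta T}
  = \frac{2.434\,\mathrm{GHz} - 2.379\,\mathrm{GHz}}{125\,^{\circ}\mathrm{C} - (-40\,^{\circ}\mathrm{C})}
  = \frac{55\,\mathrm{MHz}}{165\,^{\circ}\mathrm{C}}
  \approx 333\,\mathrm{kHz}/^{\circ}\mathrm{C},
```

which is consistent in magnitude with the 347.45 kHz/°C average obtained from the three repeated runs; a small difference is expected, since the reported value averages fitted slopes rather than the raw end points.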
ERIC Educational Resources Information Center
Raty, Hannu; Komulainen, Katri; Skorokhodova, Nina; Kolesnikov, Vadim; Hamalainen, Anna
2011-01-01
The study set out to examine Finnish and Russian children's images of intelligence as contextualized in the systems of the school and gender. Finnish and Russian pupils, aged 11-12 years, were asked to draw pictures of an intelligent and an ordinary pupil and a good and an ordinary pupil. A distinctive feature shared by the children in both…
BDM-KAT; Report of Research Results
1990-03-31
[Figure 4. Computer Network for the Intelligent Control of the HIP Process] ...prototyped and used in preliminary knowledge acquisition for an intelligent process controller for Hot Isostatic Pressing (HIP). Both the volume of information collected and structured and the value of that knowledge for the developing controller attest to the value of the concepts implemented in BDM
2011-03-01
past few years, including performance evaluation of emergency response robots, sensor systems on unmanned ground vehicles, speech-to-speech translation...emergency response robots; intelligent systems; mixed palletizing, testing, simulation; robotic vehicle perception systems; search and rescue robots...ranging from autonomous vehicles to urban search and rescue robots to speech translation and manufacturing systems. The evaluations have occurred in
Learning for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Hall, Ernest L.; Liao, Xiaoqun; Alhaj Ali, Souma M.
2003-10-01
Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. However, such machines have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed. However, in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots. During the past 20 years, the use of intelligent industrial robots that are equipped not only with motion control systems but also with sensors such as cameras, laser scanners, or tactile sensors that permit adaptation to a changing environment has increased dramatically. However, relatively little has been done concerning learning. Adaptive and robust control permits one to achieve point-to-point and controlled-path operation in a changing environment. This problem can be solved with a learning control. In the unstructured environment, the terrain and consequently the load on the robot's motors are constantly changing. Learning the parameters of a proportional, integral and derivative (PID) controller and an artificial neural network provides adaptive and robust control. Learning may also be used for path following. Simulations that include learning may be conducted to see if a robot can learn its way through a cluttered array of obstacles. If a situation is performed repetitively, then learning can also be used in the actual application. To reach an even higher degree of autonomous operation, a new level of learning is required. Recently, learning theories such as the adaptive critic have been proposed. In this type of learning, a critic provides a grade to the controller of an action module such as a robot. A creative control process is used that goes "beyond the adaptive critic." A mathematical model of the creative control process is presented that illustrates its use for mobile robots. Examples from a variety of intelligent mobile robot applications are also presented. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots that could lead to many applications.
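As a small illustration of learning controller parameters, the sketch below tunes PID gains for a toy first-order plant by random search; it stands in for, and is much cruder than, the neural-network and adaptive-critic learning discussed in the paper. The plant model, gains, and search settings are all illustrative assumptions.

```python
import random

def simulate(kp, ki, kd, setpoint=1.0, dt=0.05, steps=200):
    """Track a setpoint with a PID controller on a toy first-order plant;
    return the accumulated absolute error (lower is better)."""
    y, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)            # toy plant: dy/dt = -y + u
        prev_err = err
        cost += abs(err) * dt
    return cost

def learn_gains(iters=300, seed=1):
    """Crude random-search 'learning' of PID gains, standing in for the
    neural/adaptive-critic tuning the paper discusses."""
    rng = random.Random(seed)
    best = (1.0, 0.0, 0.0)
    best_cost = simulate(*best)
    for _ in range(iters):
        cand = tuple(max(0.0, g + rng.gauss(0, 0.2)) for g in best)
        c = simulate(*cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

print(learn_gains())
```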
Real-Time Wireless Data Acquisition System
NASA Technical Reports Server (NTRS)
Valencia, Emilio J.; Perotti, Jose; Lucena, Angel; Mata, Carlos
2007-01-01
Current and future aerospace requirements demand the creation of a new breed of sensing devices, with emphasis on reduced weight, power consumption, and physical size. This new generation of sensors must possess a high degree of intelligence to provide critical data efficiently and in real-time. Intelligence will include self-calibration, self-health assessment, and pre-processing of raw data at the sensor level. Most of these features are already incorporated in the Wireless Sensors Network (SensorNet(TradeMark)), developed by the Instrumentation Group at Kennedy Space Center (KSC). A system based on the SensorNet(TradeMark) architecture consists of data collection point(s) called Central Stations (CS) and intelligent sensors called Remote Stations (RS) where one or more CSs can be accommodated depending on the specific application. The CS's major function is to establish communications with the Remote Stations and to poll each RS for data and health information. The CS also collects, stores and distributes these data to the appropriate systems requiring the information. The system has the ability to perform point-to-point, multi-point and relay mode communications with an autonomous self-diagnosis of each communications link. Upon detection of a communication failure, the system automatically reconfigures to establish new communication paths. These communication paths are automatically and autonomously selected as the best paths by the system based on the existing operating environment. The data acquisition system currently under development at KSC consists of the SensorNet(TradeMark) wireless sensors as the remote stations and the central station called the Radio Frequency Health Node (RFHN). The RFHN is the central station which remotely communicates with the SensorNet(TradeMark) sensors to control them and to receive data. The system's salient feature is the ability to provide deterministic sensor data with accurate time stamps for both time critical and non-time critical applications. Current wireless standards such as Zigbee(TradeMark) and Bluetooth(Registered TradeMark) do not have these capabilities and cannot meet the needs that are provided by the SensorNet technology. Additionally, the system has the ability to automatically reconfigure the wireless communication link to a secondary frequency if interference is encountered and can autonomously search for a sensor that was perceived to be lost using the relay capabilities of the sensors and the secondary frequency. The RFHN and the SensorNet designs are based on modular architectures that allow for future increases in capability and the ability to expand or upgrade with relative ease. The RFHN and SensorNet sensors can also perform data processing, which forms a distributed processing architecture allowing the system to pass along information rather than just sending "raw data points" to the next higher level system. With a relatively small size, weight and power consumption, this system has the potential for both spacecraft and aircraft applications as well as ground applications that require time critical data.
NASA Astrophysics Data System (ADS)
Phipps, Marja; Lewis, Gina
2012-06-01
Over the last decade, intelligence capabilities within the Department of Defense/Intelligence Community (DoD/IC) have evolved from ad hoc, single source, just-in-time, analog processing; to multi source, digitally integrated, real-time analytics; to multi-INT, predictive Processing, Exploitation and Dissemination (PED). Full Motion Video (FMV) technology and motion imagery tradecraft advancements have greatly contributed to Intelligence, Surveillance and Reconnaissance (ISR) capabilities during this timeframe. Imagery analysts have exploited events, missions and high value targets, generating and disseminating critical intelligence reports within seconds of occurrence across operationally significant PED cells. Now, we go beyond FMV, enabling All-Source Analysts to effectively deliver ISR information in a multi-INT sensor rich environment. In this paper, we explore the operational benefits and technical challenges of an Activity Based Intelligence (ABI) approach to FMV PED. Existing and emerging ABI features within FMV PED frameworks are discussed, to include refined motion imagery tools, additional intelligence sources, activity relevant content management techniques and automated analytics.
Wireless spread-spectrum telesensor chip with synchronous digital architecture
Smith, Stephen F.; Turner, Gary W.; Wintenberg, Alan L.; Emery, Michael Steven
2005-03-08
A fully integrated wireless spread-spectrum sensor incorporating all elements of an "intelligent" sensor on a single circuit chip is capable of telemetering data to a receiver. Synchronous control of all elements of the chip provides low-cost, low-noise, and highly robust data transmission, in turn enabling the use of low-cost monolithic receivers.
Application of Kalman filters to robot calibration
NASA Technical Reports Server (NTRS)
Whitney, D. E.; Junkel, E. F.
1983-01-01
This report explores new uses of Kalman filter theory in manufacturing systems (robots in particular). The Kalman filter allows the robot to read its sensors plus external sensors and learn from its experience. In effect, the robot is given primitive intelligence. The study, which is applicable to any type of powered kinematic linkage, focuses on the calibration of a manipulator.
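A minimal scalar Kalman filter sketch showing the predict/update cycle by which repeated sensor readings refine an estimate, which is the sense in which the filter lets the robot "learn from its experience". The noise variances and measurements below are illustrative only, not values from the report.

```python
def kalman_update(x, P, z, R, Q=1e-4):
    """One predict/update cycle of a scalar Kalman filter: the estimate x
    (e.g., a joint offset being calibrated) is refined by a noisy sensor
    reading z with measurement variance R."""
    # predict (static parameter model: state unchanged, uncertainty grows slightly)
    P = P + Q
    # update with the new measurement
    K = P / (P + R)               # Kalman gain
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P

x, P = 0.0, 1.0                   # initial guess and its variance
for z in [0.9, 1.1, 1.05, 0.95]:  # hypothetical calibration measurements
    x, P = kalman_update(x, P, z, R=0.04)
print(x, P)
```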
Lightweight Sensor Authentication Scheme for Energy Efficiency in Ubiquitous Computing Environments.
Lee, Jaeseung; Sung, Yunsick; Park, Jong Hyuk
2016-12-01
The Internet of Things (IoT) is the intelligent technologies and services that mutually communicate information between humans and devices or between Internet-based devices. In IoT environments, various device information is collected from the user for intelligent technologies and services that control the devices. Recently, wireless sensor networks based on IoT environments are being used in sectors as diverse as medicine, the military, and commerce. Specifically, sensor techniques that collect relevant area data via mini-sensors after distributing smart dust in inaccessible areas like forests or military zones have been embraced as the future of information technology. IoT environments that utilize smart dust are composed of the sensor nodes that detect data using wireless sensors and transmit the detected data to middle nodes. Currently, since the sensors used in these environments are composed of mini-hardware, they have limited memory, processing power, and energy, and a variety of research that aims to make the best use of these limited resources is progressing. This paper proposes a method to utilize these resources while considering energy efficiency, and suggests lightweight mutual verification and key exchange methods based on a hash function that has no restrictions on operation quantity, velocity, and storage space. This study verifies the security and energy efficiency of this method through security analysis and function evaluation, comparing with existing approaches. The proposed method has great value in its applicability as a lightweight security technology for IoT environments.
Lightweight Sensor Authentication Scheme for Energy Efficiency in Ubiquitous Computing Environments
Lee, Jaeseung; Sung, Yunsick; Park, Jong Hyuk
2016-01-01
The Internet of Things (IoT) is the intelligent technologies and services that mutually communicate information between humans and devices or between Internet-based devices. In IoT environments, various device information is collected from the user for intelligent technologies and services that control the devices. Recently, wireless sensor networks based on IoT environments are being used in sectors as diverse as medicine, the military, and commerce. Specifically, sensor techniques that collect relevant area data via mini-sensors after distributing smart dust in inaccessible areas like forests or military zones have been embraced as the future of information technology. IoT environments that utilize smart dust are composed of the sensor nodes that detect data using wireless sensors and transmit the detected data to middle nodes. Currently, since the sensors used in these environments are composed of mini-hardware, they have limited memory, processing power, and energy, and a variety of research that aims to make the best use of these limited resources is progressing. This paper proposes a method to utilize these resources while considering energy efficiency, and suggests lightweight mutual verification and key exchange methods based on a hash function that has no restrictions on operation quantity, velocity, and storage space. This study verifies the security and energy efficiency of this method through security analysis and function evaluation, comparing with existing approaches. The proposed method has great value in its applicability as a lightweight security technology for IoT environments. PMID:27916962
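A minimal sketch of hash-based mutual challenge-response, the general style of lightweight verification the paper argues for; it uses only standard HMAC primitives with invented keys and nonces, and is not the authors' protocol.

```python
import hmac, hashlib, os

def respond(shared_key, challenge):
    """Hash-based response to a challenge; only lightweight hashing is used,
    matching the goal of avoiding heavy cryptographic operations on the node."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

# hypothetical pre-shared key between a sensor node and a middle node
key = os.urandom(16)

# node A challenges node B ...
nonce_a = os.urandom(8)
resp_b = respond(key, nonce_a)
# ... and B challenges A in the opposite direction
nonce_b = os.urandom(8)
resp_a = respond(key, nonce_b)

# each side verifies the other's response with a constant-time comparison
print(hmac.compare_digest(resp_b, respond(key, nonce_a)),
      hmac.compare_digest(resp_a, respond(key, nonce_b)))
```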
The design of liquid drip speed monitoring device system based on MCU
NASA Astrophysics Data System (ADS)
Zheng, Shiyong; Li, Zhao; Li, Biqing
2017-08-01
This paper proposes an intelligent transfusion control and monitoring system designed around an AT89S52 microcontroller as its core, with a keyboard and a photoelectric sensor as the input module and a digital tube display and a motor as the output module. The keyboard is independent, and the photoelectric sensor provides reliable detection of the liquid drip rate and the state of the transfusion bottle. When the remaining liquid is less than the warning value, the system sounds an alarm, which can be cleared by a hand movement. Taking advantage of the motor's controllable speed and sustained input pulse power, the system can raise or lower the drip rate of the bottle, achieving the purpose of intelligent drip-speed control.
NASA Astrophysics Data System (ADS)
Luber, David R.; Marion, John E.; Fields, David
2012-05-01
Logos Technologies has developed and fielded the Kestrel system, an aerostat-based, wide area persistent surveillance system dedicated to force protection and ISR mission execution operating over forward operating bases. Its development included novel imaging and stabilization capability for day/night operations on military aerostat systems. The Kestrel system's contribution is a substantial enhancement to aerostat-based, force protection systems which to date have relied on narrow field of view ball gimbal sensors to identify targets of interest. This inefficient mechanism to conduct wide area field of view surveillance is greatly enhanced by Kestrel's ability to maintain a constant motion imagery stare of the entire forward operating base (FOB) area. The Kestrel airborne sensor enables 360° coverage out to extended ranges which covers a city sized area at moderate resolution, while cueing a narrow field of view sensor to provide high resolution imagery of targets of interest. The ground station exploitation system enables operators to autonomously monitor multiple regions of interest in real time, and allows for backtracking through the recorded imagery, while continuing to monitor ongoing activity. Backtracking capability allows operators to detect threat networks, their CONOPS, and locations of interest. Kestrel's unique advancement has already been utilized successfully in OEF operations.
Intelligent Chemical Sensor Systems for In-space Safety Applications
NASA Technical Reports Server (NTRS)
Hunter, G. W.; Xu, J. C.; Neudeck, P. G.; Makel, D. B.; Ward, B.; Liu, C. C.
2006-01-01
Future in-space and lunar operations will require significantly improved monitoring and Integrated System Health Management (ISHM) throughout the mission. In particular, the monitoring of chemical species is an important component of an overall monitoring system for space vehicles and operations. For example, in leak monitoring of propulsion systems during launch, in-space, and on lunar surfaces, detection of low concentrations of hydrogen and other fuels is important to avoid explosive conditions that could harm personnel and damage the vehicle. Dependable vehicle operation also depends on the timely and accurate measurement of these leaks. Thus, the development of a sensor array to determine the concentration of fuels such as hydrogen, hydrocarbons, or hydrazine as well as oxygen is necessary. Work has been on-going to develop an integrated smart leak detection system based on miniaturized sensors to detect hydrogen, hydrocarbons, or hydrazine, and oxygen. The approach is to implement Microelectromechanical Systems (MEMS) based sensors incorporated with signal conditioning electronics, power, data storage, and telemetry enabling intelligent systems. The final sensor system will be self-contained with a surface area comparable to a postage stamp. This paper discusses the development of this "Lick and Stick" leak detection system and its application to In-Space Transportation and other Exploration applications.
Sensor Systems for Vehicle Environment Perception in a Highway Intelligent Space System
Tang, Xiaofeng; Gao, Feng; Xu, Guoyan; Ding, Nenggen; Cai, Yao; Ma, Mingming; Liu, Jianxing
2014-01-01
A Highway Intelligent Space System (HISS) is proposed to study vehicle environment perception in this paper. The nature of HISS is that a space sensor system using laser, ultrasonic or radar sensors is installed in a highway environment, and communication technology is used to realize the information exchange between the HISS server and vehicles, which provides vehicles with the surrounding road information. Considering the high-speed feature of vehicles on highways, when vehicles are about to pass a road section ahead that is prone to accidents, the vehicle driving state should be predicted to ensure drivers have road environment perception information in advance, thereby ensuring vehicle driving safety and stability. In order to verify the accuracy and feasibility of the HISS, a traditional vehicle-mounted sensor system for environment perception is used to obtain the relative driving state. Furthermore, an inter-vehicle dynamics model is built and a model predictive control approach is used to predict the driving state in the following period. Finally, the simulation results show that using the HISS for environment perception can arrive at the same results as those detected by a traditional vehicle-mounted sensor system. Meanwhile, we can further draw the conclusion that using HISS to realize vehicle environment perception can ensure system stability, thereby demonstrating the method's feasibility. PMID:24834907
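A generic formulation of the inter-vehicle prediction step, assuming a simple kinematic spacing model and a quadratic MPC cost; the paper's actual dynamics model, weights, and constraints are not specified here, so all symbols below are illustrative.

```latex
\begin{aligned}
d_{k+1} &= d_k + T\,\bigl(v^{\mathrm{lead}}_k - v_k\bigr), \qquad
v_{k+1} = v_k + T\,a_k,\\
\min_{a_0,\dots,a_{N-1}} \; J &= \sum_{k=0}^{N-1}\Bigl[\,q\,\bigl(d_k - d_{\mathrm{ref}}\bigr)^2 + r\,a_k^2\,\Bigr]
\quad \text{s.t. } a_{\min}\le a_k\le a_{\max},
\end{aligned}
```

where \(d_k\) is the inter-vehicle spacing, \(v_k\) and \(a_k\) the following vehicle's speed and acceleration, \(T\) the sampling period, and \(d_{\mathrm{ref}}\) a desired spacing; the optimizer predicts the driving state over the horizon \(N\) and applies only the first control move.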
Wearable medical devices using textile and flexible technologies for ambulatory monitoring.
Dittmar, Andre; Meffre, Richard; De Oliveira, Fabrice; Gehin, Claudine; Delhomme, Georges
2005-01-01
Health smart clothes, which are in contact with almost the entire surface of the skin, offer large possibilities for locating sensors for non-invasive measurements. Headbands, collars, tee-shirts, socks, shoes, and belts for the chest, arm, wrist, and legs ... provide purpose-specific locations, taking into account their proximity to an organ or a source of biosignal as well as the ergonomic (user-friendly) possibility of fixing a sensor and the associated instrumentation (batteries, amplifiers, signal processing, telecommunications, alarms, displays ...). Progress in science and technology offers, for the first time, intelligence, speed, miniaturization, sophistication and new materials at low cost. In this new landscape, microtechnologies, information technologies and telecommunications are a key factor. Microsensors: microtechnologies offer the possibility not only of small size but also of intelligent, active devices that work with low energy and are wireless and non-invasive or minimally invasive. These sensors have to be thin, flexible and compatible with textiles, or made using textile technologies and new fibers with specific mechanical, electrical and optical properties. The field of applications is very large, e.g. continuous monitoring of elderly populations, professional and military activities, athletes' performance and condition, and people with disabilities. The research is oriented in two complementary directions: improving the relevance of each sensor, and increasing the number of sensors to obtain more global, synthetic and robust information.
Noise-immune multisensor transduction of speech
NASA Astrophysics Data System (ADS)
Viswanathan, Vishu R.; Henry, Claudia M.; Derr, Alan G.; Roucos, Salim; Schwartz, Richard M.
1986-08-01
Two types of configurations of multiple sensors were developed, tested and evaluated in speech recognition application for robust performance in high levels of acoustic background noise: One type combines the individual sensor signals to provide a single speech signal input, and the other provides several parallel inputs. For single-input systems, several configurations of multiple sensors were developed and tested. Results from formal speech intelligibility and quality tests in simulated fighter aircraft cockpit noise show that each of the two-sensor configurations tested outperforms the constituent individual sensors in high noise. Also presented are results comparing the performance of two-sensor configurations and individual sensors in speaker-dependent, isolated-word speech recognition tests performed using a commercial recognizer (Verbex 4000) in simulated fighter aircraft cockpit noise.
Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Communications and Intelligent Systems Division Overview
NASA Technical Reports Server (NTRS)
Emerson, Dawn
2017-01-01
Provides expertise, and plans, conducts and directs research and engineering development in the competency fields of advanced communications and intelligent systems technologies for applications in current and future aeronautics and space systems.Advances communication systems engineering, development and analysis needed for Glenn Research Center's leadership in communications and intelligent systems technology. Focus areas include advanced high frequency devices, components, and antennas; optical communications, health monitoring and instrumentation; digital signal processing for communications and navigation, and cognitive radios; network architectures, protocols, standards and network-based applications; intelligent controls, dynamics and diagnostics; and smart micro- and nano-sensors and harsh environment electronics. Research and discipline engineering allow for the creation of innovative concepts and designs for aerospace communication systems with reduced size and weight, increased functionality and intelligence. Performs proof-of-concept studies and analyses to assess the impact of the new technologies.
HyperForest: A high performance multi-processor architecture for real-time intelligent systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.
1997-04-01
Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.
Terra Harvest Open Source Environment (THOSE): a universal unattended ground sensor controller
NASA Astrophysics Data System (ADS)
Gold, Joshua; Klawon, Kevin; Humeniuk, David; Landoll, Darren
2011-06-01
Under the Terra Harvest Program, the Defense Intelligence Agency (DIA) has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future Unattended Ground Sensor System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n-play contributions that include various peripherals, such as sensors, cameras, etc., and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute (UDRI), is developing the Terra Harvest Open Source Environment (THOSE), a Java-based system running on an embedded Linux Operating System (OS). The Use Cases on which the software is developed support the full range of UGS operational scenarios such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the implementation strategy for some of the key software components. Preliminary integration/test results and the Team's approach for transitioning the THOSE design and source code to the Government are also presented.
An application of artificial intelligence theory to reconfigurable flight control
NASA Technical Reports Server (NTRS)
Handelman, David A.
1987-01-01
Artificial intelligence techniques were used, along with statistical hypothesis testing and modern control theory, to help the pilot cope with the issues of information, knowledge, and capability in the event of a failure. An intelligent flight control system is being developed which utilizes knowledge of cause and effect relationships between all aircraft components. It will screen the information available to the pilot, supplement his knowledge, and, most importantly, utilize the remaining flight capability of the aircraft following a failure. The list of failure types the control system will accommodate includes sensor failures, actuator failures, and structural failures.
Vehicle-based vision sensors for intelligent highway systems
NASA Astrophysics Data System (ADS)
Masaki, Ichiro
1989-09-01
This paper describes a vision system, based on an ASIC (Application-Specific Integrated Circuit) approach, for vehicle guidance on highways. After reviewing related work in the fields of intelligent vehicles, stereo vision, and ASIC-based approaches, the paper focuses on a stereo vision system for intelligent cruise control. The system measures the distance to the vehicle in front using trinocular triangulation. An application-specific processor architecture was developed to offer low mass-production cost, real-time operation, low power consumption, and small physical size. The system was installed in the trunk of a car and evaluated successfully on highways.
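The abstract does not give the triangulation details, but range from stereo disparity follows the standard relation Z = f·B/d. A minimal sketch, with hypothetical focal-length and baseline values rather than the paper's parameters:

```python
def stereo_range(disparity_px, focal_px=800.0, baseline_m=0.3):
    """Estimate range (metres) to a target from stereo disparity.

    disparity_px : horizontal pixel shift of the target between cameras
    focal_px     : camera focal length in pixels (hypothetical value)
    baseline_m   : distance between camera optical centres (hypothetical value)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

# e.g. a 6-pixel disparity with these parameters corresponds to 40 m
print(stereo_range(6.0))  # -> 40.0
```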
On-road vehicle detection: a review.
Sun, Zehang; Bebis, George; Miller, Ronald
2006-05-01
Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed, such as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods aiming to quickly hypothesize the location of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.
Animation graphic interface for the space shuttle onboard computer
NASA Technical Reports Server (NTRS)
Wike, Jeffrey; Griffith, Paul
1989-01-01
Graphics interfaces designed to operate on space-qualified hardware challenge software designers to display complex information under processing power and physical size constraints. Under contract to Johnson Space Center, MICROEXPERT Systems is currently constructing an intelligent interface for the LASER DOCKING SENSOR (LDS) flight experiment. Part of this interface is a graphic animation display for Rendezvous and Proximity Operations. The displays have been designed in consultation with Shuttle astronauts. The displays show multiple views of a satellite relative to the shuttle, coupled with numeric attitude information. The graphics are generated using position data received by the Shuttle Payload and General Support Computer (PGSC) from the Laser Docking Sensor. Some of the design considerations include crew member preferences in graphic data representation, single versus multiple window displays, mission tailoring of graphic displays, realistic 3D images versus generic icon representations of real objects, the physical relationship of the observers to the graphic display, how numeric or textual information should interface with graphic data, in what frame of reference objects should be portrayed, recognizing conditions of display information overload, and screen format and placement consistency.
Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle
Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro
2018-01-01
Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the necessity to obtain a technological tool that addresses this limitation in the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows a new embedded agent model to be deployed in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state, and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock has been developed. PMID:29301310
Radar sensors for intersection collision avoidance
NASA Astrophysics Data System (ADS)
Jocoy, Edward H.; Phoel, Wayne G.
1997-02-01
On-vehicle sensors for collision avoidance and intelligent cruise control are receiving considerable attention as part of Intelligent Transportation Systems. Most of these sensors are radars and 'look' in the direction of the vehicle's headway, that is, in the direction ahead of the vehicle. Calspan SRL Corporation is investigating the use of on-vehicle radar for Intersection Collision Avoidance (ICA). Four crash scenarios are considered, and the goal is to design, develop, and install a collision warning system in a test vehicle and conduct both test-track and in-traffic experiments. Current efforts include simulations to examine ICA geometry-dependent design parameters and the design of an on-vehicle radar and tracker for threat detection. This paper discusses some of the simulation and radar design efforts. In addition, an available headway radar was modified to scan the wide angles (±90°) associated with ICA scenarios. Preliminary proof-of-principle tests are underway as a risk reduction effort. Some initial target detection results are presented.
Context-Aided Sensor Fusion for Enhanced Urban Navigation
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-01-01
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environment fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of the available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080
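The cited system uses a constrained Unscented Kalman filter with a contextual reasoning module; as a greatly simplified illustration of the context-aiding idea only (not the authors' filter), the sketch below shows a linear Kalman measurement update in which a hypothetical context-quality score inflates the GNSS measurement covariance when conditions degrade.

```python
import numpy as np

def context_aided_update(x, P, z, H, R_nominal, context_quality):
    """One Kalman measurement update where a context module scales the
    GNSS measurement covariance. context_quality in (0, 1]:
    1.0 = open sky, small values = urban canyon / degraded signal.
    Illustrative only; the cited system uses a constrained UKF."""
    R = R_nominal / max(context_quality, 1e-3)   # inflate noise when quality drops
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```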
Intelligent transient transitions detection of LRE test bed
NASA Astrophysics Data System (ADS)
Zhu, Fengyu; Shen, Zhengguang; Wang, Qi
2013-01-01
Health monitoring systems implement monitoring strategies for complex systems in order to avoid catastrophic failure, extend life, and improve asset management. A health monitoring system generally encompasses intelligence at many levels and in many sub-systems, including sensors, actuators, and devices. In this paper, a smart sensor is studied that is used to detect transient transitions on a liquid-propellant rocket engine test bed. Because operating conditions change dramatically, wavelet decomposition is used to analyze the data in real time. In contrast to the traditional Fourier transform method, the major advantage of adding wavelet analysis is the ability to detect transient transitions as well as to obtain the frequency content using a much smaller data set. Historically, transient transitions were only detected by offline analysis of the data. The methods proposed in this paper provide an opportunity to detect transient transitions automatically, as well as many additional data anomalies, and provide improved data-correction and sensor health diagnostic abilities. The developed algorithms have been tested on actual rocket test data.
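As an illustration of the general wavelet approach (not the paper's specific algorithm or thresholds), a sketch that flags likely transient samples by thresholding the finest-scale detail coefficients, using the PyWavelets package; the wavelet family, level, and threshold factor are assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def detect_transients(signal, wavelet="db4", level=3, k=5.0):
    """Flag samples likely belonging to a transient transition by
    thresholding the finest-scale wavelet detail energy.
    Illustrative sketch; thresholds for test-bed data would be tuned."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    d1 = coeffs[-1]                            # finest-scale detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745     # robust noise estimate
    hits = np.abs(d1) > k * sigma              # coefficients well above the noise floor
    # map coefficient indices back to approximate sample positions
    return np.nonzero(hits)[0] * 2
```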
Visual tracking strategies for intelligent vehicle highway systems
NASA Astrophysics Data System (ADS)
Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles
1995-01-01
The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.
Feature Selection for Wheat Yield Prediction
NASA Astrophysics Data System (ADS)
Ruß, Georg; Kruse, Rudolf
Carrying out effective and sustainable agriculture has become an important issue in recent years. Agricultural production has to keep up with an ever-increasing population by taking advantage of a field’s heterogeneity. Nowadays, modern technology such as the global positioning system (GPS) and a multitude of developed sensors enable farmers to better measure their fields’ heterogeneities. For this small-scale, precise treatment the term precision agriculture has been coined. However, the large amounts of data that are (literally) harvested during the growing season have to be analysed. In particular, the farmer is interested in knowing whether a newly developed heterogeneity sensor is potentially advantageous or not. Since the sensor data are readily available, this issue should be seen from an artificial intelligence perspective. There it can be treated as a feature selection problem. The additional task of yield prediction can be treated as a multi-dimensional regression problem. This article aims to present an approach towards solving these two practically important problems using artificial intelligence and data mining ideas and methodologies.
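As a rough illustration of treating the problem as feature selection plus regression (not the authors' specific pipeline), a scikit-learn sketch on hypothetical sensor-derived features might look like this:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# X: one row per field cell, columns = sensor-derived features
# (e.g. past yield, conductivity, N-sensor readings); y: measured yield.
# Synthetic data used purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = X[:, 0] * 2.0 + X[:, 3] * 0.5 + rng.normal(scale=0.3, size=500)

# Keep only the k most informative features, then fit a regression model.
model = make_pipeline(SelectKBest(f_regression, k=3), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```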
Miss-distance indicator for tank main guns
NASA Astrophysics Data System (ADS)
Bornstein, Jonathan A.; Hillis, David B.
1996-06-01
Tank main gun systems must possess extremely high levels of accuracy to perform successfully in battle. Under some circumstances, the first round fired in an engagement may miss the intended target, and it becomes necessary to rapidly correct fire. A breadboard automatic miss-distance indicator system was previously developed to assist in this process. The system, which would be mounted on a 'wingman' tank, consists of a charge-coupled device (CCD) camera and computer-based image-processing system, coupled with a separate infrared sensor to detect muzzle flash. For the system to be successfully employed with current-generation tanks, it must be reliable, be relatively low cost, and respond rapidly enough to maintain current firing rates. Recently, the original indicator system was developed further in an effort to assist in achieving these goals. Efforts have focused primarily upon enhanced image-processing algorithms, both to improve system reliability and to reduce processing requirements. Intelligent application of newly refined trajectory models has permitted examination of reduced areas of interest and enhanced rejection of false alarms, significantly improving system performance.
Color and Contour Based Identification of Stem of Coconut Bunch
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin
2017-08-01
Vision is a key component of artificial intelligence and automated robotics. Sensors or cameras are the sight organs of a robot; only through these can it locate itself or identify the shape of a regular or irregular object. This paper presents a method for identifying an object based on color and contour recognition, using a camera and digital image processing techniques, for robotic applications. To identify the contour, a shape-matching technique is used, which takes input data from the database provided and checks each candidate for a shape match. The shape matching is based on iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. HSV data, along with this iteration, is used to identify a quadrilateral, which is the required contour. The algorithm could also be used in a non-deterministic plane, since it relies on HSV values exclusively.
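A minimal OpenCV sketch of the described pipeline, HSV thresholding followed by contour shape matching, with placeholder HSV ranges and a hypothetical template contour standing in for the paper's database values:

```python
import cv2
import numpy as np

def find_stem(frame_bgr, template_contour, hsv_lo, hsv_hi, max_dissimilarity=0.3):
    """Locate a target (e.g. the coconut-bunch stem) by HSV colour
    thresholding followed by contour shape matching.
    hsv_lo / hsv_hi and template_contour stand in for the paper's
    database values (placeholders, not published parameters)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:  # iterate over candidate contours in the thresholded image
        d = cv2.matchShapes(c, template_contour, cv2.CONTOURS_MATCH_I1, 0)
        if d < max_dissimilarity and (best is None or d < best[0]):
            best = (d, c)
    return best  # (dissimilarity, contour) or None
```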
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open-loop system and finds the spatial distribution of disturbances that produces the maximal effect on the entire open-loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuator pairs in sleep mode.
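As a simplified illustration of ranking candidate sensor locations by a disturbance-to-output transfer-function norm (not the paper's two-stage, spatially distributed formulation), the H2 norm can be computed from the controllability Gramian; a sketch assuming a stable state-space model and a list of candidate output matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, B, C):
    """Squared H2 norm of the disturbance-to-output map y = C x,
    x' = A x + B w, for stable A (illustrative only)."""
    # Controllability Gramian P solves A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.trace(C @ P @ C.T))

def rank_sensor_locations(A, B, candidate_C):
    """Return indices of candidate output matrices (sensor locations),
    sorted so the first gives the smallest disturbance-to-measurement norm."""
    return sorted(range(len(candidate_C)),
                  key=lambda i: h2_norm_sq(A, B, candidate_C[i]))
```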
Miniature Intelligent Sensor Module
NASA Technical Reports Server (NTRS)
Beech, Russell S.
2007-01-01
An electronic unit denoted the Miniature Intelligent Sensor Module performs sensor-signal-conditioning functions and local processing of sensor data. The unit includes four channels of analog input/output circuitry, a processor, volatile and nonvolatile memory, and two Ethernet communication ports, all housed in a weathertight enclosure. The unit accepts AC or DC power. The analog inputs provide programmable gain, offset, and filtering as well as shunt calibration and auto-zeroing. Analog outputs include sine, square, and triangular waves having programmable frequencies and amplitudes, as well as programmable-amplitude DC. One innovative aspect of the design of this unit is the integration of a relatively powerful processor and large amount of memory along with the sensor-signal-conditioning circuitry so that sophisticated computer programs can be used to acquire and analyze sensor data and estimate and track the health of the overall sensor-data-acquisition system of which the unit is a part. The unit includes calibration, zeroing, and signal-feedback circuitry to facilitate health monitoring. The processor is also integrated with programmable logic circuitry in such a manner as to simplify and enhance acquisition of data and generation of analog outputs. A notable unique feature of the unit is a cold-junction compensation circuit in the back shell of a sensor connector. This circuit makes it possible to use K-type thermocouples without compromising a housing seal. Replicas of this unit may prove useful in industrial and manufacturing settings, especially in such large outdoor facilities as refineries. Two features can be expected to simplify installation: the weathertight housings should make it possible to mount the units near sensors, and the Ethernet communication capability of the units should facilitate establishment of communication connections for the units.
Yan, Dan; Yang, Yong; Hong, Yingping; Liang, Ting; Yao, Zong; Chen, Xiaoyong; Xiong, Jijun
2018-01-01
Low-cost wireless temperature measurement has significant value in the food industry, logistics, agriculture, portable medical equipment, intelligent wireless health monitoring, and many areas of everyday life. A wireless passive temperature sensor based on PCB (Printed Circuit Board) materials is reported in this paper. The advantages of the sensor include a simple mechanical structure, convenient processing, low cost, and ease of integration. The temperature-sensitive structure of the sensor is a dielectric-loaded resonant cavity, consisting of the PCB substrate. The sensitive structure also integrates a patch antenna for the transmission of temperature signals. The temperature-sensing mechanism is that the dielectric constant of the PCB substrate changes with temperature, which causes the resonant frequency of the resonator to vary. The temperature can then be measured by detecting the changes in the sensor's working frequency. The PCB-based wireless passive temperature sensor prototype was prepared through theoretical design, parameter analysis, software simulation, and experimental testing. The high- and low-temperature sensing performance of the sensor was tested. The resonant frequency decreases from 2.434 GHz to 2.379 GHz as the temperature increases from −40 °C to 125 °C. The fitting curve shows that the experimental data have good linearity, and three repeated tests showed that the sensor possesses good repeatability. The average sensitivity from the three repeated measurements is 347.45 kHz/°C. This study demonstrates the feasibility of the PCB-based wireless passive sensor, which provides a low-cost temperature sensing solution for everyday life, modern agriculture, thriving intelligent health devices, and so on, and also enriches PCB product lines and applications. PMID:29439393
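Using only the two endpoint values quoted in the abstract (2.434 GHz at -40 °C, 2.379 GHz at 125 °C), a rough two-point linear calibration that inverts a measured frequency to temperature could be sketched as follows; the reported 347.45 kHz/°C sensitivity comes from fitting the full measurement set, so this is only an approximation.

```python
import numpy as np

# Endpoint values quoted in the abstract (GHz, degrees C); the reported
# 347.45 kHz/degC sensitivity comes from fitting the full measurement set.
freq_ghz = np.array([2.434, 2.379])
temp_c = np.array([-40.0, 125.0])

# Linear calibration: temperature as a function of resonant frequency.
slope, intercept = np.polyfit(freq_ghz, temp_c, 1)

def temperature_from_frequency(f_ghz):
    """Invert a measured resonant frequency (GHz) to temperature (degC)
    using the two-point linear calibration above (rough sketch only)."""
    return slope * f_ghz + intercept

print(temperature_from_frequency(2.40))  # ~62 degC with this two-point fit
```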