Sample records for image sensor capable

  1. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS), combines classical image processing techniques with detailed sensor models to produce static and time-dependent simulations of a variety of sensor systems, including imaging, tracking, and point-target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring two-dimensional array sensors that can be used for either imaging or point-source detection.

  2. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed with novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated that offer synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors have been combined with an on-chip digital processor to form complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, ranging from optical metrology to machine vision on the factory floor and in robotics applications.

  3. Ultra-sensitive fluorescent imaging-biosensing using biological photonic crystals

    NASA Astrophysics Data System (ADS)

    Squire, Kenny; Kong, Xianming; Wu, Bo; Rorrer, Gregory; Wang, Alan X.

    2018-02-01

    Optical biosensing is a growing area of research known for its low limits of detection. Among optical sensing techniques, fluorescence detection is one of the most established and prevalent. Fluorescence imaging is an optical biosensing modality that exploits the sensitivity of fluorescence in an easy-to-use process: a user places a sample on a sensor and uses an imager, such as a camera, to collect the results. The image can then be processed to determine the presence of the analyte. Fluorescence imaging is appealing because it can be performed with as little as a light source, a camera and a data processor, making it well suited to untrained personnel and requiring no expensive equipment. Fluorescence imaging sensors generally employ an immunoassay procedure to selectively trap analytes such as antigens or antibodies. When the analyte is present, the sensor fluoresces, transducing the chemical reaction into an optical signal that can be imaged. Enhancement of this fluorescence leads to an enhancement in the detection capabilities of the sensor. Diatoms are unicellular algae with a biosilica shell called a frustule. The frustule is porous with periodic nanopores, making diatoms biological photonic crystals. Additionally, the porous nature of the frustule provides a large surface area with multiple analyte binding sites. In this paper, we fabricate a diatom-based ultra-sensitive fluorescence imaging biosensor capable of detecting the antibody mouse immunoglobulin down to a concentration of 1 nM. The measured signal shows an enhancement of 6× compared to sensors fabricated without diatoms.

  4. An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.

    PubMed

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U

    2015-03-06

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes toward self-powered operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques, such as reset and select boosting, have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with a 1 V supply at a 5 fps frame rate. Up to 30 µW of power can be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency, allowing energy-autonomous operation at a 72.5% duty cycle.
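
    The closing figures invite a quick energy-balance check. The sketch below (Python, using the abstract's numbers) computes the largest imaging duty cycle for which energy harvested during the idle fraction covers imaging consumption; the simple balance used here is an assumption, which is presumably why it lands near, but not exactly at, the reported 72.5%.

      # Energy-balance sketch for a duty-cycled, self-powered imager.
      # Figures come from the abstract; the authors' exact duty-cycle
      # model is not given, so this balance is an illustrative guess.
      P_IMAGING_W = 6.53e-6   # imager consumption at 1 V, 5 fps
      P_HARVEST_W = 30e-6     # power generated by the EHI pixel array
      PMS_EFFICIENCY = 0.5    # power-management system efficiency

      p_net = P_HARVEST_W * PMS_EFFICIENCY  # net power while harvesting

      # Choose duty cycle d so that d * P_imaging = (1 - d) * p_net:
      d = p_net / (P_IMAGING_W + p_net)
      print(f"max imaging duty cycle ~ {d:.1%}")  # ~69.7% with these figures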

  5. An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability

    PubMed Central

    Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U.

    2015-01-01

    An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes toward self-powered operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques, such as reset and select boosting, have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with a 1 V supply at a 5 fps frame rate. Up to 30 µW of power can be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency, allowing energy-autonomous operation at a 72.5% duty cycle. PMID:25756863

  6. Magnetic resonance imaging-compatible tactile sensing device based on a piezoelectric array.

    PubMed

    Hamed, Abbi; Masamune, Ken; Tse, Zion Tsz Ho; Lamperth, Michael; Dohi, Takeyoshi

    2012-07-01

    Minimally invasive surgery is a widely used medical technique, one of the drawbacks of which is the loss of direct sense of touch during the operation. Palpation is the use of fingertips to explore and make fast assessments of tissue morphology. Although technologies have been developed to equip minimally invasive surgery tools with haptic feedback capabilities, the majority focus on tissue stiffness profiling and tool-tissue interaction force measurement. For greatly increased diagnostic capability, a magnetic resonance imaging-compatible tactile sensor design is proposed, which allows minimally invasive surgery to be performed under image guidance, combining the strong soft-tissue imaging capability of magnetic resonance imaging with intuitive palpation. The sensing unit is based on a piezoelectric sensor methodology, which conforms to the stringent mechanical and electrical design requirements imposed by the magnetic resonance environment. The sensor mechanical design and the device integration into a 0.2 Tesla open magnetic resonance imaging scanner are described, together with the device's magnetic resonance compatibility testing. Its design limitations and potential future improvements are also discussed. In summary, a tactile sensing unit based on a piezoelectric sensor principle is proposed, designed for magnetic resonance imaging-guided interventions.

  7. Imaging optical sensor arrays.

    PubMed

    Walt, David R

    2002-10-01

    Imaging optical fibres have been etched to prepare microwell arrays. These microwells have been loaded with sensing materials such as bead-based sensors and living cells to create high-density sensor arrays. The extremely small sizes and volumes of the wells enable high sensitivity and high information content sensing capabilities.

  8. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

    Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  9. Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications

    NASA Astrophysics Data System (ADS)

    Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.

    2002-08-01

    We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.

  10. An airborne thematic thermal infrared and electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    Sun, Xiuhong; Shu, Peter

    2011-08-01

    This paper describes an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS) and its potential applications. The ATTIREOIS sensor payload consists of two sets of advanced Focal Plane Arrays (FPAs): a broadband Thermal InfraRed Sensor (TIRS) and a four-band Multispectral Electro-Optical Sensor (MEOS) to approximate Landsat ETM+ bands 1, 2, 3, 4, and 6, and LDCM bands 2, 3, 4, 5, and 10+11. The airborne TIRS is a 3-axis stabilized payload capable of providing 3D photogrammetric images with a 1,850-pixel swath width via pushbroom operation. MEOS has a total of 116 million simultaneous sensor counts, capable of providing 3 cm spatial resolution multispectral orthophotos for continuous airborne mapping. ATTIREOIS is a complete, standalone, easy-to-use portable imaging instrument for light aerial vehicle deployment. Its miniaturized backend data system operates all ATTIREOIS imaging sensor components, an INS/GPS, and an e-Gimbal™ Control Electronic Unit (ECU) with a data throughput of 300 Megabytes/sec. The backend provides advanced onboard processing, performing autonomous raw sensor imagery development, TIRS image track-recovery reconstruction, LWIR/VNIR multi-band co-registration, and photogrammetric image processing. With geometric optics and boresight calibrations, the ATTIREOIS data products are directly georeferenced with an accuracy of approximately one meter. A prototype ATTIREOIS has been configured, and its sample LWIR/EO image data will be presented. Potential applications of ATTIREOIS include: 1) providing timely, cost-effective, precisely and directly georeferenced surface emissive and solar reflective LWIR/VNIR multispectral images via a private Google Earth Globe to enhance NASA's Earth science research capabilities; and 2) underflying satellites to support satellite measurement calibration and validation observations.

  11. Reduced signal crosstalk multi neurotransmitter image sensor by microhole array structure

    NASA Astrophysics Data System (ADS)

    Ogaeri, Yuta; Lee, You-Na; Mitsudome, Masato; Iwata, Tatsuya; Takahashi, Kazuhiro; Sawada, Kazuaki

    2018-06-01

    A microhole array structure combined with an enzyme immobilization method using magnetic beads can enhance the target discernment capability of a multi-neurotransmitter image sensor. Here we report the fabrication and evaluation of the H+-diffusion-preventing capability of the sensor with the array structure. The structure, formed in SU-8 photoresist, has holes with a size of 24.5 × 31.6 µm². Sensors were prepared with array structures of three different heights: 0, 15, and 60 µm. With the 60 µm high structure, a 48% reduction in output voltage is measured at an H+-sensitive null pixel located 75 µm from the acetylcholinesterase (AChE)-immobilized pixel, which is the starting point of H+ diffusion. The suppressed H+ migration is shown in a two-dimensional (2D) image in real time. The sensor parameters, such as the height of the array structure and the measuring time, are optimized experimentally. The sensor is expected to effectively distinguish various neurotransmitters in biological samples.

  12. Satellite-based Tropical Cyclone Monitoring Capabilities

    NASA Astrophysics Data System (ADS)

    Hawkins, J.; Richardson, K.; Surratt, M.; Yang, S.; Lee, T. F.; Sampson, C. R.; Solbrig, J.; Kuciauskas, A. P.; Miller, S. D.; Kent, J.

    2012-12-01

    Satellite remote sensing capabilities to monitor tropical cyclone (TC) location, structure, and intensity have evolved by utilizing a combination of operational and research and development (R&D) sensors. The microwave imagers from the operational Defense Meteorological Satellite Program [Special Sensor Microwave/Imager (SSM/I) and the Special Sensor Microwave Imager Sounder (SSMIS)] form the "base" for structure observations due to their ability to view through upper-level clouds, their modest swath sizes, and their ability to capture most storm structure features. The NASA TRMM microwave imager and precipitation radar continue their 15+ year missions in serving the TC warning and research communities. The cessation of NASA's QuikSCAT satellite after more than a decade of service is sorely missed, but India's OceanSat-2 scatterometer is now providing crucial ocean surface wind vectors in addition to the Navy's WindSat ocean surface wind vector retrievals. Another Advanced Scatterometer (ASCAT) onboard EUMETSAT's MetOp-2 satellite is slated for launch soon. Passive microwave imagery has received a much needed boost with the launch of the French/Indian Megha-Tropiques imager in September 2011, greatly supplementing the very successful NASA TRMM pathfinder with a larger swath and more frequent temporal sampling. While initial data issues have delayed data utilization, current news indicates this data will be available in 2013. Future NASA Global Precipitation Mission (GPM) sensors starting in 2014 will provide enhanced capabilities. Also, the inclusion of the new microwave sounder data from the NPP ATMS (October 2011) will assist in mapping TC convective structures. The National Polar-orbiting Partnership (NPP) program's VIIRS sensor includes a day/night band (DNB) with the capability to view TC cloud structure at night when sufficient lunar illumination exists. Examples highlighting this new capability will be discussed in concert with additional data fusion efforts.

  13. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    DTIC Science & Technology

    2015-09-03

    NPP) with the VIIRS sensor package as well as data from the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication Ocean and...capability • Prepare the NRT Geostationary Ocean Color Imager (GOCI) data stream for integration into operations. • Improvements in sensor...Navy (DON) Environmental Data Records (EDRs) Expeditionary Warfare (EXW) Geostationary Ocean Color Imager (GOCI) Gulf of Mexico (GOM) Hierarchical

  14. Emergency Response Fire-Imaging UAS Missions over the Southern California Wildfire Disaster

    NASA Technical Reports Server (NTRS)

    DelFrate, John H.

    2008-01-01

    Objectives include: Demonstrate capabilities of UAS to overfly and collect sensor data on widespread fires throughout the Western US. Demonstrate long-endurance mission capabilities (20+ hours). Image multiple fires (greater than 4 fires per mission) to showcase extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires. Demonstrate a new UAV-compatible, autonomous sensor for improved thermal characterization of fires. Provide automated, on-board, terrain- and geo-rectified sensor imagery over OTH satcom links to national fire personnel and incident commanders. Deliver real-time imagery (within 10 minutes of acquisition). Demonstrate capabilities of OTS technologies (Google Earth) to serve and display mission-critical sensor data, coincident with other pertinent data elements to facilitate information processing (WX data, ground asset data, other satellite data, R/T video, flight track info, etc.).

  15. Emergency Response Fire-Imaging UAS Missions over the Southern California Wildfire Disaster

    NASA Technical Reports Server (NTRS)

    Cobleigh, Brent R.

    2007-01-01

    Objectives include: Demonstrate capabilities of UAS to overfly and collect sensor data on widespread fires throughout the Western US. Demonstrate long-endurance mission capabilities (20+ hours). Image multiple fires (greater than 4 fires per mission) to showcase extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires. Demonstrate a new UAV-compatible, autonomous sensor for improved thermal characterization of fires. Provide automated, on-board, terrain- and geo-rectified sensor imagery over OTH satcom links to national fire personnel and incident commanders. Deliver real-time imagery (within 10 minutes of acquisition). Demonstrate capabilities of OTS technologies (Google Earth) to serve and display mission-critical sensor data, coincident with other pertinent data elements to facilitate information processing (WX data, ground asset data, other satellite data, R/T video, flight track info, etc.).

  16. Event-based Sensing for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.

    A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining in popularity in the field of artificial vision systems. These devices are inspired by the biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects (RSOs) from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both daytime and nighttime (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during daytime hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data rate. In addition to low-bandwidth communications requirements, their low weight, low power, and high speed make them ideally suited to meeting the demanding challenges of space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
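
    Since the output is an event stream rather than frames, downstream processing starts from a different data model. A minimal sketch of that model follows; the Event fields and the frame-accumulation helper are illustrative assumptions, not any particular camera's API.

      import numpy as np
      from dataclasses import dataclass

      @dataclass
      class Event:
          """One asynchronous event from a single pixel."""
          x: int
          y: int
          t_us: int      # microsecond timestamp (high temporal resolution)
          polarity: int  # +1 = log-intensity increase, -1 = decrease

      def accumulate(events, width, height, t0_us, t1_us):
          """Sum event polarities per pixel over a time window.

          Event sensors emit no frames; rendering a window of the stream
          like this is one common way to inspect it (e.g. a satellite
          streak in an SSA telescope trial).
          """
          frame = np.zeros((height, width), dtype=np.int32)
          for ev in events:
              if t0_us <= ev.t_us < t1_us:
                  frame[ev.y, ev.x] += ev.polarity
          return frame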

  17. Overview of the Shuttle Imaging Radar-B preliminary scientific results

    NASA Technical Reports Server (NTRS)

    Elachi, C.; Cimino, J.; Settle, M.

    1986-01-01

    Data collected with the Shuttle Imaging Radar-B (SIR-B) on the October 5, 1984 Shuttle mission are discussed. The design and capabilities of the sensor, which operates in a fixed illumination geometry with incidence angles selectable between 15 and 60 deg in 1 deg increments, are described. Problems encountered with the SIR-B during the mission are examined. The radar stereo imaging capability of the sensor was verified and three-dimensional images of the Earth's surface were obtained. The oceanography experiments provided significant data on ocean wave and internal wave patterns, oil spills, and ice zones. The geological images revealed that the sensor can evaluate penetration effects in dry soil using buried receivers, and the existence of subsurface dry channels in the Egyptian desert was validated. The use of multi-incidence-angle imaging to classify terrain units and to derive vegetation and terrain maps was confirmed.

  18. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors.

    PubMed

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-22

    The radiometric capability of on-orbit sensors should be updated regularly due to changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators; their radiometric calibrations have been updated only once a year based on a vicarious calibration procedure, which has limited the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed from the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in the reflective bands based on this procedure. The results are reasonable and can provide a reliable reference for the sensors' applications, which will promote the use of Chinese satellite data.
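
    The stability indicator described here reduces to fitting a long-term trend to TOA reflectance over a stable site. A minimal Python sketch follows; the site selection, spectral matching to MODIS, and uncertainty handling of the actual procedure are not reproduced.

      import numpy as np

      def stability_trend(days_since_launch, toa_reflectance):
          """Linear trend of TOA reflectance over an invariant site.

          The slope per year is a simple radiometric-stability
          indicator; a well-calibrated band should drift slowly.
          """
          slope_per_day, intercept = np.polyfit(
              days_since_launch, toa_reflectance, 1)
          return slope_per_day * 365.25, intercept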

  19. Distributed Estimation, Coding, and Scheduling in Wireless Visual Sensor Networks

    ERIC Educational Resources Information Center

    Yu, Chao

    2013-01-01

    In this thesis, we consider estimation, coding, and sensor scheduling for energy efficient operation of wireless visual sensor networks (VSN), which consist of battery-powered wireless sensors with sensing (imaging), computation, and communication capabilities. The competing requirements for applications of these wireless sensor networks (WSN)…

  20. Indian Ocean METOC Imager

    DTIC Science & Technology

    2002-09-30

    onr.navy.mil Mr. Wallace Harrison, GIFTS Program Manager NASA EO-3, New Millennium Program, Langley Research Center phone: 757-864-6680 fax: 757-864...Observing 3 Geostationary Imaging Fourier Transform Spectrometer (GIFTS) sensor development to provide this advanced capability. The IOMI program will...share costs for the GIFTS sensor development, the spacecraft bus, provide lifetime enhancements to the GIFTS sensor, and 1 Report Documentation Page

  1. A high-speed trapezoid image sensor design for continuous traffic monitoring at signalized intersection approaches.

    DOT National Transportation Integrated Search

    2014-10-01

    The goal of this project is to monitor traffic flow continuously with an innovative camera system composed of a custom-designed image sensor integrated circuit (IC) containing a trapezoid pixel array and a camera system that is capable of intelligent...

  2. Passive IR polarization sensors: a new technology for mine detection

    NASA Astrophysics Data System (ADS)

    Barbour, Blair A.; Jones, Michael W.; Barnes, Howard B.; Lewis, Charles P.

    1998-09-01

    The problem of mine and minefield detection continues to pose a significant challenge to sensor systems. Although various sensor technologies (infrared, ground-penetrating radar, etc.) may excel in certain situations, no single sensor technology can adequately detect mines in all conditions (time of day, weather, buried or surface-laid, etc.). A truly robust mine detection system will likely require the fusion of data from multiple sensor technologies. The performance of these systems, however, will ultimately depend on the performance of the individual sensors. Infrared (IR) polarimetry is a new and innovative sensor technology that adds substantial capabilities to the detection of mines. IR polarimetry improves on basic IR imaging by providing improved spatial resolution of the target, an inherent ability to suppress clutter, and the capability for zero-ΔT imaging. Nichols Research Corporation (Nichols) is currently evaluating the effectiveness of IR polarization for mine detection. This study is partially funded by the U.S. Army Night Vision & Electronic Sensors Directorate (NVESD). The goal of the study is to demonstrate, through phenomenology studies and limited field trials, that IR polarization outperforms conventional IR imaging in the mine detection arena.

  3. Sensor-based architecture for medical imaging workflow analysis.

    PubMed

    Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis

    2014-08-01

    The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in clinical practice, the information that can be extracted goes far beyond this use. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory through a network of intelligent sensors. The proposed integration framework follows a hybrid SOA architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository metadata, network workflows, and examination reports. Each sensor is responsible for converting unstructured information from its data source into a common format that is then semantically indexed in the framework engine. The platform was deployed in the cardiology department of a central hospital, allowing the identification of process characteristics and user behaviours that were unknown before this solution was put in place.

  4. Imaging sensor constellation for tomographic chemical cloud mapping.

    PubMed

    Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J

    2009-04-01

    A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
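
    The inversion step takes line densities measured along each sensor's lines of sight and recovers a gridded concentration field. The Python sketch below uses the Kaczmarz form of the algebraic reconstruction technique (ART) as a stand-in; the authors' actual computed-tomography algorithm is not specified in the abstract.

      import numpy as np

      def art_reconstruct(A, b, n_iters=50, relax=0.5):
          """Solve A x ~= b for a gridded concentration field x.

          A : (n_rays, n_cells) path length of each line of sight
              through each grid cell; b : measured column densities.
          """
          x = np.zeros(A.shape[1])
          row_norms = (A * A).sum(axis=1)
          for _ in range(n_iters):
              for i in range(A.shape[0]):
                  if row_norms[i] == 0:
                      continue
                  resid = b[i] - A[i] @ x
                  x += relax * resid / row_norms[i] * A[i]
          np.clip(x, 0, None, out=x)  # concentration is non-negative
          return x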

  5. A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi

    1997-01-01

    A low-power, high-speed smart sensor system based on a large-format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design composed of an APS sensor, a programmable neural processor, and an embedded microprocessor in an SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing at all levels, such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation per second of computing power, a two-order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.

  6. Autonomous collection of dynamically-cued multi-sensor imagery

    NASA Astrophysics Data System (ADS)

    Daniel, Brian; Wilson, Michael L.; Edelberg, Jason; Jensen, Mark; Johnson, Troy; Anderson, Scott

    2011-05-01

    The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration between wide-area survey detection, high-resolution inspection, and RF sensors that span large segments of the electromagnetic spectrum. The Naval Research Laboratory (NRL), in conjunction with the Space Dynamics Laboratory (SDL), is building sensors with such networked communications capability and is conducting field tests to demonstrate the feasibility of collaborative sensor data collection and exploitation. Example survey/detection sensors include: NuSAR (NRL Unmanned SAR), a UAV-compatible synthetic aperture radar system; microHSI, an NRL-developed lightweight hyperspectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large-array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL-developed EyePod, a high-resolution, narrow-FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.

  7. Performing data analytics on information obtained from various sensors on an OSUS compliant system

    NASA Astrophysics Data System (ADS)

    Cashion, Kelly; Landoll, Darian; Klawon, Kevin; Powar, Nilesh

    2017-05-01

    The Open Standard for Unattended Sensors (OSUS) was developed by DIA and ARL to provide a plug-and-play platform for sensor interoperability. Our objective is to use the standardized data produced by OSUS to perform data analytics on information obtained from various sensors. Data analytics can be integrated in one of three ways: within an asset itself; as an independent plug-in designed for one type of asset (e.g., a camera or a seismic sensor); or as an independent plug-in designed to incorporate data from multiple assets. As a proof of concept, we develop a model of the second type: an independent component for camera images. The dataset used was collected as part of a demonstration and test of OSUS capabilities. The image data includes images of empty outdoor scenes and scenes with human or vehicle activity. We design, train, and test a convolutional neural network (CNN) to analyze these images and assess the presence of activity. The resulting classifier labels input images as empty or activity with 86.93% accuracy, demonstrating promising opportunities for deep learning, machine learning, and predictive analytics as an extension of OSUS's already robust suite of capabilities.
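
    For concreteness, a minimal PyTorch version of such a binary activity classifier is sketched below. The architecture and input size are assumptions; the abstract does not describe the authors' network.

      import torch
      import torch.nn as nn

      class ActivityCNN(nn.Module):
          """Toy classifier: empty scene vs. activity."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.MaxPool2d(2),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(32, 2)  # logits: [empty, activity]

          def forward(self, x):  # x: (batch, 3, H, W) camera frames
              return self.classifier(self.features(x).flatten(1))

      logits = ActivityCNN()(torch.randn(4, 3, 128, 128))  # smoke test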

  8. CubeSat Nighttime Earth Observations

    NASA Astrophysics Data System (ADS)

    Pack, D. W.; Hardy, B. S.; Longcore, T.

    2017-12-01

    Satellite monitoring of visible emissions at night has been established as a useful capability for environmental monitoring and mapping the global human footprint. Pioneering work using Defense Meteorological Satellite Program (DMSP) sensors has been followed by new work using the more capable Visible Infrared Imaging Radiometer Suite (VIIRS). Beginning in 2014, we have been investigating the ability of small visible-light cameras on CubeSats to contribute to nighttime Earth science studies via point-and-stare imaging. This paper summarizes our recent research using a common suite of simple visible cameras on several AeroCube satellites to carry out nighttime observations of urban areas and natural gas flares, nighttime weather (including lightning), and fishing fleet lights. Example results include urban image examples, the utility of color imagery, urban lighting change detection, and multi-frame sequences imaging nighttime weather and large ocean areas with extensive fishing vessel lights. Our results show the potential for CubeSat sensors to improve monitoring of urban growth, light pollution, energy usage, the urban-wildland interface, the improvement of electrical power grids in developing countries, light-induced fisheries, and oil industry flare activity. In addition to orbital results, the nighttime imaging capabilities of new CubeSat sensors scheduled for launch in October 2017 are discussed.

  9. Bioinspired polarization navigation sensor for autonomous munitions systems

    NASA Astrophysics Data System (ADS)

    Giakos, G. C.; Quang, T.; Farrahi, T.; Deshpande, A.; Narayan, C.; Shrestha, S.; Li, Y.; Agarwal, M.

    2013-05-01

    Small unmanned aerial vehicles (SUAVs), micro air vehicles (MAVs), automated target recognition (ATR), and munitions guidance require extreme operational agility and robustness, which can be partially provided by efficient bioinspired imaging sensor designs capable of delivering enhanced guidance, navigation and control (GNC) capabilities. Bioinspired imaging technology can prove useful either for long-distance surveillance of targets in a cluttered environment, or at close distances limited by space surroundings and obstructions. The purpose of this study is to explore the phenomenology of image formation by different insect eye architectures, which would directly benefit the areas of defense and security, in the following four distinct areas: a) fabrication of the bioinspired sensor, b) optical architecture, c) topology, and d) artificial intelligence. The outcome of this study indicates that bioinspired imaging can impact the areas of defense and security significantly through dedicated designs fitting different combat scenarios and applications.

  10. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to the display of images and with respect to image analysis techniques. Regarding the display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
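
    The appeal of the logarithmic response is that scene radiance spanning many decades maps to a compact signal range, after which tone mapping prepares it for display. A schematic Python sketch follows; the response and tone-mapping models are idealized assumptions, not the evaluated sensor's transfer function.

      import numpy as np

      def log_response(illuminance, v0=0.4, slope=0.05):
          """Idealized logarithmic pixel response: V = v0 + slope * ln(E).

          Real devices add per-pixel offsets, which is what the
          nonuniformity correction (NUC) stage removes.
          """
          return v0 + slope * np.log(np.maximum(illuminance, 1e-12))

      def tone_map(v, out_bits=8):
          """Linear rescale of the log-domain signal for display."""
          lo, hi = v.min(), v.max()
          scale = (2**out_bits - 1) / max(hi - lo, 1e-12)
          return ((v - lo) * scale).astype(np.uint8)

      # Seven decades of illuminance compress into an 8-bit image.
      scene = np.logspace(-2, 5, num=256).reshape(16, 16)
      display = tone_map(log_response(scene))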

  11. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2002-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects (MLOs) that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long-term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist

  12. EOID Evaluation and Automated Target Recognition

    DTIC Science & Technology

    2001-09-30

    Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long-term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist the

  13. Changing requirements and solutions for unattended ground sensors

    NASA Astrophysics Data System (ADS)

    Prado, Gervasio; Johnson, Robert

    2007-10-01

    Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have shifted the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on uncooled detector arrays that have made their application in UGS practical in terms of cost and power consumption. Visible-light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
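
    The cueing arrangement described above is essentially a fusion rule: cheap, low-power modalities watch continuously and wake the imager only on agreement. A minimal Python sketch of such a rule follows; the threshold and the two-modality requirement are illustrative assumptions.

      from dataclasses import dataclass

      @dataclass
      class Detection:
          sensor_id: str     # e.g. "seismic-07", "pir-02" (hypothetical IDs)
          modality: str      # "acoustic" | "seismic" | "magnetic" | "pir"
          confidence: float  # detector score in [0, 1]

      def should_cue_imager(detections, threshold=0.6, min_modalities=2):
          """Cue the imager when several modalities agree on a target."""
          strong = {d.modality for d in detections
                    if d.confidence >= threshold}
          return len(strong) >= min_modalities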

  14. Communications for unattended sensor networks

    NASA Astrophysics Data System (ADS)

    Nemeroff, Jay L.; Angelini, Paul; Orpilla, Mont; Garcia, Luis; DiPierro, Stefano

    2004-07-01

    The future model of the US Army's Future Combat Systems (FCS) and the Future Force reflects a combat force that utilizes lighter armor protection than the current standard. Survival on the future battlefield will be increased by the use of advanced situational awareness provided by unattended tactical and urban sensors that detect, identify, and track enemy targets and threats. Successful implementation of these critical sensor fields requires the development of advanced sensors, sensor and data-fusion processors, and a specialized communications network. To ensure warfighter and asset survivability, the communications must be capable of near real-time dissemination of the sensor data using robust, secure, stealthy, and jam-resistant links so that proper and decisive action can be taken. Communications will be provided to a wide array of mission-specific sensors that are capable of processing data from acoustic, magnetic, seismic, and/or Chemical, Biological, Radiological, and Nuclear (CBRN) sensors. Other, more powerful sensor node configurations will be capable of fusing sensor data and intelligently collecting and processing image data from infrared or visual imaging cameras. The radio waveform and networking protocols being developed under the Soldier Level Integrated Communications Environment (SLICE) Soldier Radio Waveform (SRW) and the Networked Sensors for the Future Force Advanced Technology Demonstration are part of an effort to develop a common waveform family that will operate across multiple tactical domains including dismounted soldiers, ground sensors, munitions, missiles and robotics. These waveform technologies will ultimately be transitioned to the JTRS library, specifically the Cluster 5 requirement.

  15. Testing and evaluation of tactical electro-optical sensors

    NASA Astrophysics Data System (ADS)

    Middlebrook, Christopher T.; Smith, John G.

    2002-07-01

    As integrated electro-optical sensor payloads (multi-sensors) comprising infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. In order to meet these requirements, sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses descriptions of the multi-sensor payloads tested, the testing methods used and under development, and the different types of testing hardware and specific payload tests being developed and used at NAVSEA Crane.

  16. IR sensors and imagers in networked operations

    NASA Astrophysics Data System (ADS)

    Breiter, Rainer; Cabanski, Wolfgang

    2005-05-01

    "Network-centric Warfare" is a common slogan describing an overall concept of networked operation of sensors, information and weapons to gain command and control superiority. Referring to IR sensors, integration and fusion of different channels like day/night or SAR images or the ability to spread image data among various users are typical requirements. Looking for concrete implementations the German Army future infantryman IdZ is an example where a group of ten soldiers build a unit with every soldier equipped with a personal digital assistant (PDA) for information display, day photo camera and a high performance thermal imager for every unit. The challenge to allow networked operation among such a unit is bringing information together and distribution over a capable network. So also AIM's thermal reconnaissance and targeting sight HuntIR which was selected for the IdZ program provides this capabilities by an optional wireless interface. Besides the global approach of Network-centric Warfare network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA replacing analog video networks or specific point to point interfaces. The resulting architecture can provide capabilities of data fusion from e.g. IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study Ethernet technology was chosen for network implementation and a set of electronics was developed for capturing video data of IR and day imagers and Gigabit Ethernet video distribution. The demonstrator setup follows the requirements of current and future vehicles having a set of day and night imager cameras and a crew station with several members. Replacing the analog video path by a digital video network also makes it easy to implement embedded training by simply feeding the network with simulation data. The paper addresses the special capabilities, requirements and design considerations of IR sensors and imagers in applications like thermal weapon sights and UAVs for networked operating infantry forces.

  17. Towards establishing compact imaging spectrometer standards

    USGS Publications Warehouse

    Slonecker, E. Terrence; Allen, David W.; Resmini, Ronald G.

    2016-01-01

    Remote sensing science is currently undergoing a tremendous expansion in the area of hyperspectral imaging (HSI) technology. Spurred largely by the explosive growth of Unmanned Aerial Vehicles (UAVs), sometimes called Unmanned Aircraft Systems (UAS), or drones, HSI capabilities that once required access to one of only a handful of very specialized and expensive sensor systems are now miniaturized and widely available commercially. Small compact imaging spectrometers (CIS) now on the market offer a number of hyperspectral imaging capabilities in terms of spectral range and sampling. The potential uses of HSI/CIS on UAVs/UASs seem limitless. However, the rapid expansion of unmanned aircraft and small hyperspectral sensor capabilities has created a number of questions related to technological, legal, and operational capabilities. Lightweight sensor systems suitable for UAV platforms are being advertised in the trade literature at an ever-expanding rate, with no standardization of system performance specifications or terms of reference. To address this issue, both the U.S. Geological Survey and the National Institute of Standards and Technology are developing draft standards to meet these needs. This paper presents the outline of a combined USGS/NIST cooperative strategy to develop and test a characterization methodology to meet the needs of a new and expanding UAV/CIS/HSI user community.

  18. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors

    PubMed Central

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-01

    The radiometric capability of on-orbit sensors should be updated regularly due to changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators; their radiometric calibrations have been updated only once a year based on a vicarious calibration procedure, which has limited the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed from the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in the reflective bands based on this procedure. The results are reasonable and can provide a reliable reference for the sensors' applications, which will promote the use of Chinese satellite data. PMID:28117745

  19. The Solid State Image Sensor's Contribution To The Development Of Silicon Technology

    NASA Astrophysics Data System (ADS)

    Weckler, Gene P.

    1985-12-01

    Until recently, a solid-state image sensor with full television resolution was a dream. However, the dream of a solid-state image sensor has been a driving force in the development of silicon technology for more than twenty-five years. There are probably many in the mainstream of semiconductor technology who would argue with this; however, the solid-state image sensor was conceived years before the invention of the semiconductor RAM or the microprocessor (indeed, even before the invention of the integrated circuit). No other potential application envisioned at that time required such complexity. How could anyone have hoped in 1960 to make a semiconductor chip containing half a million picture elements, each capable of resolving eight to twelve bits of information and of readout rates in the tens of megapixels per second? As early as 1960, arrays of p-n junctions were being investigated as the optical targets in vidicon tubes, replacing the photoconductive targets. It took silicon technology several years to catch up with these dreamers.

  20. Electron-bombarded CCD detectors for ultraviolet atmospheric remote sensing

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1983-01-01

    Electronic image sensors based on charge-coupled devices operated in electron-bombarded mode, yielding real-time, remote-readout, photon-limited UV imaging capability, are being developed. The sensors also incorporate fast-focal-ratio Schmidt optics and opaque photocathodes, giving nearly the ultimate possible diffuse-source sensitivity. They can be used for direct imagery of atmospheric emission phenomena, and for imaging spectrography with moderate spatial and spectral resolution. The current state of instrument development, laboratory results, planned future developments, and proposed applications of the sensors in space flight instrumentation are described.

  1. Using the Optical Mouse Sensor as a Two-Euro Counterfeit Coin Detector

    PubMed Central

    Tresanchez, Marcel; Pallejà, Tomàs; Teixidó, Mercè; Palacín, Jordi

    2009-01-01

    In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short-distance image acquisition capabilities of the optical mouse sensor: partial images of the coin under analysis are compared with a set of partial reference coin images for matching. Results show that, using vision alone, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user. PMID:22399987
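
    The matching step pairs each short-range mouse-sensor image with stored reference patches. A Python sketch using zero-mean normalized cross-correlation is given below; the paper's actual similarity metric and acceptance threshold are not stated in the abstract, so both are assumptions.

      import numpy as np

      def ncc(patch, ref):
          """Zero-mean normalized cross-correlation of equal-size patches."""
          a = patch - patch.mean()
          b = ref - ref.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      def is_genuine(partial_images, reference_patches, accept=0.8):
          """Accept the coin if every acquired patch matches a reference."""
          return all(
              max(ncc(p, r) for r in reference_patches) >= accept
              for p in partial_images
          )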

  2. A Forest Fire Sensor Web Concept with UAVSAR

    NASA Astrophysics Data System (ADS)

    Lou, Y.; Chien, S.; Clark, D.; Doubleday, J.; Muellerschoen, R.; Zheng, Y.

    2008-12-01

    We developed a forest fire sensor web concept with a UAVSAR-based smart sensor and onboard automated response capability that allows us to monitor fire progression from coarse initial information provided by an external source. This autonomous disturbance detection and monitoring system combines the unique capabilities of imaging radar with high-throughput onboard processing technology and onboard automated response capability based on specific science algorithms. In this forest fire sensor web scenario, a fire is initially located by MODIS/RapidFire or a ground-based fire observer. This information is transmitted to the UAVSAR onboard automated response system (CASPER). CASPER generates a flight plan to cover the alerted fire area and executes the flight plan. The onboard processor generates a fuel load map from the raw radar data and, together with wind and elevation information, predicts the likely fire progression. CASPER then autonomously alters the flight plan to track the fire progression, providing this information to the firefighting team on the ground. We can also relay the precise fire location to other remote sensing assets with autonomous response capability, such as the hyperspectral imager on Earth Observing-1 (EO-1), to acquire fire data.
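
    The closed loop the abstract outlines (alert, plan, observe, predict, replan) can be summarized as a control skeleton. Every callable and attribute below is a hypothetical stand-in for a subsystem named in the text (CASPER planning, onboard fuel-load processing, spread prediction), not an actual interface.

      def monitor_fire(alert, plan_flight, acquire_fuel_map,
                       predict_spread, notify_ground):
          """Skeleton of the autonomous fire-tracking loop."""
          plan = plan_flight(alert.region)          # initial coverage plan
          while alert.active:
              fuel_map = acquire_fuel_map(plan)     # onboard radar processing
              front = predict_spread(fuel_map, alert.wind, alert.elevation)
              notify_ground(front)                  # downlink to fire crews
              plan = plan_flight(front.region)      # re-plan to track the fire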

  3. Concept of electro-optical sensor module for sniper detection system

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz

    2010-10-01

    The paper presents an initial concept of an electro-optical sensor unit for sniper detection. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. As part of a larger system it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capability. Acoustic (or radar) systems used for shot detection generally offer only after-the-shot information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems detecting pointed optics, are capable of detecting specific shooter signatures or at least the presence of suspicious objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters, such as focal plane array size and type, focal length and aperture, were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. In order to provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates as a support, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.

  4. The plenoptic camera as a wavefront sensor for the European Solar Telescope (EST)

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, Luis F.; Martín, Yolanda; Díaz, José J.; Piqueras, J.; Rodríguez-Ramos, J. M.

    2009-08-01

    The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information from different points of view simultaneously, and is capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. After a description of the working principle, a laboratory setup used to verify the capability of measuring the pupil-plane wavefront is presented. A comparative discussion with respect to other wavefront sensors is also included.

  5. Spaceborne imaging radar research in the 90's

    NASA Technical Reports Server (NTRS)

    Elachi, Charles

    1986-01-01

    The imaging radar experiments on SEASAT and on the space shuttle (SIR-A and SIR-B) have led to a wide interest in the use of spaceborne imaging radars in Earth and planetary sciences. The radar sensors provide unique information complementary to what is acquired with visible and infrared imagers. This includes subsurface imaging in arid regions, all-weather observation of ocean surface dynamic phenomena, structural mapping, soil moisture mapping, stereo imaging, and resulting topographic mapping. However, experiments up to now have exploited only a very limited range of the generic capability of radar sensors. With planned sensor developments in the late 80's and early 90's, a quantum jump will be made in our ability to fully exploit the potential of these sensors. These developments include: multiparameter research sensors such as SIR-C and X-SAR; long-term and global monitoring sensors such as ERS-1, JERS-1, EOS, Radarsat, GLORI and the spaceborne sounder; planetary mapping sensors such as the Magellan and Cassini/Titan mappers; topographic three-dimensional imagers such as the scanning radar altimeter; and three-dimensional rain mapping. These sensors and their associated research are briefly described.

  6. Operational calibration and validation of landsat data continuity mission (LDCM) sensors using the image assessment system (IAS)

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron

    2010-01-01

    Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and was recently extended to the Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and the Multispectral Scanner (MSS) sensors on board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to the unique calibration and validation attributes of the mission's on-board imaging sensors. The LDCM IAS will have to handle a significantly larger number of detectors, and the associated database, than previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.

  7. Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12

    DTIC Science & Technology

    2015-09-03

    Fragmentary record excerpts: the report covers the Geostationary Ocean Color Imager (GOCI) sensor aboard the Communication Ocean and Meteorological Satellite (COMS); improvements to the AOPS mosaicking capability; and preparation of the NRT Geostationary Ocean Color Imager capability, which works in conjunction with AOPS. Listed acronyms include Expeditionary Warfare (EXW), Geostationary Ocean Color Imager (GOCI), Gulf of Mexico (GOM), Hierarchical Data Format (HDF), and Integrated Data Processing System (IDPS).

  8. Development of a handheld widefield hyperspectral imaging (HSI) sensor for standoff detection of explosive, chemical, and narcotic residues

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew P.; Basta, Andrew; Patil, Raju; Klueva, Oksana; Treado, Patrick J.

    2013-05-01

    The utility of hyperspectral imaging (HSI) passive chemical detection employing wide-field, standoff imaging continues to be advanced in detection applications. With a drive for reduced SWaP (Size, Weight, and Power) and increased speed of detection and sensitivity, a handheld platform that is robust and user-friendly extends the detection capabilities available to the end user. In addition, easy-to-use handheld detectors could improve the effectiveness of locating and identifying threats while reducing risks to the individual. ChemImage Sensor Systems (CISS) has developed the HSI Aperio™ sensor for real-time, wide-area surveillance and standoff detection of explosives, chemical threats, and narcotics in both government and commercial contexts. Employing liquid crystal tunable filter technology, the HSI system has an intuitive user interface that produces automated detections and a real-time display of threats, with a user-created library of threat signatures that is easily updated to cover new hazardous materials. Unlike existing detection technologies that often require close proximity for sensing, endangering operators and costly equipment, the handheld sensor allows the individual operator to detect threats from a safe distance. Uses of the sensor include locating production facilities of illegal drugs or IEDs by identifying materials on surfaces such as walls, floors, and doors, deposits on production tools, and residue on individuals. In addition, the sensor can be used for longer-range standoff applications such as hasty checkpoint or vehicle inspection of residue materials on surfaces or bulk material identification. The CISS Aperio™ sensor has faster data collection, faster image processing, and increased detection capability compared to previous sensors.

  9. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications-Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology that uses simulated imagery to augment the field-testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and the atmospheric visibility and environmental conditions existing at the time of testing. Existing simulation capabilities such as the Digital Imaging and Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost and time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as signatures collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been applied to the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which algorithm performance agrees between simulated and field-collected imagery is the first step in validating the simulated-imagery procedure.

  10. Compact and portable X-ray imager system using Medipix3RX

    NASA Astrophysics Data System (ADS)

    Garcia-Nathan, T. B.; Kachatkou, A.; Jiang, C.; Omar, D.; Marchal, J.; Changani, H.; Tartoni, N.; van Silfhout, R. G.

    2017-10-01

    In this paper the design and implementation of a novel portable X-ray imager system is presented. The design features a direct X-ray detection scheme that makes use of a hybrid detector (Medipix3RX). By taking advantage of the capabilities of the Medipix3RX, such as high resolution, zero dead-time, single-photon detection, and charge-sharing mode, the imager achieves better resolution and higher sensitivity than traditional indirect detection schemes. A detailed description of the system is presented; it consists of a vacuum chamber containing the sensor, an electronic board for temperature management, conditioning, and readout of the sensor, and a data processing unit that also handles the network connection and communicates with clients by acting as a server. A field programmable gate array (FPGA) device implements the readout protocol for the Medipix3RX; apart from readout, the FPGA can perform complex image processing functions such as feature extraction, histogramming, profiling, and image compression at high speed. The temperature of the sensor is monitored and controlled through a PID algorithm driving a Peltier cooler, improving the energy resolution and response stability of the sensor. Without data compression, the system is capable of transferring 680 profiles/s or 240 images/s in continuous mode. The equalization procedures and tests of the colour mode are also presented. For the experimental measurements the Medipix3RX was used with a silicon sensor layer. One of the tested applications of the system is as an X-ray beam position monitor (XBPM) for synchrotron beamlines. The XBPM allows non-destructive real-time measurement of the beam position, size, and intensity. A Kapton foil placed in the beam path scatters radiation towards a pinhole camera setup that allows the sensor to obtain an image of the beam. By using profiles of the synchrotron X-ray beam, high-frequency movement of the beam position can be studied, up to 340 Hz. The system can also make an independent measurement of the beam energy by using the Medipix3RX variable energy threshold feature.
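
    The PID temperature loop mentioned above is straightforward to sketch; the gains, sampling period, and hardware-access functions below are hypothetical placeholders, not values from the paper.

    ```python
    # Minimal PID sketch for driving a Peltier cooler toward a setpoint.
    # read_sensor_temp() and set_peltier_drive() are hypothetical hardware hooks.
    def pid_step(setpoint, measured, state, kp=2.0, ki=0.1, kd=0.5, dt=0.1):
        error = setpoint - measured
        state["integral"] += error * dt                   # accumulate I term
        derivative = (error - state["prev_error"]) / dt   # finite-difference D term
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev_error": 0.0}
    # Control loop (pseudo-usage):
    # while True:
    #     drive = pid_step(20.0, read_sensor_temp(), state)
    #     set_peltier_drive(drive)
    ```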

  11. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
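
    The detector-effects chain described above (shot noise, read noise, saturation, quantization) can be illustrated with a minimal sketch; the parameter values are illustrative assumptions, not PROXOR's actual settings.

    ```python
    # Illustrative detector-noise stage: Poisson shot noise, Gaussian read
    # noise, full-well saturation, and ADC quantization. Values are assumed.
    import numpy as np

    rng = np.random.default_rng(0)

    def apply_detector_effects(signal_e, read_noise_e=10.0, full_well=80_000, bits=12):
        noisy = rng.poisson(signal_e).astype(float)             # shot noise
        noisy += rng.normal(0.0, read_noise_e, signal_e.shape)  # read noise
        noisy = np.clip(noisy, 0.0, full_well)                  # saturation
        lsb = full_well / (2**bits - 1)                         # ADC step size
        return np.round(noisy / lsb) * lsb                      # quantization
    ```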

  12. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS): Sensor improvements for 1994 and 1995

    NASA Technical Reports Server (NTRS)

    Sarture, C. M.; Chrien, T. G.; Green, R. O.; Eastwood, M. L.; Raney, J. J.; Hernandez, M. A.

    1995-01-01

    AVIRIS is a NASA-sponsored Earth remote sensing imaging spectrometer designed, built, and operated by the Jet Propulsion Laboratory (JPL). While AVIRIS has been operational since 1989, major improvements have been made to most of the sensor subsystems during the winter maintenance cycles. As a consequence of these efforts, the capability of AVIRIS to reliably acquire and deliver consistently high quality, calibrated imaging spectrometer data continues to improve annually and is now significantly better than in 1989. Improvements to AVIRIS prior to 1994 have been described previously. This paper details recent and planned improvements to AVIRIS in the sensor task.

  13. Integrated sensor with frame memory and programmable resolution for light adaptive imaging

    NASA Technical Reports Server (NTRS)

    Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2004-01-01

    An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
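
    For a shot-noise-limited signal, summing an n × n patch multiplies the signal by n² and the noise by n, so SNR grows by a factor of n; the sketch below, with hypothetical patch-selection logic, illustrates the idea behind the patent's neighboring-pixel summation.

    ```python
    # Sketch of neighboring-pixel summation for light-adaptive resolution.
    import numpy as np

    def bin_pixels(img, n):
        """Sum n x n patches; resolution drops, SNR rises ~n (shot-noise limit)."""
        h, w = img.shape
        h, w = h - h % n, w - w % n   # trim to a multiple of the patch size
        return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

    def choose_patch_size(mean_signal_e, target_snr):
        """Smallest n whose binned SNR (n * sqrt(signal)) meets the target."""
        n = 1
        while n * np.sqrt(mean_signal_e) < target_snr:
            n += 1
        return n
    ```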

  14. UTOFIA: an underwater time-of-flight image acquisition system

    NASA Astrophysics Data System (ADS)

    Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris

    2017-10-01

    In this article the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, the pixel structure that best suits the requirements has been selected. In an extensive underwater characterization, the capability of distance measurement in turbid environments is demonstrated.
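
    The depth recovered by range gating follows directly from the round-trip travel time of light in water; the sketch below assumes a refractive index of about 1.33 and an illustrative gate delay.

    ```python
    # Range-gating geometry: distance from round-trip gate delay in water.
    C = 299_792_458.0   # speed of light in vacuum, m/s
    N_WATER = 1.33      # assumed refractive index of water

    def gate_delay_to_range(delay_s):
        """Convert a round-trip gate delay (s) into target distance (m) in water."""
        return C * delay_s / (2.0 * N_WATER)

    print(gate_delay_to_range(50e-9))  # ~5.6 m for a 50 ns gate delay
    ```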

  15. Ikhana UAS Overview

    NASA Technical Reports Server (NTRS)

    Rivas, Mauricio

    2017-01-01

    Ikhana demonstrates the capability of UAS to overfly and collect sensor data on widespread fires throughout the Western US, and also demonstrates long-endurance mission capability (20+ hours). Ikhana images multiple fires (more than 4 fires per mission) to showcase an extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires. Ikhana also demonstrates a new UAV-compatible, autonomous sensor for improved thermal characterization of fires, and provides automated, on-board, terrain- and geo-rectified sensor imagery over-the-horizon via SATCOM links to national fire personnel and incident commanders.

  16. DUSTER: demonstration of an integrated LWIR-VNIR-SAR imaging system

    NASA Astrophysics Data System (ADS)

    Wilson, Michael L.; Linne von Berg, Dale; Kruer, Melvin; Holt, Niel; Anderson, Scott A.; Long, David G.; Margulis, Yuly

    2008-04-01

    The Naval Research Laboratory (NRL) and Space Dynamics Laboratory (SDL) are executing a joint effort, DUSTER (Deployable Unmanned System for Targeting, Exploitation, and Reconnaissance), to develop and test a new tactical sensor system specifically designed for Tier II UAVs. The system is composed of two coupled near-real-time sensors: EyePod (a VNIR/LWIR ball gimbal) and NuSAR (an L-band synthetic aperture radar). EyePod consists of a jitter-stabilized LWIR sensor coupled with a dual focal-length optical system and a bore-sighted high-resolution VNIR sensor. The dual focal-length design, coupled with precision pointing and step-stare capabilities, enables EyePod to conduct wide-area survey and high-resolution inspection missions in a single flight pass. NuSAR is being developed with partners Brigham Young University (BYU) and Artemis, Inc., and consists of a wideband L-band SAR capable of large-area survey and embedded real-time image formation. Both sensors employ standard Ethernet interfaces and provide geo-registered NITFS output imagery. In the fall of 2007, field tests were conducted with both sensors, the results of which will be presented.

  17. Cross delay line sensor characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, Israel J; Remelius, Dennis K; Tiee, Joe J

    There exists a wealth of information in the scientific literature on the physical properties and device characterization procedures for complementary metal oxide semiconductor (CMOS), charge coupled device (CCD), and avalanche photodiode (APD) format detectors. Numerous papers and books have also treated photocathode operation in the context of photomultiplier tube (PMT) operation for either non-imaging applications or limited night vision capability. However, much less information has been reported in the literature about the characterization procedures and properties of photocathode detectors with novel cross delay line (XDL) anode structures. These allow one to detect single photons and create images by recording space and time coordinate (X, Y & T) information. In this paper, we report on the physical characteristics and performance of a cross delay line anode sensor with an enhanced near-infrared wavelength response photocathode and a high dynamic range micro channel plate (MCP) gain (> 10^6) multiplier stage. Measurement procedures and results are presented, including the device dark event rate (DER), pulse height distribution, quantum and electronic device efficiency (QE & DQE), and spatial resolution per effective pixel region in a 25 mm sensor array. The overall knowledge and information obtained from XDL sensor characterization allow us to optimize device performance and assess capability. These device performance properties and capabilities make XDL detectors ideal for remote sensing field applications that require single photon detection, imaging, sub-nanosecond timing response, high spatial resolution (tens of microns), and large effective image format.
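
    The (X, Y & T) decoding of a delay-line anode rests on arrival-time differences at the two ends of each line; the propagation constant in the sketch below is a hypothetical value for illustration, not a measured property of this sensor.

    ```python
    # Sketch of delay-line position decoding: the time difference between the
    # two ends of each delay line is proportional to the photon's coordinate.
    def decode_event(t_x1, t_x2, t_y1, t_y2, v_eff_mm_per_ns=1.0):
        """Arrival times (ns) at both ends of the X and Y lines -> (x, y, t)."""
        x = 0.5 * v_eff_mm_per_ns * (t_x1 - t_x2)   # mm, centered on the array
        y = 0.5 * v_eff_mm_per_ns * (t_y1 - t_y2)
        t = 0.25 * (t_x1 + t_x2 + t_y1 + t_y2)      # event timestamp, ns
        return x, y, t
    ```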

  18. A bio-image sensor for simultaneous detection of multi-neurotransmitters.

    PubMed

    Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki

    2018-03-01

    We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distributions of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor and their prevention capability is demonstrated. The results are used to design the spacing among enzyme-immobilized pixels and the null H+ sensor so as to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+ diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor it is possible to customize monitoring of the activities of various neurochemicals by using different kinds of proton-consuming or proton-generating enzymes.

  19. Log polar image sensor in CMOS technology

    NASA Astrophysics Data System (ADS)

    Scheffer, Danny; Dierickx, Bart; Pardo, Fernando; Vlummens, Jan; Meynants, Guy; Hermans, Lou

    1996-08-01

    We report on the design, design issues, fabrication, and performance of a log-polar CMOS image sensor. The sensor is developed for use in a videophone system for deaf and hearing-impaired people, who cannot communicate through a 'normal' telephone. The system allows 15 detailed images per second to be transmitted over existing telephone lines. This frame rate is sufficient for conversations by means of sign language or lip reading. The pixel array of the sensor consists of 76 concentric circles with (up to) 128 pixels per circle, 8013 pixels in total. The interior pixels have a pitch of 14 micrometers, growing to 250 micrometers at the border. The 8013-pixel image is mapped (log-polar transformation) into an X-Y addressable 76 by 128 array.
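
    A log-polar pixel address grows exponentially in radius and linearly in angle; the sketch below maps Cartesian coordinates to a (ring, sector) address with hypothetical geometric constants rather than the chip's exact layout.

    ```python
    # Illustrative log-polar mapping onto a 76-ring x 128-sector address space.
    import numpy as np

    def log_polar_address(x, y, r_min=0.1, growth=1.05, sectors=128, rings=76):
        r = np.hypot(x, y)
        theta = np.arctan2(y, x) % (2 * np.pi)
        ring = int(np.log(max(r, r_min) / r_min) / np.log(growth))  # exponential radius
        sector = int(theta / (2 * np.pi) * sectors)                 # uniform angle
        return min(ring, rings - 1), sector
    ```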

  20. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  1. Electric potential and electric field imaging

    NASA Astrophysics Data System (ADS)

    Generazio, E. R.

    2017-02-01

    The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for "illuminating" volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to reveal characterization of electric charge distribution, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  2. Protection performance evaluation regarding imaging sensors hardened against laser dazzling

    NASA Astrophysics Data System (ADS)

    Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd

    2015-05-01

    Electro-optical imaging sensors are widespread and serve many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability, so an adequate protection mechanism for electro-optical sensors against dazzling and damage is highly desirable. Several protection technologies exist, but none of them satisfies the operational requirements without constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches, based on triangle orientation discrimination on the one hand and structural similarity on the other. For both approaches, image analysis algorithms are applied to images of a standard test scene with triangular test patterns, superimposed by dazzling laser light at various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
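
    Of the two evaluation approaches, the structural-similarity one is easy to sketch with scikit-image; the interpretation of the score as a dazzle metric follows the abstract, but the function wrapper itself is an assumption, not the authors' exact protocol.

    ```python
    # SSIM-based dazzle scoring sketch; assumes same-shape uint8 grayscale images.
    from skimage.metrics import structural_similarity

    def dazzle_score(reference_img, dazzled_img):
        """SSIM between undisturbed and dazzled test-scene images.
        1.0 means no degradation; values near 0 mean heavy disturbance."""
        score, _ssim_map = structural_similarity(reference_img, dazzled_img, full=True)
        return score
    ```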

  3. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of the Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior speed and complexity performance. The coding efficiency and compression speed enlarge the effective capacity of the signal transmission channels, which allows onboard hardware to be reduced by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
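
    The two ideas named above, interpolation prediction and Golomb-Rice coding of residuals, are easy to sketch; the neighbor choice, zigzag mapping, and the parameter k below are illustrative assumptions, not the flight implementation.

    ```python
    # Sketch: predict a pixel from its neighbors, then Golomb-Rice-code the residual.
    def predict(img, r, c):
        # Average of left and upper neighbors; borders need separate handling.
        return (img[r][c - 1] + img[r - 1][c]) // 2

    def golomb_rice_encode(value, k):
        """Non-negative integer -> unary quotient + k-bit binary remainder."""
        q, rem = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + format(rem, f"0{k}b")

    def encode_pixel(img, r, c, k=3):
        residual = img[r][c] - predict(img, r, c)
        mapped = 2 * residual if residual >= 0 else -2 * residual - 1  # zigzag map
        return golomb_rice_encode(mapped, k)
    ```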

  4. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE PAGES

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...

    2017-01-12

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  5. Photon counting phosphorescence lifetime imaging with TimepixCam.

    PubMed

    Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  6. Photon counting phosphorescence lifetime imaging with TimepixCam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  7. Photon counting phosphorescence lifetime imaging with TimepixCam

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei

    2017-01-01

    TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.

  8. Regional Sediment Management Experiment Using the Visible/Infrared Imager/Radiometer Suite and the Landsat Data Continuity Mission Sensor

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    The central aim of this RPC (Rapid Prototyping Capability) experiment is to demonstrate the use of the VIIRS (Visible/Infrared Imager/Radiometer Suite) and LDCM (Landsat Data Continuity Mission) sensors as key inputs to the RSM (Regional Sediment Management) GIS (geographic information system) DSS (Decision Support System). The project supports the Coastal Management National Application.

  9. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has progressed from early short linear arrays. Such devices require high-performance signal processing electronics to meet today's thermal imaging requirements for military and paramilitary applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high-performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  10. Wavelength-Scanning SPR Imaging Sensors Based on an Acousto-Optic Tunable Filter and a White Light Laser

    PubMed Central

    Zeng, Youjun; Wang, Lei; Wu, Shu-Yuen; He, Jianan; Qu, Junle; Li, Xuejin; Ho, Ho-Pui; Gu, Dayong; Gao, Bruce Zhi; Shao, Yonghong

    2017-01-01

    A fast surface plasmon resonance (SPR) imaging biosensor system based on wavelength interrogation using an acousto-optic tunable filter (AOTF) and a white light laser is presented. The system combines the merits of a wide dynamic detection range and the high sensitivity offered by the spectral approach with multiplexed high-throughput data collection from a two-dimensional (2D) biosensor array. The key feature is the use of the AOTF to realize a wavelength scan from a white laser source and thus to achieve fast tracking of the SPR dip movement caused by target molecules binding to the sensor surface. Experimental results show that the system is capable of completing an SPR dip measurement within 0.35 s. To the best of our knowledge, this is the fastest time ever reported in the literature for imaging spectral interrogation. Based on a spectral window with a width of approximately 100 nm, a dynamic detection range and resolution of 4.63 × 10−2 refractive index units (RIU) and 1.27 × 10−6 RIU, respectively, achieved in a 2D-array sensor are reported here. The spectral SPR imaging sensor scheme is capable of performing fast high-throughput detection of biomolecular interactions from 2D sensor arrays. The design has no mechanical moving parts, making the scheme completely solid-state. PMID:28067766

  11. Electric Potential and Electric Field Imaging with Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2016-01-01

    The technology and techniques for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for "illuminating" volumes to be inspected with EFI. The baseline sensor technology, the electric field sensor (e-sensor), and its construction, optional electric field generation (quasistatic generator), and current e-sensor enhancements (ephemeral e-sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to reveal characterization of electric charge distribution, creating a new field of study that embraces areas of interest including electrostatic discharge mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, inspection of containers, inspection for hidden objects, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  12. The Design of Optical Sensor for the Pinhole/Occulter Facility

    NASA Technical Reports Server (NTRS)

    Greene, Michael E.

    1990-01-01

    Three optical sight sensor systems were designed, built, and tested. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The systems are for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from the Space Shuttle or Space Station. The first sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and simply counting pixels until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. It consists of a white light source, a mirror, and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, together with amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated in the same pixel-counting manner.

  13. Toroidal sensor arrays for real-time photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Bychkov, Anton S.; Cherepetskaya, Elena B.; Karabutov, Alexander A.; Makarov, Vladimir A.

    2017-07-01

    This article addresses the theoretical and numerical investigation of image formation in photoacoustic (PA) imaging with complex-shaped concave sensor arrays. The spatial resolution and the size of the sensitivity region of PA and laser ultrasonic (LU) imaging systems are assessed using sensitivity maps and spatial resolution maps in the image plane. The paper also discusses the relationship between the size of the high-sensitivity region and the spatial resolution of real-time imaging systems utilizing toroidal arrays. It is shown that the use of arrays with toroidal geometry significantly improves the diagnostic capabilities of PA and LU imaging for investigating biological objects, rocks, and composite materials.

  14. HIRIS (High-Resolution Imaging Spectrometer: Science opportunities for the 1990s. Earth observing system. Volume 2C: Instrument panel report

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The high-resolution imaging spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to the MODIS (moderate-resolution imaging spectrometer) and HMMR (high-resolution multifrequency microwave radiometer), lower resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of the HIRIS are discussed along with the synergism of the sensor with other EOS instruments and data handling and processing requirements.

  15. The application of remote sensing techniques: Technical and methodological issues

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C.; Wagner, T. W.

    1974-01-01

    Capabilities and limitations of modern imaging electromagnetic sensor systems are outlined, and the products of such systems are compared with those of the traditional aerial photographic system. Focus is given to the interface between the rapidly developing remote sensing technology and the information needs of operational agencies, and communication gaps are shown to retard early adoption of the technology by these agencies. An assessment is made of the current status of imaging remote sensors and their potential for the future. Public sources of remote sensor data and several cost comparisons are included.

  16. The Spaceborne Imaging Radar program: SIR-C - The next step toward EOS

    NASA Technical Reports Server (NTRS)

    Evans, Diane; Elachi, Charles; Cimino, Jobea

    1987-01-01

    The NASA Shuttle Imaging Radar SIR-C experiments will investigate earth surface and environment phenomena to deepen understanding of terra firma, biosphere, hydrosphere, cryosphere, and atmosphere components of the earth system, capitalizing on the observational capabilities of orbiting multiparameter radar sensors alone or in combination with other sensors. The SIR-C sensor encompasses an antenna array, an exciter, receivers, a data-handling network, and the ground SAR processor. It will be possible to steer the antenna beam electronically, so that the radar look angle can be varied.

  17. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct the split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual-sensor systems to provide a false-target-detection capability, thus significantly reducing system complexity and cost.

  18. Compact survey and inspection day/night image sensor suite for small unmanned aircraft systems (EyePod)

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.

    2010-04-01

    EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.

  19. CMOS active pixel sensor type imaging system on a chip

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)

    2011-01-01

    A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.

  20. Human perception testing methodology for evaluating EO/IR imaging systems

    NASA Astrophysics Data System (ADS)

    Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.

    2018-04-01

    The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.

  1. Active State Model for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Park, Han; Chien, Steve; Zak, Michail; James, Mark; Mackey, Ryan; Fisher, Forest

    2003-01-01

    The concept of the active state model (ASM) is an architecture for the development of advanced integrated fault-detection-and-isolation (FDI) systems for robotic land vehicles, pilotless aircraft, exploratory spacecraft, or other complex engineering systems that will be capable of autonomous operation. An FDI system based on the ASM concept would not only provide traditional diagnostic capabilities, but also integrate the FDI system under a unified framework and provide a mechanism for sharing information between FDI subsystems to fully assess the overall health of the system. The ASM concept begins with definitions borrowed from psychology, wherein a system is regarded as active when it possesses self-image, self-awareness, and an ability to make decisions by itself, such that it is able to perform purposeful motions and other transitions with some degree of autonomy from the environment. For an engineering system, self-image would manifest itself as the ability to determine nominal values of sensor data by use of a mathematical model of itself, and self-awareness would manifest itself as the ability to relate sensor data to their nominal values. The ASM for such a system may start with the closed-loop control dynamics that describe the evolution of state variables. As soon as this model is supplemented with nominal values of sensor data, it possesses self-image. The ability to process the current sensor data and compare them with the nominal values represents self-awareness. On the basis of self-image and self-awareness, the ASM provides the capability for self-identification, detection of abnormalities, and self-diagnosis.

  2. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off between spatial resolution and temporal resolution by using random space-time sampling. However, most of these studies showed higher-frame-rate video produced by simulation experiments or by an optically simulated random-sampling camera, because no commercially available image sensors with random exposure or sampling capabilities currently exist. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row within each 8x8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
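
    Under the line-wise constraints described above, a sampling pattern can be drawn per 8 × 8 block with the exposure start set by column and the duration by row; the frame length and random draws in the sketch below are illustrative assumptions, not the prototype's actual control scheme.

    ```python
    # Sketch of a quasi pixel-wise exposure mask: within an 8x8 block, the
    # exposure start is a per-column value and the duration a per-row value.
    import numpy as np

    rng = np.random.default_rng(1)

    def block_exposure_mask(frame_len=32, block=8):
        starts = rng.integers(0, frame_len - block, size=block)  # one start per column
        lengths = rng.integers(1, block + 1, size=block)         # one duration per row
        mask = np.zeros((block, block, frame_len), dtype=bool)
        for r in range(block):
            for c in range(block):
                mask[r, c, starts[c]:starts[c] + lengths[r]] = True
        return mask  # True where the pixel integrates light
    ```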

  3. Capability of long-distance 100 GHz FMCW using a single GDD lamp sensor.

    PubMed

    Levanon, Assaf; Rozban, Daniel; Aharon Akram, Avihai; Kopeika, Natan S; Yitzhaky, Yitzhak; Abramovich, Amir

    2014-12-20

    Millimeter wave (MMW)-based imaging systems are required for applications in medicine, homeland security, concealed weapon detection, and space technology. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The radar approach requires that the millimeter wave detector be able to operate as a heterodyne detector. Since the source of radiation is a frequency modulated continuous wave (FMCW), the heterodyne-detected signal yields the object's depth information through the value of the difference frequency, in addition to the reflectance of the 2D image. New experiments show the capability of long-distance FMCW detection using a large-scale Cassegrain projection system, described first (to our knowledge) in this paper. The system demonstrates operation over a distance of at least 20 m with a low-cost plasma-based glow discharge detector (GDD) focal plane array (FPA). Each point on the object corresponds to a point in the image and includes the distance information. This will enable relatively inexpensive 3D MMW imaging.
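
    The depth information comes from the standard linear-FMCW relation between beat frequency and range; the chirp parameters in the sketch below are illustrative, not the system's actual values.

    ```python
    # Linear FMCW ranging: R = c * f_b * T / (2 * B) for chirp bandwidth B
    # swept over duration T, with measured beat (difference) frequency f_b.
    C = 299_792_458.0  # speed of light, m/s

    def beat_to_range(f_beat_hz, bandwidth_hz, sweep_s):
        return C * f_beat_hz * sweep_s / (2.0 * bandwidth_hz)

    print(beat_to_range(2.0e3, 6.0e9, 1.0e-3))  # 0.05 m for a 2 kHz beat
    ```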

  4. Airborne Electro-Optical Sensor Simulation System. Final Report.

    ERIC Educational Resources Information Center

    Hayworth, Don

    The total system capability, including all the special-purpose and general-purpose hardware comprising the Airborne Electro-Optical Sensor Simulation (AEOSS) System, is described. The functional relationship between the hardware portions is described, together with the interface to the software portion of the computer image generation. Supporting rationale…

  5. Detecting higher-order wavefront errors with an astigmatic hybrid wavefront sensor.

    PubMed

    Barwick, Shane

    2009-06-01

    The reconstruction of wavefront errors from measurements over subapertures can be made more accurate if a fully characterized quadratic surface can be fitted to the local wavefront surface. An astigmatic hybrid wavefront sensor with added neural network postprocessing is shown to have this capability, provided that the focal image of each subaperture is sufficiently sampled. Furthermore, complete local curvature information is obtained with a single image without splitting beam power.

  6. Real-time three-dimensional space video rate sensors for millimeter wave imaging based on very inexpensive plasma LED lamps

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many applications in fields such as homeland security, medicine, communications, military products, and space technology, mainly because this radiation penetrates well and propagates through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse other materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low, and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based focal plane arrays. The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both for direct detection and limited to fixed imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/s and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) scheme in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be performed through a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527, 3 mm diameter Ne indicator lamps) as pixel detectors. All three sensors are fully supported by a software graphical user interface (GUI). They were tested and characterized with different kinds of optical systems for imaging applications, super resolution, and calibration methods. The 16 × 16 sensor can employ a chirp-radar-like method to produce depth and reflectance information in the image, enabling 3D MMW imaging in real time at video frame rate. In this work we demonstrate different kinds of optical imaging systems with 3D imaging capability at short range and at longer distances of at least 10-20 meters.

  7. Active microwave remote sensing research program plan. Recommendations of the Earth Resources Synthetic Aperture Radar Task Force. [application areas: vegetation canopies, surface water, surface morphology, rocks and soils, and man-made structures

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A research program plan developed by the Office of Space and Terrestrial Applications to provide guidelines for a concentrated effort to improve the understanding of the measurement capabilities of active microwave imaging sensors, and to define the role of such sensors in future Earth observations programs is outlined. The focus of the planned activities is on renewable and non-renewable resources. Five general application areas are addressed: (1) vegetation canopies, (2) surface water, (3) surface morphology, (4) rocks and soils, and (5) man-made structures. Research tasks are described which, when accomplished, will clearly establish the measurement capabilities in each area, and provide the theoretical and empirical results needed to specify and justify satellite systems using imaging radar sensors for global observations.

  8. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot-mounted system capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted onto a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high-quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system will be presented, along with initial detection results of the fusion sensor.

  9. A Low-Signal-to-Noise-Ratio Sensor Framework Incorporating Improved Nighttime Capabilities in DIRSIG

    NASA Astrophysics Data System (ADS)

    Rizzuto, Anthony P.

    When designing new remote sensing systems, it is difficult to make apples-to-apples comparisons between designs because of the number of sensor parameters that can affect the final image. Using synthetic imagery and a computer sensor model allows for comparisons to be made between widely different sensor designs or between competing design parameters. Little work has been done in fully modeling low-SNR systems end-to-end for these types of comparisons. Currently DIRSIG has limited capability to accurately model nighttime scenes under new moon conditions or near large cities. An improved DIRSIG scene modeling capability is presented that incorporates all significant sources of nighttime radiance, including new models for urban glow and airglow, both taken from the astronomy community. A low-SNR sensor modeling tool is also presented that accounts for sensor components and noise sources to generate synthetic imagery from a DIRSIG scene. The various sensor parameters that affect SNR are discussed, and example imagery is shown with the new sensor modeling tool. New low-SNR detectors have recently been designed and marketed for remote sensing applications. A comparison of system parameters for a state-of-the-art low-SNR sensor is discussed, and a sample design trade study is presented for a hypothetical scene and sensor.

  10. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
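
    The keying scheme described above chains each frame to the previously recorded one. The sketch below captures that chaining with a simple XOR cipher standing in for the patent's unspecified encryption circuit; the XOR choice and all names are assumptions for illustration.

      import numpy as np

      def encrypt_stream(frames):
          # Encrypt each frame using the previously recorded frame as key.
          # XOR is a stand-in for the unspecified encryption circuit;
          # rebinding `stored` mimics the write circuit overwriting the key
          # frame location with the newly encrypted frame.
          stored = np.zeros_like(frames[0])   # initial contents of image memory
          for frame in frames:
              encrypted = np.bitwise_xor(frame, stored)
              stored = encrypted              # overwrite the key frame location
              yield encrypted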

  11. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    PubMed

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented at each pixel. The analog signal processing at each pixel is therefore a tailored design for LDBF signals, with a balanced optimization of signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
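
    For context, conventional laser Doppler flowmetry computes a perfusion estimate as the first moment of the AC power spectrum of the photocurrent, normalized by the squared DC level; the per-pixel normalization and AC amplification described above serve this kind of computation. A textbook sketch (not the chip's exact pipeline):

      import numpy as np

      def perfusion_index(i_pd, fs, f_lo=20.0, f_hi=20e3):
          # first moment of the AC power spectrum over the Doppler band,
          # normalized by DC**2: proportional to concentration * speed
          dc = i_pd.mean()
          spec = np.abs(np.fft.rfft(i_pd - dc))**2
          f = np.fft.rfftfreq(i_pd.size, 1.0 / fs)
          band = (f >= f_lo) & (f <= f_hi)
          return np.sum(f[band] * spec[band]) / dc**2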

  12. Characterization of modulated time-of-flight range image sensors

    NASA Astrophysics Data System (ADS)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2009-01-01

    A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10-100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
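
    Such homodyne sensors typically sample the correlation waveform at four phase offsets (0, 90, 180, 270 degrees) per pixel; the standard four-bucket demodulation below recovers phase and range and is consistent with, though not taken from, the paper:

      import numpy as np

      C = 299792458.0   # speed of light, m/s

      def tof_range(a0, a1, a2, a3, f_mod):
          # per-pixel phase of the modulation envelope from the four samples
          phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
          # range, unambiguous up to c / (2 * f_mod)
          return C * phase / (4 * np.pi * f_mod)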

  13. The eyes of LITENING

    NASA Astrophysics Data System (ADS)

    Moser, Eric K.

    2016-05-01

    LITENING is an airborne system-of-systems providing long-range imaging, targeting, situational awareness, target tracking, weapon guidance, and damage assessment, incorporating a laser designator and laser range finders, as well as non-thermal and thermal imaging systems, with multi-sensor boresight. Robust operation is at a premium, and subsystems are partitioned to modular, swappable line-replaceable-units (LRUs) and shop-replaceable-units (SRUs). This presentation will explore design concepts for sensing, data storage, and presentation of imagery associated with the LITENING targeting pod. The "eyes" of LITENING are the electro-optic sensors. Since the initial LITENING II introduction to the US market in the late 90s, as the program has evolved and matured, a series of spiral functional improvements and sensor upgrades have been incorporated. These include laser-illuminated imaging, and more recently, color sensing. While aircraft displays are outside of the LITENING system, updates to the available viewing modules have also driven change, and resulted in increasingly effective ways of utilizing the targeting system. One of the latest LITENING spiral upgrades adds a new capability to display and capture visible-band color imagery, using new sensors. This is an augmentation to the system's existing capabilities, which operate over a growing set of visible and invisible colors, infrared bands, and laser line wavelengths. A COTS visible-band camera solution using a CMOS sensor has been adapted to meet the particular needs associated with the airborne targeting use case.

  14. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a new communication concept, called "vision communication," based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques; cognitive communication is therefore possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal such as visible, infrared, or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
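
    On the receive side the scheme reduces to locating the LED grid via the sync data and then thresholding each LED cell in every snapshot. A simplified sketch, assuming the sync processing has already cropped and undistorted the array region and that plain on-off keying is used:

      import numpy as np

      def decode_snapshot(img, rows, cols):
          # one bit per LED cell from a rectified grayscale snapshot
          h, w = img.shape
          bits = np.zeros((rows, cols), dtype=np.uint8)
          for r in range(rows):
              for c in range(cols):
                  cell = img[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
                  bits[r, c] = cell.mean() > img.mean()   # global threshold
          return bits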

  15. Laser beam welding quality monitoring system based in high-speed (10 kHz) uncooled MWIR imaging sensors

    NASA Astrophysics Data System (ADS)

    Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo

    2015-05-01

    The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automated processes aimed at zero-defect manufacturing demand smarter heads, in which lasers, optics, actuators, sensors and electronics are integrated in a single compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process; temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and its spatial distribution. This work describes the results of using an innovative low-cost high-speed infrared imager based on the first quantum infrared imager monolithically integrated with a Si-CMOS ROIC on the market. The sensor is able to provide low-resolution images at frame rates up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. To demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt pool images to be registered at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored, and the classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost high-speed infrared imagers in advancing toward real-time, in-line zero-defect production systems.

  16. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has an impressive collection of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone can be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study lays the groundwork for determining the capabilities of a smartphone camera acting as a star camera. Orientations determined from star images taken with a smartphone camera are compared to those of higher-quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager so that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated by identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  17. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    NASA Astrophysics Data System (ADS)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that are compliant with STANAG 7023, which may be used to test ground station functionality.

  18. Detection of electromagnetic radiation using micromechanical multiple quantum wells structures

    DOEpatents

    Datskos, Panagiotis G [Knoxville, TN; Rajic, Slobodan [Knoxville, TN; Datskou, Irene [Knoxville, TN

    2007-07-17

    An apparatus and method for detecting electromagnetic radiation employs a deflectable micromechanical apparatus incorporating multiple quantum wells structures. When photons strike the quantum-well structure, physical stresses are created within the sensor, similar to a "bimetallic effect." The stresses cause the sensor to bend. The extent of deflection of the sensor can be measured through any of a variety of conventional means to provide a measurement of the photons striking the sensor. A large number of such sensors can be arranged in a two-dimensional array to provide imaging capability.

  19. Next Generation Advanced Video Guidance Sensor

    NASA Technical Reports Server (NTRS)

    Lee, Jimmy; Spencer, Susan; Bryan, Tom; Johnson, Jimmie; Robertson, Bryan

    2008-01-01

    The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. The United States now has a mature and flight-proven sensor technology for supporting Crew Exploration Vehicle (CEV) and Commercial Orbital Transportation Services (COTS) Automated Rendezvous and Docking (AR&D). AVGS has a proven pedigree, based on extensive ground testing and flight demonstrations. The AVGS on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in "spot mode" out to 2 km. The first-generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next-generation sensor must be updated to support the CEV and COTS programs. The flight-proven AR&D sensor is being redesigned to update parts and add capabilities for CEV and COTS with the development of the Next Generation AVGS (NGAVGS) at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation-tolerant parts. In addition, new capabilities might include greater sensor range, auto ranging, and real-time video output. This paper presents an approach to sensor hardware trades, the use of highly integrated laser components, and the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, parts selection and test plans for the NGAVGS are addressed to provide a highly reliable, flight-qualified sensor. Expanded capabilities through innovative use of existing capabilities are also discussed.

  20. Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.

    PubMed

    Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander

    2012-01-01

    Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.

  2. GPR Imaging for Deeply Buried Objects: A Comparative Study Based on FDTD Models and Field Experiments

    NASA Technical Reports Server (NTRS)

    Tilley, Roger; Dowla, Farid; Nekoogar, Faranak; Sadjadpour, Hamid

    2012-01-01

    Conventional use of Ground Penetrating Radar (GPR) is hampered by variations in background environmental conditions, such as the water content of soil, resulting in poor repeatability of results over long periods of time when the radar pulse characteristics are kept the same. Target object types include voids, tunnels, unexploded ordnance, etc. The long-term objective of this work is to develop methods that extend the use of GPR under various environmental and soil conditions, provided an optimal set of radar parameters (such as frequency, bandwidth, and sensor configuration) is adaptively employed based on the ground conditions. Toward that objective, developing Finite Difference Time Domain (FDTD) GPR models, verified by experimental results, would allow us to develop analytical and experimental techniques to control radar parameters and obtain consistent GPR images under changing ground conditions. Reported here is an attempt at developing 2D and 3D FDTD models of buried targets verified by two different radar systems capable of operating over different soil conditions. The experimental radar data employed were from a custom-designed high-frequency (200 MHz) multi-static sensor platform capable of producing 3-D images, and a longer-wavelength (25 MHz) COTS radar (Pulse EKKO 100) capable of producing 2-D images. Our results indicate that different types of radar can produce consistent images.
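
    The FDTD method referred to above marches Maxwell's equations on a staggered grid. A minimal 1D Yee update in normalized units, with an illustrative soil half-space (real GPR models are 2D/3D with absorbing boundaries and dispersive, lossy soils):

      import numpy as np

      nz, nt = 400, 1000
      eps = np.ones(nz); eps[200:] = 9.0     # soil with relative permittivity 9
      ez = np.zeros(nz); hy = np.zeros(nz - 1)
      for n in range(nt):
          hy += 0.5 * (ez[1:] - ez[:-1])                    # H from curl E
          ez[1:-1] += 0.5 / eps[1:-1] * (hy[1:] - hy[:-1])  # E from curl H
          ez[50] += np.exp(-((n - 60) / 20.0)**2)           # soft Gaussian source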

  3. Stellar Gyroscope for Determining Attitude of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Hancock, Bruce; Liebe, Carl; Mellstrom, Jeffrey

    2005-01-01

    A paper introduces the concept of a stellar gyroscope, currently at an early stage of development, for determining the attitude or spin axis and spin rate of a spacecraft. Like star trackers, which are commercially available, a stellar gyroscope would capture and process images of stars to determine the orientation of a spacecraft in celestial coordinates. Star trackers utilize charge-coupled devices as image detectors and are capable of tracking attitudes at spin rates of no more than a few degrees per second, with update rates typically <5 Hz. In contrast, a stellar gyroscope would utilize an active-pixel sensor as an image detector and would be capable of tracking attitude at a slew rate as high as 50 deg/s, with an update rate as high as 200 Hz. Moreover, a stellar gyroscope would be capable of measuring a slew rate up to 420 deg/s. Whereas a Sun sensor and a three-axis mechanical gyroscope are typically needed to complement a star tracker, a stellar gyroscope would function without them; consequently, the mass, power consumption, and mechanical complexity of an attitude-determination system could be reduced considerably.

  4. Astronomical Polarimetry with the RIT Polarization Imaging Camera

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry V.; Ninkov, Zoran; Brock, Neal

    2018-06-01

    In the last decade, imaging polarimeters based on micropolarizer arrays have been developed for use in terrestrial remote sensing and metrology applications. Micropolarizer-based sensors are dramatically smaller and more mechanically robust than other polarimeters with similar spectral response and snapshot capability. To determine the suitability of these new polarimeters for astronomical applications, we developed the RIT Polarization Imaging Camera to investigate the performance of these devices, with special attention to the low signal-to-noise regime. We characterized the device performance in the lab by determining the relative throughput, efficiency, and orientation of every pixel, as a function of wavelength. Using the resulting pixel response model, we developed demodulation procedures for aperture photometry and imaging polarimetry observing modes. We found that, using the current calibration, RITPIC is capable of detecting polarization signals as small as ∼0.3%. The relative ease of data collection, calibration, and analysis provided by these sensors suggests that they may become an important tool for a number of astronomical targets.
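
    Micropolarizer arrays interleave 0/45/90/135 degree analyzers in 2x2 superpixels, so the linear Stokes parameters follow directly from neighboring pixels. An idealized demodulation sketch (the layout is an assumption, and RITPIC's published procedure uses a calibrated per-pixel response model rather than this ideal one):

      import numpy as np

      def linear_stokes(img):
          # assume 2x2 superpixels laid out as [[0, 45], [135, 90]] degrees
          i0, i45 = img[0::2, 0::2], img[0::2, 1::2]
          i135, i90 = img[1::2, 0::2], img[1::2, 1::2]
          s0 = 0.5 * (i0 + i45 + i90 + i135)
          s1, s2 = i0 - i90, i45 - i135
          dolp = np.sqrt(s1**2 + s2**2) / s0     # degree of linear polarization
          aolp = 0.5 * np.arctan2(s2, s1)        # angle of linear polarization
          return s0, dolp, aolp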

  5. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
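
    Photon arrivals at a jot are Poisson distributed, so raw binary read-outs are biased; one standard way to extract unbiased bits is von Neumann debiasing, sketched below as an illustration of the principle (the paper's own post-processing may differ):

      import numpy as np

      def debias(bits):
          # von Neumann extractor: (0,1) -> 0, (1,0) -> 1, drop equal pairs
          pairs = bits[: bits.size // 2 * 2].reshape(-1, 2)
          keep = pairs[:, 0] != pairs[:, 1]
          return pairs[keep, 0]

      jots = (np.random.poisson(1.0, 10000) > 0).astype(np.uint8)  # stand-in data
      print(debias(jots)[:16])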

  6. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
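
    Color leakage of this kind is commonly modeled as a 3x3 mixing of the ideal channels; measuring the crosstalk matrix and inverting it per pixel recovers the unmixed channels. A sketch consistent with, but not necessarily identical to, the authors' correction scheme:

      import numpy as np

      def unmix(rgb, m):
          # rgb: (H, W, 3) observed image; m: measured 3x3 crosstalk matrix
          # with m[i, j] = response of sensor channel i to LED color j
          flat = rgb.reshape(-1, 3).T            # 3 x N pixel matrix
          return np.linalg.solve(m, flat).T.reshape(rgb.shape)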

  7. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  8. pyBSM: A Python package for modeling imaging systems

    NASA Astrophysics Data System (ADS)

    LeMaster, Daniel A.; Eismann, Michael T.

    2017-05-01

    There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with openCV to evaluate performance of an algorithm used to detect objects in an image.
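
    To give a flavor of the GIQE use-case, the published GIQE 4 form can be evaluated standalone (this is the general equation, not pyBSM's own API, and the inputs below are made up):

      import math

      def giqe4(gsd_m, rer, overshoot_h, gain_g, snr):
          # GIQE 4 NIIRS prediction; GSD converted from meters to inches
          gsd_in = gsd_m / 0.0254
          a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
          return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
                  - 0.656 * overshoot_h - 0.344 * gain_g / snr)

      print(giqe4(gsd_m=0.5, rer=0.9, overshoot_h=1.0, gain_g=1.0, snr=50))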

  9. Engineering workstation: Sensor modeling

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Sweet, B.

    1993-01-01

    The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.

  10. Smart sensing surveillance system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen

    2010-04-01

    An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design element for raising awareness levels, improving the performance of the system, and adapting to a changing scenario and environment. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from near omni-directions. These distributed sensor nodes can work cooperatively to send immediate security information when new objects appear. When new objects are detected, the S4 smartly selects the available node with a Pan-Tilt-Zoom (PTZ) electro-optical/infrared (EO/IR) camera to track the objects and capture associated imagery. The S4 provides applicable advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept / Low Probability of Detection) wireless mesh network using Ultra-wideband (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The Service Oriented Architecture of the S4 enables remote applications to interact with the S4 network and use specific presentation methods. In addition, the S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards to efficiently discover, access, use, and control heterogeneous sensors and their metadata. These S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. The S4 system is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as to applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  11. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
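
    Each stage of such an image chain applies a transfer function and injects the noise appropriate to that stage. A minimal single-stage sketch under the usual assumptions (MTF applied in the frequency domain, Poisson shot noise, Gaussian read noise); the multi-stage model described above chains several of these:

      import numpy as np

      def sensor_stage(radiance, mtf, gain=1.0, read_noise=2.0, rng=None):
          # mtf: array matching the FFT of the input (stage transfer function)
          rng = rng or np.random.default_rng()
          blurred = np.fft.ifft2(np.fft.fft2(radiance) * mtf).real
          electrons = rng.poisson(np.clip(gain * blurred, 0, None)).astype(float)
          return electrons + rng.normal(0.0, read_noise, electrons.shape)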

  12. NASA 2007 Western States Fire Missions (WSFM)

    NASA Technical Reports Server (NTRS)

    Buoni, Greg

    2008-01-01

    This viewgraph presentation describes the Western States Fire Missions (WSFM) that occurred in 2007. The objectives of the missions were: (1) Demonstrate capabilities of UAS to overfly and collect sensor data on widespread fires throughout the Western US. (2) Demonstrate long-endurance mission capabilities (20 hours+). (3) Image multiple fires (greater than 4 fires per mission) to showcase an extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires. (4) Demonstrate a new UAV-compatible, autonomous sensor for improved thermal characterization of fires. (5) Provide automated, on-board, terrain- and geo-rectified sensor imagery over OTH satcom links to national fire personnel and incident commanders. (6) Deliver real-time imagery (within 10 minutes of acquisition). (7) Demonstrate capabilities of OTS technologies (GoogleEarth) to serve and display mission-critical sensor data, coincident with other pertinent data elements, to facilitate information processing (WX data, ground asset data, other satellite data, R/T video, flight track info, etc.).

  13. Thermal bioaerosol cloud tracking with Bayesian classification

    NASA Astrophysics Data System (ADS)

    Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.

    2017-05-01

    The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Grounds. Standoff detection at a range of 700m was achieved for as little as 500g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
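
    The recursive Bayesian classifier at the core of the scheme keeps a running threat probability per track and updates it with each new observation. A generic binary-hypothesis sketch (the paper's likelihood models and inputs are richer):

      def bayes_update(p_threat, lik_threat, lik_clutter):
          # one recursion: posterior from prior and the likelihood of the
          # latest observation under each hypothesis
          num = lik_threat * p_threat
          return num / (num + lik_clutter * (1.0 - p_threat))

      p = 0.1                                    # prior for a new track
      for lt, lc in [(0.8, 0.3), (0.7, 0.2), (0.9, 0.4)]:  # made-up likelihoods
          p = bayes_update(p, lt, lc)
      print(p)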

  14. Time stamping of single optical photons with 10 ns resolution

    NASA Astrophysics Data System (ADS)

    Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin; Hodges, Diedra R.; Nguyen, Jayke; Nomerotski, Andrei

    2017-05-01

    High spatial and temporal resolution are key features for many modern applications, e.g. mass spectrometry, probing the structure of materials via neutron scattering, studying molecular structure, etc. Fast imaging also provides the capability of coincidence detection, and the further addition of sensitivity to single optical photons with the capability of timestamping them further broadens the field of potential applications. Photon counting is already widely used in X-ray imaging, where the high energy of the photons makes their detection easier. TimepixCam is a novel optical imager which achieves high spatial resolution using an array of 256 × 256 pixels of 55 μm × 55 μm, each with individually controlled functionality. It is based on a thin-entrance-window silicon sensor bump-bonded to a Timepix ASIC. TimepixCam provides high quantum efficiency in the optical wavelength range (400-1000 nm). We perform the timestamping of single photons with a time resolution of 20 ns by coupling TimepixCam to a fast image intensifier with a P47 phosphor screen. The fast emission time of the P47 allows us to preserve good time resolution while maintaining the capability to focus the optical output of the intensifier onto the 256 × 256 pixel Timepix sensor area. We demonstrate the capability of the (TimepixCam + image intensifier) setup to provide high-resolution single-photon timestamping, with an effective frame rate of 50 MHz.

  15. Fusion of radar and ultrasound sensors for concealed weapons detection

    NASA Astrophysics Data System (ADS)

    Felber, Franklin S.; Davis, Herbert T., III; Mallon, Charles E.; Wild, Norbert C.

    1996-06-01

    An integrated radar and ultrasound sensor, capable of remotely detecting and imaging concealed weapons, is being developed. A modified frequency-agile, mine-detection radar is intended to specify with high probability of detection at ranges of 1 to 10 m which individuals in a moving crowd may be concealing metallic or nonmetallic weapons. Within about 1 to 5 m, the active ultrasound sensor is intended to enable a user to identify a concealed weapon on a moving person with low false-detection rate, achieved through a real-time centimeter-resolution image of the weapon. The goal for sensor fusion is to have the radar acquire concealed weapons at long ranges and seamlessly hand over tracking data to the ultrasound sensor for high-resolution imaging on a video monitor. We have demonstrated centimeter-resolution ultrasound images of metallic and non-metallic weapons concealed on a human at ranges over 1 m. Processing of the ultrasound images includes filters for noise, frequency, brightness, and contrast. A frequency-agile radar has been developed by JAYCOR under the U.S. Army Advanced Mine Detection Radar Program. The signature of an armed person, detected by this radar, differs appreciably from that of the same person unarmed.

  16. Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor

    USGS Publications Warehouse

    Chander, G.; Meyer, D.J.; Helder, D.L.

    2004-01-01

    As part of the Earth Observing-1 (EO-1) mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 ALI to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first approach involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second approach uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the relative gains of the ALI sensor chip assemblies agree with the ETM+ visible and near-infrared bands to within 2% and with the shortwave infrared bands to within 4%.
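
    The simultaneous-observation approach reduces, per band, to regressing one sensor's top-of-atmosphere radiances against the other's over the common area; the zero-intercept slope is the relative gain. A simplified sketch of that statistic (the published method also handles registration, spectral band differences, and uncertainty):

      import numpy as np

      def relative_gain(l_etm, l_ali):
          # zero-intercept least-squares slope of ALI vs. ETM+ radiance
          x, y = l_etm.ravel(), l_ali.ravel()
          return float(x @ y / (x @ x))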

  17. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    NASA Technical Reports Server (NTRS)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in case one or more satellites in the formation become inoperable. It will obtain a solution that approaches the exact solution, as opposed to one with the linearization approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.

  18. Vision requirements for Space Station applications

    NASA Technical Reports Server (NTRS)

    Crouse, K. R.

    1985-01-01

    Problems which will be encountered by computer vision systems in Space Station operations are discussed, along with solutions being examined at the Johnson Space Center. Lighting cannot be controlled in space, nor can the random presence of reflective surfaces. Task-oriented capabilities are to include docking to moving objects, identification of unexpected objects during autonomous flights to different orbits, and diagnoses of damage and repair requirements for autonomous Space Station inspection robots. The approaches being examined to provide these and other capabilities are television IR sensors, advanced pattern recognition programs feeding on data from laser probes, laser radar for robot eyesight, and arrays of SMART sensors for automated location and tracking of target objects. Attention is also being given to liquid crystal light valves for optical processing of images for comparison with on-board electronic libraries of images.

  19. Terrain Commander: a next-generation remote surveillance system

    NASA Astrophysics Data System (ADS)

    Finneral, Henry J.

    2003-09-01

    Terrain Commander is a fully automated forward observation post that provides the most advanced capability in surveillance and remote situational awareness. The Terrain Commander system was selected by the Australian Government for its NINOX Phase IIB Unattended Ground Sensor Program with the first systems delivered in August of 2002. Terrain Commander offers next generation target detection using multi-spectral peripheral sensors coupled with autonomous day/night image capture and processing. Subsequent intelligence is sent back through satellite communications with unlimited range to a highly sophisticated central monitoring station. The system can "stakeout" remote locations clandestinely for 24 hours a day for months at a time. With its fully integrated SATCOM system, almost any site in the world can be monitored from virtually any other location in the world. Terrain Commander automatically detects and discriminates intruders by precisely cueing its advanced EO subsystem. The system provides target detection capabilities with minimal nuisance alarms combined with the positive visual identification that authorities demand before committing a response. Terrain Commander uses an advanced beamforming acoustic sensor and a distributed array of seismic, magnetic and passive infrared sensors to detect, capture images and accurately track vehicles and personnel. Terrain Commander has a number of emerging military and non-military applications including border control, physical security, homeland defense, force protection and intelligence gathering. This paper reviews the development, capabilities and mission applications of the Terrain Commander system.

  20. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as by resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suitable for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
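
    If the sensor modalities are conditionally independent given the target state, per-sensor likelihood maps on a common grid fuse by multiplication, i.e. log-likelihoods add cell by cell. A minimal sketch of that fusion rule:

      import numpy as np

      def fuse_log_likelihood(maps, eps=1e-12):
          # maps: list of (H, W) per-sensor likelihood arrays on one grid
          return sum(np.log(m + eps) for m in maps)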

  1. Acoustically modulated magnetic resonance imaging of gas-filled protein nanostructures

    NASA Astrophysics Data System (ADS)

    Lu, George J.; Farhadi, Arash; Szablowski, Jerzy O.; Lee-Gosselin, Audrey; Barnes, Samuel R.; Lakshmanan, Anupama; Bourdeau, Raymond W.; Shapiro, Mikhail G.

    2018-05-01

    Non-invasive biological imaging requires materials capable of interacting with deeply penetrant forms of energy such as magnetic fields and sound waves. Here, we show that gas vesicles (GVs), a unique class of gas-filled protein nanostructures with differential magnetic susceptibility relative to water, can produce robust contrast in magnetic resonance imaging (MRI) at sub-nanomolar concentrations, and that this contrast can be inactivated with ultrasound in situ to enable background-free imaging. We demonstrate this capability in vitro, in cells expressing these nanostructures as genetically encoded reporters, and in three model in vivo scenarios. Genetic variants of GVs, differing in their magnetic or mechanical phenotypes, allow multiplexed imaging using parametric MRI and differential acoustic sensitivity. Additionally, clustering-induced changes in MRI contrast enable the design of dynamic molecular sensors. By coupling the complementary physics of MRI and ultrasound, this nanomaterial gives rise to a distinct modality for molecular imaging with unique advantages and capabilities.

  2. Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging

    NASA Astrophysics Data System (ADS)

    Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi

    2018-02-01

    We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences of the PA waves detected by these four sensors, a set of PA signals originating only from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan-direction-dependent variation of PA signals was about ±20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 μm formed in an acrylic block co-polymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venules in the skin. Vasculature in rat burn models and healthy human skin was also clearly visualized in vivo.

  3. All-optical endoscopic probe for high resolution 3D photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Zhang, E.; Desjardins, A. E.; Beard, P. C.

    2017-03-01

    A novel all-optical forward-viewing photoacoustic probe using a flexible coherent fibre-optic bundle and a Fabry-Perot (FP) ultrasound sensor has been developed. The fibre bundle, along with the FP sensor at its distal end, synthesizes a high density 2D array of wideband ultrasound detectors. Photoacoustic waves arriving at the sensor are spatially mapped by optically scanning the proximal end face of the bundle in 2D with a CW wavelength-tunable interrogation laser. 3D images are formed from the detected signals using a time-reversal image reconstruction algorithm. The system has been characterized in terms of its PSF, noise-equivalent pressure and field of view. Finally, the high resolution 3D imaging capability has been demonstrated using arbitrarily shaped phantoms and a duck embryo.

  4. Validation of Inertial and Optical Navigation Techniques for Space Applications with UAVS

    NASA Astrophysics Data System (ADS)

    Montaño, J.; Wis, M.; Pulido, J. A.; Latorre, A.; Molina, P.; Fernández, E.; Angelats, E.; Colomina, I.

    2015-09-01

    PERIGEO is an R&D project, funded by the INNPRONTA 2011-2014 programme of the Spanish CDTI, which investigates the use of UAV technologies and processes for the validation of space-oriented technologies. For this purpose, among different space missions and technologies, a set of activities for absolute and relative navigation is being carried out to address the attitude and position estimation problem using a temporal image sequence from a camera in the visible spectrum and/or a Light Detection and Ranging (LIDAR) sensor. The process is covered entirely: sensor measurements and data acquisition (images, LiDAR ranges and angles), data pre-processing (calibration and co-registration of camera and LIDAR data), feature and landmark extraction from the images, and image/LiDAR-based state estimation. In addition to the image processing area, a classical navigation system based on inertial sensors is also included in the research. The reason for combining both approaches is to retain navigation capability in environments or missions where a radio beacon or reference signal such as GNSS is not available (for example, an atmospheric flight on Titan). The rationale behind the combination is that the two systems complement each other. The INS is capable of providing accurate position, velocity, and full attitude estimates at high data rates, but it needs an absolute reference observation to compensate for the time-accumulated errors caused by inertial sensor inaccuracies. Imaging observables, on the other hand, can provide absolute and relative position and attitude estimates, but they require the sensor head to point toward the ground (something that may not be possible if the carrying platform is maneuvering) and they cannot deliver the hundreds of Hz that an INS can. This mutual complementarity has been exploited in PERIGEO by combining the two into one system. The inertial navigation system implemented in PERIGEO is based on a classical loosely coupled INS/GNSS approach, very similar to the implementation of the INS/imaging navigation system mentioned above. The activities envisaged in PERIGEO cover algorithm development and validation and technology testing on UAVs under representative conditions. Past activities have covered the design and development of the algorithms and systems. This paper presents the most recent activities and results in the area of image processing for robust estimation within PERIGEO, related to the definition of the hardware platforms (including sensors) and their integration in UAVs. Results for the tests performed during the flight campaigns in representative outdoor environments will also be presented and analyzed (the tests are to be performed at the time of full-paper submission), together with a roadmap for future developments.
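
    In a loosely coupled architecture, the INS propagates the state at a high rate and each absolute fix (GNSS, or here an image/LiDAR-based position) corrects it through a Kalman update. A one-dimensional position/velocity sketch of that loop (illustrative, not PERIGEO's filter):

      import numpy as np

      def ins_predict(x, p, accel, dt, q):
          # propagate [position, velocity] with an accelerometer reading
          f = np.array([[1.0, dt], [0.0, 1.0]])
          x = f @ x + np.array([0.5 * dt**2, dt]) * accel
          return x, f @ p @ f.T + q

      def fix_update(x, p, z_pos, r):
          # fuse an absolute position fix with the inertial prediction
          h = np.array([[1.0, 0.0]])
          s = h @ p @ h.T + r
          k = p @ h.T / s
          x = x + (k * (z_pos - h @ x)).ravel()
          return x, (np.eye(2) - k @ h) @ p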

  5. The coronagraphic Modal Wavefront Sensor: a hybrid focal-plane sensor for the high-contrast imaging of circumstellar environments

    NASA Astrophysics Data System (ADS)

    Wilby, M. J.; Keller, C. U.; Snik, F.; Korkiakoski, V.; Pietrow, A. G. M.

    2017-01-01

    The raw coronagraphic performance of current high-contrast imaging instruments is limited by the presence of a quasi-static speckle (QSS) background, resulting from instrumental Non-Common Path Errors (NCPEs). Rapid development of efficient speckle subtraction techniques in data reduction has enabled final contrasts of up to 10^-6 to be obtained; however, it remains preferable to eliminate the underlying NCPEs at the source. In this work we introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront sensor suitable for real-time NCPE correction. It combines the Apodizing Phase Plate (APP) coronagraph with a holographic modal wavefront sensor to provide simultaneous coronagraphic imaging and focal-plane wavefront sensing with the science point-spread function. We first characterise the baseline performance of the cMWS via idealised closed-loop simulations, showing that the sensor is able to successfully recover diffraction-limited coronagraph performance over an effective dynamic range of ±2.5 radians root-mean-square (rms) wavefront error within 2-10 iterations, with performance independent of the specific choice of mode basis. We then present the results of initial on-sky testing at the William Herschel Telescope, which demonstrate that the sensor is capable of NCPE sensing under realistic seeing conditions via the recovery of known static aberrations to an accuracy of 10 nm (0.1 radians) rms error in the presence of a dominant atmospheric speckle foreground. We also find that the sensor is capable of real-time measurement of broadband atmospheric wavefront variance (50% bandwidth, 158 nm rms wavefront error) at a cadence of 50 Hz over an uncorrected telescope sub-aperture. When combined with a suitable closed-loop adaptive optics system, the cMWS holds the potential to deliver an improvement of up to two orders of magnitude over the uncorrected QSS floor. Such a sensor would be eminently suitable for the direct imaging and spectroscopy of exoplanets with both existing and future instruments, including EPICS and METIS for the E-ELT.
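
    In a holographic modal sensor of this kind, each sensed aberration mode is multiplexed into a pair of focal-plane spots biased by equal and opposite amounts, and the normalized intensity difference of each pair gives a first-order estimate of that mode's coefficient. A minimal readout sketch, in which the spot positions, window size and gain are illustrative assumptions:

        import numpy as np

        def modal_estimates(image, spot_pairs, radius, gain=1.0):
            """Estimate modal coefficients from a cMWS-style focal-plane image.

            spot_pairs : list of ((y+, x+), (y-, x-)) pixel centres for the
                         +bias and -bias holographic spots of each mode
            radius     : half-size of the square window summed around each spot
            """
            coeffs = []
            for (yp, xp), (ym, xm) in spot_pairs:
                ip = image[yp - radius:yp + radius + 1, xp - radius:xp + radius + 1].sum()
                im = image[ym - radius:ym + radius + 1, xm - radius:xm + radius + 1].sum()
                coeffs.append(gain * (ip - im) / (ip + im))   # normalized spot difference
            return np.array(coeffs)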

  6. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate spline method for sequential DWFS: first, an average-slope measurement model based on bivariate simplex splines is built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least-squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; finally, the object image is obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method achieves superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.
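
    Whatever the basis, the estimation step above reduces to linear least squares: stacking the measured slopes into a vector and the basis-function derivatives evaluated at the subaperture centres into a matrix, the coefficients follow from the normal equations. A sketch with a generic derivative matrix (the simplex-spline construction itself is not reproduced here):

        import numpy as np

        def fit_wavefront(A, slopes, reg=1e-6):
            """Least-squares wavefront coefficients from stacked slope data.

            A      : (n_meas, n_coeff) matrix of basis-function x/y derivatives
                     evaluated at the Shack-Hartmann subaperture centres
            slopes : (n_meas,) stacked x- and y-slopes, possibly from multiple
                     frames as in the sequential scheme above
            reg    : small Tikhonov term to keep the system well conditioned
            """
            AtA = A.T @ A + reg * np.eye(A.shape[1])
            return np.linalg.solve(AtA, A.T @ slopes)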

  7. SRAO: optical design and the dual-knife-edge WFS

    NASA Astrophysics Data System (ADS)

    Ziegler, Carl; Law, Nicholas M.; Tokovinin, Andrei

    2016-07-01

    The Southern Robotic Adaptive Optics (SRAO) instrument will bring the proven high-efficiency capabilities of Robo-AO to the Southern Hemisphere, providing the unique capability to image thousands of targets per year across the entire sky at high angular resolution. Deployed on the modern 4.1m SOAR telescope on Cerro Pachón, the NGS AO system will use an innovative dual-knife-edge wavefront sensor, similar to a pyramid sensor, to enable guiding on targets down to V=16 with diffraction-limited resolution in the NIR. The dual-knife-edge wavefront sensor can be up to two orders of magnitude less costly than custom glass pyramids, with similar wavefront error sensitivity and minimal chromatic aberrations. SRAO is capable of observing hundreds of targets a night through automation, allowing confirmation and characterization of the large number of exoplanets produced by current and future missions.

  8. VisNAV 100: a robust, compact imaging sensor for enabling autonomous air-to-air refueling of aircraft and unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Katake, Anup; Choi, Heeyoul

    2010-01-01

    To enable autonomous air-to-air refueling of manned and unmanned vehicles, a robust, high-speed relative navigation sensor capable of providing high-accuracy 3DOF information in diverse operating conditions is required. To help address this problem, StarVision Technologies Inc. has been developing a compact, high-update-rate (100 Hz), wide field-of-view (90 deg) direction and range estimation imaging sensor called VisNAV 100. The sensor is fully autonomous, requiring no communication from the tanker aircraft, and contains high-reliability embedded avionics to provide range, azimuth, elevation (a 3-degrees-of-freedom solution, 3DOF) and closing speed relative to the tanker aircraft. The sensor provides the 3DOF solution with an error of 1% in range and 0.1 deg in azimuth/elevation up to a range of 30 m, and a 1 deg error in direction for ranges up to 200 m, at 100 Hz update rates. In this paper we discuss the algorithms that were developed in-house to enable robust beacon pattern detection, outlier rejection and 3DOF estimation in adverse conditions, and present the results of several outdoor tests. Results from the long-range single-beacon detection tests are also discussed.
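
    Although the VisNAV algorithms are not spelled out in the abstract, a sensor of this class reduces each detected beacon to a line-of-sight direction through a pinhole camera model and recovers range from the known beacon-pattern geometry. The focal length, beacon spacing and similar-triangles range estimate below are illustrative assumptions:

        import numpy as np

        def beacon_3dof(px, py, f_pix, beacon_sep_m, sep_pix):
            """Direction and range to a beacon pattern from image measurements.

            px, py       : beacon centroid relative to the principal point [pixels]
            f_pix        : focal length [pixels]
            beacon_sep_m : true distance between two beacons on the tanker [m]
            sep_pix      : measured pixel distance between those two beacons
            """
            azimuth = np.degrees(np.arctan2(px, f_pix))
            elevation = np.degrees(np.arctan2(py, np.hypot(px, f_pix)))
            range_m = f_pix * beacon_sep_m / sep_pix   # pinhole similar triangles
            return azimuth, elevation, range_m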

  9. An update on TED gunshot detection system development status

    NASA Astrophysics Data System (ADS)

    Tidhar, Gil A.; Aphek, Ori; Gurovich, Martin

    2009-05-01

    In recent years the TED system has been under development, starting with new SWIR sensor technology, optics and real-time sensor processing, and continuing with the complete system architecture of a soldier-mounted optical gunshot detection system with high precision and imaging means. For the first time, the modules and the concept of operation of the system will be explained, with emphasis on new sensor-to-shooter capabilities. Actual field trial results will be shown.

  10. Sub-arcminute pointing from a balloonborne platform

    NASA Astrophysics Data System (ADS)

    Craig, William W.; McLean, Ryan; Hailey, Charles J.

    1998-07-01

    We describe the design and performance of the pointing and aspect reconstruction system on the Gamma-Ray Arcminute Telescope Imaging System. The payload consists of a 4 m long gamma-ray telescope capable of producing images of the gamma-ray sky at an angular resolution of 2 arcminutes. The telescope is operated at an altitude of 40 km in azimuth/elevation pointing mode. Using a variety of sensors, including attitude GPS, fiber-optic gyroscopes, and star and sun trackers, the system is capable of pointing the gamma-ray payload to within an arcminute from the balloon-borne platform. The system is designed for long-term autonomous operation and performed to specification throughout a recent 36 hour flight from Alice Springs, Australia. A star tracker and pattern recognition software developed for the mission permit aspect reconstruction to better than 10 arcseconds. The narrow-field star tracker system is capable of acquiring and identifying a star field without external input. We present flight data from all sensors and the resultant gamma-ray source localizations.

  11. Thinking Outside of the Blue Marble: Novel Ocean Applications Using the VIIRS Sensor

    NASA Technical Reports Server (NTRS)

    Vandermeulen, Ryan A.; Arnone, Robert

    2016-01-01

    While planning for future space-borne sensors will increase the quality, quantity, and duration of ocean observations in the years to come, efforts to extend the limits of sensors currently in orbit can help shed light on future scientific gains as well as associated uncertainties. Here, we present several applications that are unique to the polar-orbiting Visible Infrared Imaging Radiometer Suite (VIIRS), each of which challenges the threshold capabilities of the sensor and provides lessons for future missions. For instance, while moderate-resolution polar orbiters typically have a one-day revisit time, we are able to obtain multiple looks at the same area by focusing on the extreme zenith angles where orbital views overlap, and pair these observations with those from other sensors to create pseudo-geostationary data sets. Or, by exploiting high-spatial-resolution (imaging) channels and analyzing patterns of synoptic covariance across the visible spectrum, we can obtain higher-spatial-resolution bio-optical products. Alternatively, non-traditional products can illuminate important biological interactions in the ocean, such as the use of the Day-Night Band to provide some quantification of the phototactic behavior of marine life along light-polluted beaches, as well as to track the location of marine fishing vessel fleets along ocean fronts. In this talk, we explore ways to take full advantage of the capabilities of existing sensors in order to maximize insights for future missions.

  12. Application and calibration of the subsurface mapping capability of SIR-B in desert regions

    NASA Technical Reports Server (NTRS)

    Schaber, G. G.; Mccauley, J. F.; Breed, C. S.; Grolier, M. J.; Issawi, B.; Haynes, C. V.; Mchugh, W.; Walker, A. S.; Blom, R.

    1984-01-01

    The penetration capability of the shuttle imaging radar (SIR-B) sensor in desert regions is investigated. Refined models to explain this penetration capability in terms of radar physics and regional geologic conditions are devised. The sand-buried radar-rivers discovered in the Western Desert in Egypt and Sudan are defined. Results and procedures developed during previous SIR-A investigation of the same area are extrapolated.

  13. Wavefront detection method of a single-sensor based adaptive optics system.

    PubMed

    Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li

    2015-08-10

    In adaptive optics systems (AOS) for optical telescopes, the commonly reported wavefront sensing strategy consists of two parts: a dedicated sensor for tip-tilt (TT) detection and another wavefront sensor for the detection of other distortions. Thus, a part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and other distortions. Experiments were performed to test the presented method and validate the wavefront detection and correction ability of the single-sensor-based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS.
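
    The essence of the single-sensor approach is that global tip-tilt is simply the common motion of all Shack-Hartmann spots, while the spot displacements relative to that common motion carry the higher-order information. A minimal sketch, assuming spot centroids have already been extracted:

        import numpy as np

        def split_tt_and_higher_order(centroids, references):
            """Separate tip-tilt from higher-order slopes in SH measurements.

            centroids, references : (n_subap, 2) measured and reference spot
                                    positions [pixels]
            Returns the global tip-tilt vector and the residual per-subaperture
            slopes that drive higher-order wavefront reconstruction.
            """
            shifts = centroids - references
            tip_tilt = shifts.mean(axis=0)   # common motion = global tip-tilt
            residual = shifts - tip_tilt     # remainder = higher-order modes
            return tip_tilt, residual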

  14. A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP then plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed will include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.

  15. Imaging Science Panel. Multispectral Imaging Science Working Group joint meeting with Information Science Panel: Introduction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state-of-the-art of multispectral sensing is reviewed and recommendations for future research and development are proposed. Specifically, two generic sensor concepts were discussed. One is the multispectral pushbroom sensor utilizing linear-array technology, which operates in six spectral bands, including two in the SWIR region, and incorporates capabilities for stereo and crosstrack pointing. The second concept is the imaging spectrometer (IS), which incorporates a dispersive element and area arrays to provide both spectral and spatial information simultaneously. Other key technology areas included very-large-scale integration and the computer-aided design of these devices.

  16. SCExAO: First Results and On-Sky Performance

    NASA Astrophysics Data System (ADS)

    Currie, Thayne; Guyon, Olivier; Martinache, Frantz; Clergeon, Christophe; McElwain, Michael; Thalmann, Christian; Jovanovic, Nemanja; Singh, Garima; Kudo, Tomoyuki

    2014-01-01

    We present new on-sky results for the Subaru Coronagraphic Extreme Adaptive Optics imager (SCExAO) verifying and quantifying the contrast gain enabled by key components: the closed-loop coronagraphic low-order wavefront sensor (CLOWFS) and focal plane wavefront control ("speckle nulling"). SCExAO will soon be coupled with a high-order Pyramid wavefront sensor which will yield > 90% Strehl ratio and enable 10^6-10^7 contrast at small angular separations, allowing us to image gas giant planets at solar system scales. Upcoming instruments like VAMPIRES, FIRST, and CHARIS will expand SCExAO's science capabilities.

  17. A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor

    PubMed Central

    Okino, Toru; Yamahira, Seiji; Yamada, Shota; Hirose, Yutaka; Odagawa, Akihiro; Kato, Yoshihisa; Tanaka, Tsuyoshi

    2018-01-01

    We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As the UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of corona discharge is demonstrated. UV images overlaid on background scenes are produced simply by on-board background subtraction. The system is capable of imaging UV signals four orders of magnitude weaker than the VIS background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity in strong visible radiation environments, such as power supply substations. PMID:29361742
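
    The on-board background subtraction can be illustrated with a simple two-frame scheme: subtract a VIS-only reference from a UV+VIS frame and mark the residual on the background scene. The threshold and overlay colour below are illustrative assumptions, not the camera's actual pipeline:

        import numpy as np

        def overlay_uv(uv_vis_frame, vis_frame, threshold=10):
            """Overlay detected UV signal on the visible background scene.

            uv_vis_frame : frame containing both UV and VIS response (uint8)
            vis_frame    : VIS-only reference frame (uint8)
            """
            uv = uv_vis_frame.astype(np.int16) - vis_frame.astype(np.int16)
            uv = np.clip(uv, 0, 255).astype(np.uint8)   # residual = UV signal
            rgb = np.dstack([vis_frame] * 3)            # grey background scene
            rgb[uv > threshold] = (255, 0, 0)           # mark UV pixels in red
            return rgb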

  18. Lidar Sensors for Autonomous Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.; Reisse, Robert A.; Pierrottet, Diego F.

    2013-01-01

    Lidar technology will play an important role in enabling highly ambitious missions being envisioned for exploration of solar system bodies. Currently, NASA is developing a set of advanced lidar sensors, under the Autonomous Landing and Hazard Avoidance (ALHAT) project, aimed at safe landing of robotic and manned vehicles at designated sites with a high degree of precision. These lidar sensors are an Imaging Flash Lidar capable of generating high resolution three-dimensional elevation maps of the terrain, a Doppler Lidar for providing precision vehicle velocity and altitude, and a Laser Altimeter for measuring distance to the ground and ground contours from high altitudes. The capabilities of these lidar sensors have been demonstrated through four helicopter and one fixed-wing aircraft flight test campaigns conducted from 2008 through 2012 during different phases of their development. Recently, prototype versions of these landing lidars have been completed for integration into a rocket-powered terrestrial free-flyer vehicle (Morpheus) being built by NASA Johnson Space Center. Operating in closed-loop with other ALHAT avionics, the viability of the lidars for future landing missions will be demonstrated. This paper describes the ALHAT lidar sensors and assesses their capabilities and impacts on future landing missions.

  19. Medipix2 based CdTe microprobe for dental imaging

    NASA Astrophysics Data System (ADS)

    Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.

    2011-12-01

    Medical imaging devices and techniques are required to provide high-resolution, low-dose images of samples or patients. Hybrid semiconductor single-photon-counting devices, together with suitable sensor materials and advanced image reconstruction techniques, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single-photon-counting capability of the Medipix2 device allows a virtually unlimited dynamic range and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% is sensitive area. A detector of this compact size can be used directly inside the patient's mouth.

  20. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  1. Automatic panoramic thermal integrated sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.

    2005-05-01

    Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defensive and offensive operations, as well as to serve as a sensor node in tactical Intelligence, Surveillance, Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640×480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.

  2. Enhanced tactical radar correlator (ETRAC): true interoperability of the 1990s

    NASA Astrophysics Data System (ADS)

    Guillen, Frank J.

    1994-10-01

    The enhanced tactical radar correlator (ETRAC) system is under development at Westinghouse Electric Corporation for the Army Space Program Office (ASPO). ETRAC is a real-time synthetic aperture radar (SAR) processing system that provides tactical IMINT to the corps commander. It features an open architecture composed of ruggedized commercial-off-the-shelf (COTS), UNIX-based workstations and processors. The architecture features the DoD common SAR processor (CSP), a multisensor computing platform to accommodate a variety of current and future imaging needs. ETRAC's principal functions include: (1) Mission planning and control -- ETRAC provides mission planning and control for the U-2R and ASARS-2 sensor, including capability for auto replanning, retasking, and immediate spot. (2) Image formation -- the image formation processor (IFP) provides the CPU-intensive processing capability to produce real-time imagery for all ASARS imaging modes of operation. (3) Image exploitation -- two exploitation workstations are provided for first-phase image exploitation, manipulation, and annotation. Products include INTEL reports, annotated NITF SID imagery, high-resolution hard-copy prints and targeting data. ETRAC is transportable via two C-130 aircraft, with autonomous drive-on/off capability for high mobility. Other autonomous capabilities include rapid setup/tear-down, extended stand-alone support, internal environmental control units (ECUs) and power generation. ETRAC's mission is to provide the Army field commander with accurate, reliable, and timely imagery intelligence derived from collections made by the ASARS-2 sensor, located on board the U-2R aircraft. To accomplish this mission, ETRAC receives video phase history (VPH) directly from the U-2R aircraft and converts it in real time into soft-copy imagery for immediate exploitation and dissemination to tactical users.

  3. A Decade of Satellite Ocean Color Observations

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.

    2009-01-01

    After the successful Coastal Zone Color Scanner (CZCS, 1978-1986) demonstrated that quantitative estimates of geophysical variables such as chlorophyll a and the diffuse attenuation coefficient could be derived from top-of-the-atmosphere radiances, a number of international missions with ocean color capabilities were launched beginning in the late 1990s. Most notable were those with global data acquisition capabilities, i.e., the Ocean Color and Temperature Sensor (OCTS, 1996-1997), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, United States, 1997-present), two Moderate Resolution Imaging Spectroradiometers (MODIS, United States, Terra/2000-present and Aqua/2002-present), the Global Imager (GLI, Japan, 2002-2003), and the Medium Resolution Imaging Spectrometer (MERIS, European Space Agency, 2002-present). These missions have provided data of exceptional quality and continuity, allowing for scientific inquiries into a wide variety of marine research topics not possible with the CZCS. This review focuses on the scientific advances made over the past decade using these data sets.

  4. The silicon vidicon: Integration, storage and slow scan capability - Experimental observation of a secondary mode of operation.

    NASA Technical Reports Server (NTRS)

    Ando, K. J.

    1971-01-01

    Description of the performance of the silicon diode array vidicon - an imaging sensor which possesses wide spectral response, high quantum efficiency, and linear response. These characteristics, in addition to its inherent ruggedness, simplicity, long-term stability and operating life, make this device potentially of great usefulness for ground-based and spaceborne planetary and stellar imaging applications. However, integration and charge storage for periods greater than approximately five seconds are not possible at room temperature because of diode saturation from dark current buildup. Since dark current can be reduced by cooling, measurements were made in the range from -65 to 25 C. Results are presented on the extension of integration, storage, and slow-scan capabilities achievable by cooling. Integration times in excess of 20 minutes were achieved at the lowest temperatures. The measured results are compared with results obtained with other types of sensors, and the advantages of the silicon diode array vidicon for imaging applications are discussed.
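
    The gain from cooling is consistent with the usual rule of thumb that silicon dark current roughly doubles for every 7-8 °C of temperature rise (the exact figure is device dependent):

        I_d(T) \approx I_d(T_0)\, 2^{(T - T_0)/\Delta T}, \qquad \Delta T \approx 7\text{--}8\,^{\circ}\mathrm{C}

    Under this rule, cooling from 25 C to -65 C reduces dark current by a factor of roughly 2^(90/7.5) ≈ 4000, in line with integration times stretching from a few seconds at room temperature to beyond 20 minutes at the lowest temperatures reported above.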

  5. ManPortable and UGV LIVAR: advances in sensor suite integration bring improvements to target observation and identification for the electronic battlefield

    NASA Astrophysics Data System (ADS)

    Lynam, Jeff R.

    2001-09-01

    A more highly integrated electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages in lightweight, fieldable target location, ranging and imaging. The unit incorporates a wide field-of-view (5° × 3°) uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD (EBCCD) digital camera to image the target objective in a narrower 0.3° field of view. Target range is acquired using the integrated LRF, and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank and corrected magnetic azimuth. Range-gate timing and coordinated receiver-optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes the flash laser illumination technology, the EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.
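
    Range-gated imaging of this kind keys the camera gate to the round-trip time of the laser pulse, so that only light returning from the selected depth slice is integrated. A worked example with illustrative numbers:

        C = 299_792_458.0                      # speed of light [m/s]

        def gate_timing(range_m, gate_depth_m):
            """Gate delay and width for a range-gated imager.

            range_m      : distance to the target [m]
            gate_depth_m : depth of the slice to be imaged [m]
            """
            delay_s = 2.0 * range_m / C        # round-trip time to gate opening
            width_s = 2.0 * gate_depth_m / C   # gate stays open across the slice
            return delay_s, width_s

        # e.g. a target at 2 km with a 30 m slice:
        # delay ~ 13.3 microseconds, width ~ 0.2 microseconds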

  6. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower-frame-rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as the basis for temporal upsampling of the slower-frame-rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new flow vector estimates with those from the previous coarser-resolution level. Overall performance of this processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
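
    The temporal-upsampling step can be sketched with standard tools: estimate dense optical flow between two high-rate reference frames, then warp the slow sensor's co-registered frame part-way along that flow. The sketch below uses OpenCV's Farnebäck flow, which internally builds the multi-scale pyramid mentioned above; all parameter values are illustrative assumptions.

        import cv2
        import numpy as np

        def interpolate_frame(slow_frame, ref_prev, ref_next, alpha):
            """Warp a slow-sensor frame to an intermediate time.

            ref_prev, ref_next : co-registered high-rate greyscale frames
                                 bracketing the desired time
            alpha              : fractional time in [0, 1] between the two refs
            """
            flow = cv2.calcOpticalFlowFarneback(
                ref_prev, ref_next, None,
                pyr_scale=0.5, levels=4, winsize=21,   # multi-scale pyramid
                iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
            h, w = slow_frame.shape[:2]
            gx, gy = np.meshgrid(np.arange(w), np.arange(h))
            # backward-mapping approximation: sample the source frame a
            # fraction alpha back along the estimated motion field
            map_x = (gx - alpha * flow[..., 0]).astype(np.float32)
            map_y = (gy - alpha * flow[..., 1]).astype(np.float32)
            return cv2.remap(slow_frame, map_x, map_y, cv2.INTER_LINEAR)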

  7. Multi-spectral imaging with infrared sensitive organic light emitting diode

    PubMed Central

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-01-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589

  8. Multi-spectral imaging with infrared sensitive organic light emitting diode

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-08-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.

  9. In Vivo Deep Tissue Fluorescence and Magnetic Imaging Employing Hybrid Nanostructures.

    PubMed

    Ortgies, Dirk H; de la Cueva, Leonor; Del Rosal, Blanca; Sanz-Rodríguez, Francisco; Fernández, Nuria; Iglesias-de la Cruz, M Carmen; Salas, Gorka; Cabrera, David; Teran, Francisco J; Jaque, Daniel; Martín Rodríguez, Emma

    2016-01-20

    Breakthroughs in nanotechnology have made it possible to integrate different nanoparticles in one single hybrid nanostructure (HNS), constituting multifunctional nanosized sensors, carriers, and probes with great potential in the life sciences. In addition, such nanostructures could also offer therapeutic capabilities to achieve a wider variety of multifunctionalities. In this work, the encapsulation of both magnetic and infrared emitting nanoparticles into a polymeric matrix leads to a magnetic-fluorescent HNS with multimodal magnetic-fluorescent imaging abilities. The magnetic-fluorescent HNS are capable of simultaneous magnetic resonance imaging and deep tissue infrared fluorescence imaging, overcoming the tissue penetration limits of classical visible-light based optical imaging as reported here in living mice. Additionally, their applicability for magnetic heating in potential hyperthermia treatments is assessed.

  10. Imaging Flash Lidar for Autonomous Safe Landing and Spacecraft Proximity Operation

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Roback, Vincent E.; Brewster, Paul F.; Hines, Glenn D.; Bulyshev, Alexander E.

    2016-01-01

    3-D imaging flash lidar is recognized as a primary candidate sensor for safe precision landing on solar system bodies (Moon, Mars, Jupiter and Saturn moons, etc.), and for the autonomous rendezvous, proximity operations and docking/capture necessary for asteroid sample return and redirect missions, spacecraft docking, satellite servicing, and space debris removal. During the final stages of landing, from about 1 km to 500 m above the ground, the flash lidar can generate 3-dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes. The onboard flight computer can then use the 3-D map of the terrain to guide the vehicle to a safe location. As an automated rendezvous and docking sensor, the flash lidar can provide relative range, velocity, and bearing from an approaching spacecraft to another spacecraft or a space station from several kilometers distance. NASA Langley Research Center has developed and demonstrated a flash lidar sensor system capable of generating 16k-pixel range images with 7 cm precision, at a 20 Hz frame rate, from a maximum slant range of 1800 m from the target area. This paper describes the lidar instrument design and capabilities as demonstrated by closed-loop flight tests onboard a rocket-propelled free-flyer vehicle (Morpheus). A plan for continued advancement of the flash lidar technology is then explained. This proposed plan is aimed at the development of a common sensor that, with a modest design adjustment, can meet the needs of both landing and proximity-operation and docking applications.

  11. Submicrometer fiber-optic chemical sensors: Measuring pH inside single cells

    NASA Astrophysics Data System (ADS)

    Kopelman, R.

    Starting from scratch, we went in two and a half years to 0.04 micron optical microscopy resolution. We have demonstrated the application of near-field scanning optical microscopy to DNA samples and opened the new fields of near-field scanning spectroscopy and submicron opto-chemical sensors. All of these developments have been important steps towards in-situ DNA imaging and characterization on the nanoscale. Our first goal was to make NSOM (near-field scanning optical microscopy) a working enterprise, capable of 'zooming-in' towards a sample and imaging with a resolution exceeding that of traditional microscopy by a factor of ten. This has been achieved. Not only do we have a resolution of about 40 nm but we can image a 1 x 1 micron object in less than 10 seconds. Furthermore, the NSOM is a practical instrument. The tips survive for days or weeks of scanning and new methods of force feedback will soon protect the most fragile samples. Reproducible images of metal gratings, gold particles, dye balls (for calibration) and of several DNA samples have been made, proving the practicality of our approach. We also give highly resolved Force/NSOM images of human blood cells. Our second goal has been to form molecular optics (e.g., exciton donor) tips with a resolution of 2-10 nm for molecular excitation microscopy (MEM). We have produced such tips, and scanned with them, but only with a resolution comparable to that of our standard NSOM tips. However, we have demonstrated their potential for high resolution imaging capabilities: (1) An energy transfer (tip to sample) based feedback capability. (2) A Kasha (external heavy atom) effect based feedback. In addition, a novel and practical opto-chemical sensor that is a billion times smaller than the best ones available has been developed as well. Finally, we have also performed spatially resolved fluorescence spectroscopy.

  12. Electrical Capacitance Volume Tomography: Design and Applications

    PubMed Central

    Wang, Fei; Marashdeh, Qussai; Fan, Liang-Shih; Warsito, Warsito

    2010-01-01

    This article reports recent advances and progress in the field of electrical capacitance volume tomography (ECVT). ECVT, developed from the two-dimensional electrical capacitance tomography (ECT), is a promising non-intrusive imaging technology that can provide real-time three-dimensional images of the sensing domain. Images are reconstructed from capacitance measurements acquired by electrodes placed on the outside boundary of the testing vessel. In this article, a review of progress on capacitance sensor design and applications to multi-phase flows is presented. The sensor shape, electrode configuration, and the number of electrodes that comprise three key elements of three-dimensional capacitance sensors are illustrated. The article also highlights applications of ECVT sensors on vessels of various sizes from 1 to 60 inches with complex geometries. Case studies are used to show the capability and validity of ECVT. The studies provide qualitative and quantitative real-time three-dimensional information of the measuring domain under study. Advantages of ECVT render it a favorable tool to be utilized for industrial applications and fundamental multi-phase flow research. PMID:22294905
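
    The abstract does not state which reconstruction algorithm is used, but ECT-style imaging is commonly posed as a linearized inverse problem: normalized capacitances c relate to a permittivity image g through a sensitivity matrix S, and a regularized iteration such as Landweber's recovers g. A minimal sketch under that common formulation:

        import numpy as np

        def landweber(S, c, n_iter=200, step=None):
            """Iterative ECVT image reconstruction from capacitance data.

            S : (n_meas, n_voxels) normalized sensitivity matrix
            c : (n_meas,) normalized capacitance measurements
            """
            if step is None:
                step = 1.0 / np.linalg.norm(S, 2) ** 2   # safe step size
            g = S.T @ c                                  # linear back-projection start
            for _ in range(n_iter):
                g = g + step * (S.T @ (c - S @ g))       # gradient step on ||Sg - c||^2
                g = np.clip(g, 0.0, 1.0)                 # enforce physical bounds
            return g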

  13. Further applications for mosaic pixel FPA technology

    NASA Astrophysics Data System (ADS)

    Liddiard, Kevin C.

    2011-06-01

    In previous papers to this SPIE forum the development of novel technology for next generation PIR security sensors has been described. This technology combines the mosaic pixel FPA concept with low cost optics and purpose-designed readout electronics to provide a higher performance and affordable alternative to current PIR sensor technology, including an imaging capability. Progressive development has resulted in increased performance and transition from conventional microbolometer fabrication to manufacture on 8 or 12 inch CMOS/MEMS fabrication lines. A number of spin-off applications have been identified. In this paper two specific applications are highlighted: high performance imaging IRFPA design and forest fire detection. The former involves optional design for small pixel high performance imaging. The latter involves cheap expendable sensors which can detect approaching fire fronts and send alarms with positional data via mobile phone or satellite link. We also introduce to this SPIE forum the application of microbolometer IR sensor technology to IoT, the Internet of Things.

  14. Integration of OLEDs in biomedical sensor systems: design and feasibility analysis

    NASA Astrophysics Data System (ADS)

    Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.

    2010-04-01

    Organic (electronic) Light Emitting Diodes (OLEDs) have been shown to have applications in the field of lighting and flexible displays. These devices can also be incorporated in sensors as light sources for imaging/fluorescence sensing in miniaturized systems for biomedical applications, and as low-cost displays for sensor output. The current device capability aligns well with the aforementioned applications as low-power diffuse lighting and momentary/push-button dynamic display. A top-emission OLED design has been proposed that can be incorporated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth. A similar study is also carried out for a sensor output display system that functions as a pseudo-active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is supplemented with a discussion of the implementation of ink-jet printing and stamping techniques for the possibility of roll-to-roll manufacturing.

  15. Phase-sensitive X-ray imager

    DOEpatents

    Baker, Kevin Louis

    2013-01-08

    X-ray phase sensitive wave-front sensor techniques are detailed that are capable of measuring the entire two-dimensional x-ray electric field, both the amplitude and phase, with a single measurement. These Hartmann sensing and 2-D Shear interferometry wave-front sensors do not require a temporally coherent source and are therefore compatible with x-ray tubes and also with laser-produced or x-pinch x-ray sources.

  16. Imaging Beyond What Man Can See

    NASA Technical Reports Server (NTRS)

    May, George; Mitchell, Brian

    2004-01-01

    Three lightweight, portable hyperspectral sensor systems have been built that capture energy from 200 to 1700 nanometers (ultraviolet to shortwave infrared). The sensors incorporate a line-scanning technique that requires no relative movement between the target and the sensor. This unique capability, combined with portability, opens up new uses of hyperspectral imaging for laboratory and field environments. Each system has a GUI-based software package that allows the user to communicate with the imaging device to set spatial resolution, spectral bands and other parameters. NASA's Space Partnership Development has sponsored these innovative developments and their application to human problems on Earth and in space. Hyperspectral datasets have been captured and analyzed in numerous areas including precision agriculture, food safety, biomedical imaging, and forensics. Discussion of research results will include real-time detection of food contaminants, mold and toxin research on corn, identifying counterfeit documents, non-invasive wound monitoring and aircraft applications. Future research will include development of a thermal infrared hyperspectral sensor that will support natural resource applications on Earth and thermal analyses during long-duration space flight. This paper incorporates a variety of disciplines and imaging technologies that have been linked together to allow the expansion of remote sensing across both traditional and non-traditional boundaries.

  17. Evaluation of Alternate Concepts for Synthetic Vision Flight Displays With Weather-Penetrating Sensor Image Inserts During Simulated Landing Approaches

    NASA Technical Reports Server (NTRS)

    Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.

    2003-01-01

    A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.

  18. Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications

    NASA Astrophysics Data System (ADS)

    Carpentieri, Bruno; Pizzolante, Raffaele

    2017-12-01

    Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrains. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology because of the capability of this kind of image to correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data produced by the hyperspectral sensors, the fact that these images are acquired at high cost by air-borne sensors, and the fact that they are generally transmitted to a base station make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-the-art predictive-based lossless compression algorithm.
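
    As a rough illustration of such a transmission pipeline, the sketch below compresses a raw cube and attaches an integrity tag. Here zlib stands in for the paper's predictive lossless coder and an HMAC stands in for the reversible-watermark-plus-signature scheme; both are loud substitutions rather than the authors' method.

        import hashlib
        import hmac
        import zlib

        def package_for_downlink(cube_bytes, key):
            """Compress a hyperspectral cube and attach an integrity tag.

            cube_bytes : raw image cube as bytes
            key        : shared secret (bytes) for the HMAC tag
            """
            compressed = zlib.compress(cube_bytes, level=9)         # lossless stand-in
            tag = hmac.new(key, compressed, hashlib.sha256).digest()
            return tag + compressed   # receiver recomputes the HMAC to verify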

  19. OSUS sensor integration in Army experiments

    NASA Astrophysics Data System (ADS)

    Ganger, Robert; Nowicki, Mark; Kovach, Jesse; Gregory, Timothy; Liss, Brian

    2016-05-01

    Live sensor data was obtained from an Open Standard for Unattended Sensors (OSUS, formerly Terra Harvest)-based system provided by the Army Research Lab (ARL) and fed into the Communications-Electronics Research, Development and Engineering Center (CERDEC)-sponsored Actionable Intelligence Technology Enabled Capabilities Demonstration (AI-TECD) Micro Cloud during the E15 demonstration event that took place at Fort Dix, New Jersey in July 2015. This data was an enabler for other technologies, such as Sensor Assignment to Mission (SAM), the Sensor Data Server (SDS), and the AI-TECD Sensor Dashboard, providing rich sensor data (including images) for use by the Company Intel Support Team (CoIST) analyst. This paper describes how the OSUS data was integrated and used in the E15 event to support CoIST operations.

  20. Single-shot digital holography by use of the fractional Talbot effect.

    PubMed

    Martínez-León, Lluís; Araiza-E, María; Javidi, Bahram; Andrés, Pedro; Climent, Vicent; Lancis, Jesús; Tajahuerce, Enrique

    2009-07-20

    We present a method for recording in-line single-shot digital holograms based on the fractional Talbot effect. In our system, an image sensor records the interference between the light field scattered by the object and a properly codified parallel reference beam. A simple binary two-dimensional periodic grating is used to codify the reference beam generating a periodic three-step phase distribution over the sensor plane by fractional Talbot effect. This provides a method to perform single-shot phase-shifting interferometry at frame rates only limited by the sensor capabilities. Our technique is well adapted for dynamic wavefront sensing applications. Images of the object are digitally reconstructed from the digital hologram. Both computer simulations and experimental results are presented.
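
    With three reference phase steps per period, the object phase can be recovered pixel-by-pixel in closed form. Assuming steps of 0, π/2 and π (a common three-step choice; the exact Talbot-generated steps may differ from this), the interferograms and the retrieved phase are:

        I_k = |O|^2 + |R|^2 + 2|O||R|\cos(\varphi + \delta_k), \qquad \delta_k \in \{0, \tfrac{\pi}{2}, \pi\}

        \varphi = \arctan\!\left(\frac{I_1 + I_3 - 2I_2}{I_1 - I_3}\right)

    The complex object field at the sensor then follows from the recovered amplitude and phase, and the image is obtained by numerical back-propagation, as in conventional phase-shifting digital holography.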

  1. SCExAO: First Results and On-Sky Performance

    NASA Technical Reports Server (NTRS)

    Currie, Thayne; Guyon, Olivier; Martinache, Frantz; Clergeon, Christophe; McElwain, Michael; Thalmann, Christian; Jovanovic, Nemanja; Singh, Garima; Kudo, Tomoyuki

    2013-01-01

    We present new on-sky results for the Subaru Coronagraphic Extreme Adaptive Optics imager (SCExAO) verifying and quantifying the contrast gain enabled by key components: the closed-loop coronagraphic low-order wavefront sensor (CLOWFS) and focal plane wavefront control ("speckle nulling"). SCExAO will soon be coupled with a high-order, Pyramid wavefront sensor which will yield greater than 90% Strehl ratio and enable 10(exp 6) -10(exp 7) contrast at small angular separations allowing us to image gas giant planets at solar system scales. Upcoming instruments like VAMPIRES, FIRST, and CHARIS will expand SCExAO's science capabilities.

  2. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated image from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  3. The characterization of a DIRSIG simulation environment to support the inter-calibration of spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-09-01

    Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated image from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  4. Three-dimensional estimates of tree canopies: Scaling from high-resolution UAV data to satellite observations

    NASA Astrophysics Data System (ADS)

    Sankey, T.; Donald, J.; McVay, J.

    2015-12-01

    High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has the unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth's surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in 3 dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatments. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.

  5. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
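
    A minimal sketch of the kind of restoration involved, using a frequency-domain Wiener filter for an assumed sensor PSF; the adaptive, combined restoration-resampling kernels used at EDC are more elaborate, so this is an illustration rather than the EDC procedure.

        import numpy as np

        def wiener_restore(image, psf, nsr=0.01):
            """Deblur an image given the sensor point-spread function.

            psf : PSF sampled on the image grid, centred, summing to 1
            nsr : assumed noise-to-signal power ratio; larger values blur more
                  but temper the sharpness/aliasing tradeoff noted above
            """
            H = np.fft.fft2(np.fft.ifftshift(psf))       # PSF -> transfer function
            W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener inverse filter
            return np.real(np.fft.ifft2(np.fft.fft2(image) * W))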

  6. Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Cimino, J. B.; Elachi, C.

    1983-01-01

    Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981, demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.

  7. Theory on data processing and instrumentation. [remote sensing

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1978-01-01

    A selection of NASA Earth observations programs is reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.

  8. Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms

    NASA Astrophysics Data System (ADS)

    Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.

    1990-10-01

    A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem consists of either lateral accelerometers functioning in differential pairs or gas-bearing gyros as high-frequency sensors, combined with CCD-based star trackers as low-frequency sensors. To minimize error, the sensor signals are combined through a mixing filter designed to introduce no phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian-jitter levels. Currently available technology may be employed to implement IMC systems.
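
    The phase-distortion-free mixing described here is the classic complementary-filter idea: the low-pass applied to the star tracker and the high-pass applied to the inertial path sum exactly to one, so the blend adds no net phase error at the crossover. A minimal first-order sketch, with an assumed crossover frequency and uniformly sampled signals (names are illustrative):

```python
import numpy as np

def complementary_mix(gyro_rate, tracker_angle, dt, f_c=0.1):
    # Blend a high-bandwidth inertial sensor (gyro rate, rad/s) with a
    # low-bandwidth star tracker (absolute angle, rad). The low-pass on
    # the tracker and the high-pass on the integrated gyro sum exactly
    # to one, so the crossover introduces no net phase distortion.
    alpha = 1.0 / (1.0 + 2.0 * np.pi * f_c * dt)  # crossover at f_c [Hz]
    est = np.empty(len(tracker_angle))
    est[0] = tracker_angle[0]
    for k in range(1, len(est)):
        propagated = est[k - 1] + gyro_rate[k] * dt  # high-frequency path
        est[k] = alpha * propagated + (1.0 - alpha) * tracker_angle[k]
    return est
```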

  9. Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS): Imaging and Tracking Capability

    NASA Technical Reports Server (NTRS)

    Zhou, D. K.; Larar, A. M.; Liu, Xu; Reisse, R. A.; Smith, W. L.; Revercomb, H. E.; Bingham, G. E.; Zollinger, L. J.; Tansock, J. J.; Huppi, Ronald J.

    2007-01-01

    The geosynchronous-imaging Fourier transform spectrometer (GIFTS) engineering demonstration unit (EDU) is an imaging infrared spectrometer designed for atmospheric soundings. It measures the infrared spectrum in two spectral bands (14.6 to 8.8 microns, 6.0 to 4.4 microns) using two 128 × 128 detector arrays, with a spectral resolution of 0.57 cm⁻¹ and a scan duration of approximately 11 seconds. From a geosynchronous orbit, the instrument will have the capability of taking successive measurements of such data to scan desired regions of the globe, from which atmospheric status, cloud parameters, wind field profiles, and other derived products can be retrieved. The GIFTS EDU provides a flexible and accurate testbed for the new challenges of the emerging hyperspectral era. The EDU ground-based measurement experiment, held in Logan, Utah during September 2006, demonstrated its extensive capabilities and potential for geosynchronous and other applications (e.g., Earth observing environmental measurements). This paper addresses the experiment objectives and overall performance of the sensor system with a focus on the GIFTS EDU imaging capability and proof of the GIFTS measurement concept.

  10. SWIR hyperspectral imaging detector for surface residues

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew P.; Mangold, Paul; Gomer, Nathaniel; Klueva, Oksana; Treado, Patrick

    2013-05-01

    ChemImage has developed a SWIR Hyperspectral Imaging (HSI) sensor which uses hyperspectral imaging for wide area surveillance and standoff detection of surface residues. Existing detection technologies often require close proximity for sensing or detecting, endangering operators and costly equipment. Furthermore, most existing sensors do not support autonomous, real-time, mobile-platform-based detection of threats. The SWIR HSI sensor provides real-time standoff detection of surface residues, with wide area surveillance and HSI capability enabled by liquid crystal tunable filter technology. Easy-to-use detection software with a simple, intuitive user interface produces automated alarms and real-time display of threat and type. The system has the potential to be used for the detection of a variety of threats, including chemicals and illicit drug substances, and allows for easy updates in the field for detection of new hazardous materials. SWIR HSI technology could be used by law enforcement for standoff screening of suspicious locations and vehicles in pursuit of illegal labs, or by combat engineers to support route-clearance applications, ultimately saving the lives of soldiers and civilians. In this paper, results from a SWIR HSI sensor, including detection of various materials in bulk form as well as residue amounts on vehicles, people and other surfaces, will be discussed.
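
    The abstract does not disclose ChemImage's detection algorithm; a common baseline for residue detection in a hyperspectral cube of this kind is the spectral angle mapper (SAM), which flags pixels whose spectral shape matches a library spectrum of the target residue. A minimal sketch, with the array shapes as assumptions:

```python
import numpy as np

def spectral_angle_map(cube, target):
    # cube: (rows, cols, bands) SWIR reflectance cube from the tunable
    # filter; target: (bands,) library spectrum of the residue.
    # Small angles mark pixels whose spectra match the target's shape,
    # independent of overall brightness.
    num = np.tensordot(cube, target, axes=([2], [0]))
    den = np.linalg.norm(cube, axis=2) * np.linalg.norm(target) + 1e-12
    return np.arccos(np.clip(num / den, -1.0, 1.0))

# Usage: alarm wherever the angle falls below a chosen threshold.
# alarms = spectral_angle_map(cube, target) < 0.1   # radians
```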

  11. Perspective: Advanced particle imaging

    DOE PAGES

    Chandler, David W.; Houston, Paul L.; Parker, David H.

    2017-05-26

    Since the first ion imaging experiment demonstrated the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have become unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are allowing even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with co-variance and full coincidence capability on a single-shot basis with repetition rates in the kilohertz range. This progress should further enable "complete" experiments, the holy grail of molecular dynamics, where all quantum numbers of reactants and products of a bimolecular scattering event are fully determined and even under our control.

  12. A Low Power, Parallel Wearable Multi-Sensor System for Human Activity Evaluation.

    PubMed

    Li, Yuecheng; Jia, Wenyan; Yu, Tianjian; Luan, Bo; Mao, Zhi-Hong; Zhang, Hong; Sun, Mingui

    2015-04-01

    In this paper, the design of a low power heterogeneous wearable multi-sensor system, built with Zynq System-on-Chip (SoC), for human activity evaluation is presented. The powerful data processing capability and flexibility of this SoC represent significant improvements over our previous ARM based system designs. The new system captures and compresses multiple color images and sensor data simultaneously. Several strategies are adopted to minimize power consumption. Our wearable system provides a new tool for the evaluation of human activity, including diet, physical activity and lifestyle.

  13. Large UAS Operations in the NAS - The NASA 2007 Western States Fire Missions (WSFM)

    NASA Technical Reports Server (NTRS)

    Buoni, Gregory P.; Howell, Kathleen M.

    2008-01-01

    Objectives: Demonstrate the capabilities of UAS to overfly and collect sensor data on wildfires throughout the Western US. Demonstrate long-endurance mission capabilities (20+ hours). Image multiple fires (greater than 4 fires per mission) to showcase an extendable mission configuration and the ability to either linger over key fires or station over disparate regional fires. Deliver imagery in near real time (within 10 minutes of acquisition).

  14. A novel, optical, on-line bacteria sensor for monitoring drinking water quality

    PubMed Central

    Højris, Bo; Christensen, Sarah Christine Boesgaard; Albrechtsen, Hans-Jørgen; Smith, Christian; Dahlqvist, Mathis

    2016-01-01

    Today, microbial drinking water quality is monitored through either time-consuming laboratory methods or indirect on-line measurements. Results are thus either delayed or insufficient to support proactive action. A novel, optical, on-line bacteria sensor with a 10-minute time resolution has been developed. The sensor is based on 3D image recognition, and the obtained pictures are analyzed with algorithms considering 59 quantified image parameters. The sensor counts individual suspended particles and classifies them as either bacteria or abiotic particles. The technology is capable of distinguishing and quantifying bacteria and particles in pure and mixed suspensions, and the quantification correlates with total bacterial counts. Several field applications have demonstrated that the technology can monitor changes in the concentration of bacteria, and is thus well suited for rapid detection of critical conditions such as pollution events in drinking water. PMID:27040142

  15. A novel, optical, on-line bacteria sensor for monitoring drinking water quality.

    PubMed

    Højris, Bo; Christensen, Sarah Christine Boesgaard; Albrechtsen, Hans-Jørgen; Smith, Christian; Dahlqvist, Mathis

    2016-04-04

    Today, microbial drinking water quality is monitored through either time-consuming laboratory methods or indirect on-line measurements. Results are thus either delayed or insufficient to support proactive action. A novel, optical, on-line bacteria sensor with a 10-minute time resolution has been developed. The sensor is based on 3D image recognition, and the obtained pictures are analyzed with algorithms considering 59 quantified image parameters. The sensor counts individual suspended particles and classifies them as either bacteria or abiotic particles. The technology is capable of distinguishing and quantifying bacteria and particles in pure and mixed suspensions, and the quantification correlates with total bacterial counts. Several field applications have demonstrated that the technology can monitor changes in the concentration of bacteria, and is thus well suited for rapid detection of critical conditions such as pollution events in drinking water.

  16. Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode

    PubMed Central

    2015-01-01

    Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255

  17. Fluorescence enhancement of photoswitchable metal ion sensors

    NASA Astrophysics Data System (ADS)

    Sylvia, Georgina; Heng, Sabrina; Abell, Andrew D.

    2016-12-01

    Spiropyran-based fluorescence sensors are an ideal target for intracellular metal ion sensing, due to their biocompatibility, red emission frequency and photo-controlled reversible analyte binding for continuous signal monitoring. However, increasing the brightness of spiropyran-based sensors would extend their sensing capability for live-cell imaging. In this work we look to enhance the fluorescence of spiropyran-based sensors by incorporating an additional fluorophore into the sensor design. We report a spiropyran bearing a 5-membered monoazacrown with metal ion specificity, modified to incorporate the pyrene fluorophore. The effect of the N-indole pyrene modification on the behavior of the spiropyran molecule is explored, with absorbance and fluorescence emission characterization. This first-generation sensor provides insight into the fluorescence enhancement of spiropyran molecules.

  18. Swath width study. A simulation assessment of costs and benefits of a sensor system for agricultural application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Satellites provide an excellent platform from which to observe crops on the scale and frequency required to provide accurate crop production estimates on a worldwide basis. Multispectral imaging sensors aboard these platforms are capable of providing data from which to derive acreage and production estimates. The issue of sensor swath width was examined. The quantitative trade studies necessary to resolve the combined issue of sensor swath width, number of platforms, and their orbits were generated and are included. Problems with different swath width sensors were analyzed, and an assessment of system trade-offs of swath width versus number of satellites was made for achieving Global Crop Production Forecasting.

  19. Defining the uncertainty of electro-optical identification system performance estimates using a 3D optical environment derived from satellite

    NASA Astrophysics Data System (ADS)

    Ladner, S. D.; Arnone, R.; Casey, B.; Weidemann, A.; Gray, D.; Shulman, I.; Mahoney, K.; Giddings, T.; Shirron, J.

    2009-05-01

    Current United States Navy Mine-Counter-Measure (MCM) operations primarily use electro-optical identification (EOID) sensors to identify underwater targets after detection via acoustic sensors. These EOID sensors, which are based on laser underwater imaging, by design work best in "clear" waters and are limited in coastal waters, especially where strong optical layers are present. Optical properties, in particular scattering and absorption, play an important role in system performance. Surface optical properties alone from satellite are not adequate to determine how well a system will perform at depth due to the existence of optical layers. Characterizing the spatial and temporal variability of the 3-D optics of coastal waters, along with the strength and location of subsurface optical layers, maximizes the chances of identifying underwater targets by enabling optimum sensor deployment. Advanced methods have been developed to fuse the optical measurements from gliders, optical properties from a "surface" satellite snapshot, and 3-D ocean circulation models to extend the two-dimensional (2-D) surface satellite optical image into a three-dimensional (3-D) optical volume with subsurface optical layers. Modifications were made to an EOID performance model to take as input a 3-D optical volume covering an entire region of interest and to derive a system performance field. These enhancements extend the present capability, based on glider optics and EOID sensor models, to estimate the system's "image quality", which yields performance information only for a single glider profile location in a very large operational region. Finally, we define the uncertainty of the system performance by coupling the EOID performance model with the 3-D optical volume uncertainties. Knowing the ensemble spread of the EOID performance field provides a new and unique capability for tactical decision makers and Navy operations.

  20. Proximity Operations and Docking Sensor Development

    NASA Technical Reports Server (NTRS)

    Howard, Richard T.; Bryan, Thomas C.; Brewster, Linda L.; Lee, James E.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in spot mode out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next generation sensor was updated to allow it to support the CEV and COTS programs. The flight proven AR&D sensor has been redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation tolerant parts. In addition, new capabilities include greater sensor range, auto ranging capability, and real-time video output. This paper presents some sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements. In addition, the testing of the brassboard and prototype NGAVGS units will be discussed along with the use of the NGAVGS as a proximity operations and docking sensor.

  1. Biomedical Imaging

    DTIC Science & Technology

    1994-04-01

    United States Army Aeromedical Research Laboratory, Fort Rucker, Alabama 36362-0577. … times larger. Usually they are expensive, with commercially available units starting at around $100,000. Triangulation sensors are capable of range …

  2. Hyperspectral imaging applied to forensic medicine

    NASA Astrophysics Data System (ADS)

    Malkoff, Donald B.; Oliver, William R.

    2000-03-01

    Remote sensing techniques now include the use of hyperspectral infrared imaging sensors covering the mid- and long-wave regions of the spectrum. They have found use in military surveillance applications due to their capability for detection and classification of a large variety of both naturally occurring and man-made substances. The images they produce reveal the spatial distributions of spectral patterns that reflect differences in material temperature, texture, and composition. A program is proposed for demonstrating proof-of-concept in using a portable sensor of this type for crime scene investigations. It is anticipated to be useful in discovering and documenting the effects of trauma and/or naturally occurring illnesses, as well as detecting blood spills, tire patterns, toxic chemicals, skin injection sites, blunt traumas to the body, fluid accumulations, congenital biochemical defects, and a host of other conditions and diseases. This approach can significantly enhance capabilities for determining the circumstances of death. Potential users include law enforcement organizations (police, FBI, CIA), medical examiners, hospitals/emergency rooms, and medical laboratories. Many of the image analysis algorithms already in place for hyperspectral remote sensing and crime scene investigations can be applied to the interpretation of data obtained in this program.

  3. Development of an MR-compatible hand exoskeleton that is capable of providing interactive robotic rehabilitation during fMRI imaging.

    PubMed

    Kim, Sangjoon J; Kim, Yeongjin; Lee, Hyosang; Ghasemlou, Pouya; Kim, Jung

    2018-02-01

    Following advances in robotic rehabilitation, there have been many efforts to investigate the recovery process and effectiveness of robotic rehabilitation procedures through monitoring the activation status of the brain. This work presents the development of a two degree-of-freedom (DoF) magnetic resonance (MR)-compatible hand device that can perform robotic rehabilitation procedures inside an fMRI scanner. The device is capable of providing real-time monitoring of the joint angle, angular velocity, and joint force produced by the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints of four fingers. For force measurement, a custom reflective optical force sensor was developed and characterized in terms of accuracy error, hysteresis, and repeatability in the MR environment. The proposed device consists of two non-magnetic ultrasonic motors to provide assistive and resistive forces to the MCP and PIP joints. With actuation and sensing capabilities, both non-voluntary-passive movements and active-voluntary movements can be implemented. The MR compatibility of the device was verified via the analysis of the signal-to-noise ratio (SNR) of MR images of phantoms. SNR drops of 0.25, 2.94, and 11.82% were observed when the device was present but not activated, when only the custom force sensor was activated, and when both the custom force sensor and actuators were activated, respectively.

  4. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U.S.S.R.

    USGS Publications Warehouse

    Sadowski, Franklin G.; Covington, Steven J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear powerplant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.

  5. Development of a prototype sensor system for ultra-high-speed LDA-PIV

    NASA Astrophysics Data System (ADS)

    Griffiths, Jennifer A.; Royle, Gary J.; Bohndiek, Sarah E.; Turchetta, Renato; Chen, Daoyi

    2008-04-01

    Laser Doppler Anemometry (LDA) and Particle Image Velocimetry (PIV) are commonly used in the analysis of particulates in fluid flows. Despite the successes of these techniques, current instrumentation has placed limitations on the size and shape of the particles undergoing measurement, thus restricting the available data for the many industrial processes now utilising nano/micro particles. Data for spherical and irregularly shaped particles down to the order of 0.1 µm are now urgently required. Therefore, an ultra-fast LDA-PIV system is being constructed for the acquisition of these data. A key component of this instrument is the PIV optical detection system. Both the size and speed of the particles under investigation place challenging constraints on the system specifications: magnification is required within the system in order to visualise particles of the size of interest, but this restricts the corresponding field of view in a linearly inverse manner. Thus, for several images of a single particle in a fast fluid flow to be obtained, the image capture rate and sensitivity of the system must be sufficiently high. In order to fulfil the instrumentation criteria, the optical detection system chosen is a high-speed, lensed, digital imaging system based on state-of-the-art CMOS technology: the 'Vanilla' sensor developed by the UK-based MI3 consortium. This novel Active Pixel Sensor is capable of high frame rates and sparse readout. When coupled with an image intensifier, it will have single photon detection capabilities. An FPGA-based DAQ will allow real-time operation with minimal data transfer.

  6. Polymer-carbon black composite sensors in an electronic nose for air-quality monitoring

    NASA Technical Reports Server (NTRS)

    Ryan, M. A.; Shevade, A. V.; Zhou, H.; Homer, M. L.

    2004-01-01

    An electronic nose that uses an array of 32 polymer-carbon black composite sensors has been developed, trained, and tested. By selecting a variety of chemical functionalities in the polymers used to make sensors, it is possible to construct an array capable of identifying and quantifying a broad range of target compounds, such as alcohols and aromatics, and distinguishing isomers and enantiomers (mirror-image isomers). A model of the interaction between target molecules and the polymer-carbon black composite sensors is under development to aid in selecting the array members and to enable identification of compounds with responses not stored in the analysis library.

  7. Photonic hydrogel sensors.

    PubMed

    Yetisen, Ali K; Butt, Haider; Volpatti, Lisa R; Pavlichenko, Ida; Humar, Matjaž; Kwok, Sheldon J J; Koo, Heebeom; Kim, Ki Su; Naydenova, Izabela; Khademhosseini, Ali; Hahn, Sei Kwang; Yun, Seok Hyun

    2016-01-01

    Analyte-sensitive hydrogels that incorporate optical structures have emerged as sensing platforms for point-of-care diagnostics. The optical properties of the hydrogel sensors can be rationally designed and fabricated through self-assembly, microfabrication or laser writing. The advantages of photonic hydrogel sensors over conventional assay formats include label-free, quantitative, reusable, and continuous measurement capability that can be integrated with equipment-free text or image display. This Review explains the operation principles of photonic hydrogel sensors, presents syntheses of stimuli-responsive polymers, and provides an overview of qualitative and quantitative readout technologies. Applications in clinical samples are discussed, and potential future directions are identified.

  8. The NASA Applied Sciences Program: Volcanic Ash Observations and Applications

    NASA Technical Reports Server (NTRS)

    Murray, John J.; Fairlie, Duncan; Green, David; Haynes, John; Krotkov, Nickolai; Meyer, Franz; Pavolonis, Mike; Trepte, Charles; Vernier, Jean-Paul

    2016-01-01

    Since 2000, the NASA Applied Sciences Program has been actively transitioning observations and research to operations. Particular success has been achieved in developing applications for NASA Earth Observing Satellite (EOS) sensors, integrated observing systems, and operational models for volcanic ash detection, characterization, and transport. These include imager applications for sensors such as the MODerate resolution Imaging SpectroRadiometer (MODIS) on NASA Terra and Aqua satellites, and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the NASA/NOAA Suomi NPP satellite; sounder applications for sensors such as the Atmospheric Infrared Sounder (AIRS) on Aqua, and the Cross-track Infrared Sounder (CrIS) on Suomi NPP; UV applications for the Ozone Mapping Instrument (OMI) on the NASA Aura Satellite and the Ozone Mapping Profiler Suite (OMPS) on Suomi NPP, including direct readout capabilities from OMI and OMPS in Alaska (GINA) and Finland (FMI); and lidar applications from the CALIOP instrument coupled with the imaging IR sensor on the NASA/CNES CALIPSO satellite. Many of these applications are in the process of being transferred to the Washington and Alaska Volcanic Ash Advisory Centers (VAAC) where they support operational monitoring and advisory services. Some have also been accepted, transitioned and adapted for direct, onboard, automated product production in future U.S. operational satellite systems including GOES-R, and in automated volcanic cloud detection, characterization and alerting tools at the VAACs. While other observations and applications remain to be developed for the current constellation of NASA EOS sensors and integrated with observing and forecast systems, future requirements and capabilities for volcanic ash observations and applications are also being developed. Many of these are based on technologies currently being tested on NASA aircraft, Unmanned Aerial Systems (UAS) and balloons. All of these efforts and the potential advances that will be realized by integrating them are shared in this presentation.

  9. Wedge imaging spectrometer: application to drug and pollution law enforcement

    NASA Astrophysics Data System (ADS)

    Elerding, George T.; Thunen, John G.; Woody, Loren M.

    1991-08-01

    The Wedge Imaging Spectrometer (WIS) represents a novel implementation of an imaging spectrometer sensor that is compact and rugged and, therefore, suitable for use in drug interdiction and pollution monitoring activities. With performance characteristics equal to comparable conventional imaging spectrometers, it would be capable of detecting and identifying primary and secondary indicators of drug activities and pollution events. In the design, a linear wedge filter is mated to an area array of detectors to achieve two-dimensional sampling of the combined spatial/spectral information passed by the filter. As a result, the need for complex and delicate fore optics is avoided, and the size and weight of the instrument are approximately 50% that of comparable sensors. Spectral bandwidths can be controlled to provide relatively narrow individual bandwidths over a broad spectrum, including all visible and infrared wavelengths. This sensor concept has been under development at the Hughes Aircraft Co. Santa Barbara Research Center (SBRC), and hardware exists in the form of a brassboard prototype. This prototype provides 64 spectral bands over the visible and near infrared region (0.4 to 1.0 micrometers). Implementation issues have been examined, and plans have been formulated for packaging the sensor into a test-bed aircraft for demonstration of capabilities. Two specific areas of utility to the drug interdiction problem are isolated: (1) detection and classification of narcotic crop growth areas and (2) identification of coca processing sites, cued by the results of broad-area survey and collateral information. Vegetation stress and change-detection processing may also be useful in distinguishing active from dormant airfields. For pollution monitoring, a WIS sensor could provide data with fine spectral and spatial resolution over suspect areas. On-board or ground processing of the data would isolate the presence of polluting effluents, effects on vegetation caused by airborne or other pollutants, or anomalous ground conditions indicative of buried or dumped toxic materials.

  10. A survey on sensor coverage and visual data capturing/processing/transmission in wireless visual sensor networks.

    PubMed

    Yap, Florence G H; Yen, Hong-Hsu

    2014-02-20

    Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated, so intelligent schemes are required to capture, process and transmit visual data within the limited resources (hardware capability and bandwidth) of WVSNs. WVSNs introduce new multi-disciplinary research opportunities on topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs.

  11. A Survey on Sensor Coverage and Visual Data Capturing/Processing/Transmission in Wireless Visual Sensor Networks

    PubMed Central

    Yap, Florence G. H.; Yen, Hong-Hsu

    2014-01-01

    Wireless Visual Sensor Networks (WVSNs), where camera-equipped sensor nodes can capture, process and transmit image/video information, have become an important new research area. As compared to traditional wireless sensor networks (WSNs) that can only transmit scalar information (e.g., temperature), the visual data in WVSNs enable much wider applications, such as visual security surveillance and visual wildlife monitoring. However, as compared to the scalar data in WSNs, visual data is much bigger and more complicated, so intelligent schemes are required to capture, process and transmit visual data within the limited resources (hardware capability and bandwidth) of WVSNs. WVSNs introduce new multi-disciplinary research opportunities on topics that include visual sensor hardware, image and multimedia capture and processing, wireless communication and networking. In this paper, we survey existing research efforts on the visual sensor hardware, visual sensor coverage/deployment, and visual data capture/processing/transmission issues in WVSNs. We conclude that WVSN research is still at an early stage and there are still many open issues that have not been fully addressed. More novel multi-disciplinary, cross-layered, distributed and collaborative solutions should be devised to tackle these challenging issues in WVSNs. PMID:24561401

  12. Optimization of CMOS image sensor utilizing variable temporal multisampling partial transfer technique to achieve full-frame high dynamic range with superior low light and stop motion capability

    NASA Astrophysics Data System (ADS)

    Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay

    2018-03-01

    Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve high dynamic range video with stop motion. This technology improves low light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using Taiwan Semiconductor Manufacturing Company's 65 nm 1.1 μm pixel technology in a 1 megapixel test chip array, and is compared with a traditional 4× oversampling technique using full charge transfer to show the low light SNR superiority of the presented technology.

  13. WE-AB-BRA-11: Improved Imaging of Permanent Prostate Brachytherapy Seed Implants by Combining an Endorectal X-Ray Sensor with a CT Scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, J; Matthews, K; Jia, G

    Purpose: To test feasibility of the use of a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120–200 mA). The dental sensor was then removed from the phantoms, and normal helical CT and scout (0 degree) scans using typical parameters for pelvic CT (120 kV, auto-mA) were collected. A shift-and-add tomosynthesis algorithm was developed to localize seed plane locations normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernible using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernible using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher resolution, lower dose images of 1) permanent brachytherapy seed implants for post-implant dosimetry and 2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding. Disclosure: XDR Radiography has loaned our research group the digital x-ray detector used in this work. CoI: None.
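
    Shift-and-add tomosynthesis, named in the abstract, reinforces objects in a chosen depth plane by registering and summing the limited-angle projections; seeds at other depths smear out. A minimal sketch, under the assumption that per-plane registration shifts have already been computed from the scan geometry:

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_add(projections, shifts_per_plane):
    # projections: list of 2-D x-ray frames from different arc angles.
    # shifts_per_plane[z] gives the (row, col) shift that brings depth
    # plane z into registration in every projection; summing then
    # reinforces objects at depth z and blurs everything else.
    slices = []
    for plane_shifts in shifts_per_plane:
        acc = np.zeros_like(projections[0], dtype=float)
        for proj, s in zip(projections, plane_shifts):
            acc += shift(proj, s, order=1, mode='nearest')
        slices.append(acc / len(projections))
    return np.stack(slices)  # quasi-3-D volume: one slice per depth plane
```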

  14. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.

  15. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210
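
    A minimal sketch of one way the two-signal merge described in these records could work: use the high-gain sample where it is unsaturated and otherwise rescale the low-gain sample by the known gain ratio. The names and the hard threshold are assumptions; the actual sensor blends and linearizes on chip.

```python
import numpy as np

def merge_sehdr(high_gain, low_gain, gain_ratio, sat_level):
    # high_gain: DN from the high pixel gain / high analog gain path
    # (low noise, clips early); low_gain: DN from the low pixel gain /
    # low analog gain path. Both come from the same exposure, which is
    # why the motion artifacts of multi-exposure HDR are avoided.
    high = high_gain.astype(float)
    low = low_gain.astype(float) * gain_ratio  # bring into one linear domain
    return np.where(high_gain < sat_level, high, low)
```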

  16. Compact LWIR sensors using spatial interferometric technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bingham, Adam L.; Lucey, Paul G.; Knobbe, Edward T.

    2017-05-01

    Recent developments in reducing the cost and mass of hyperspectral sensors have enabled more widespread use for short range compositional imaging applications. HSI in the long wave infrared (LWIR) is of interest because it is sensitive to spectral phenomena not accessible at other wavelengths, and because of its inherent thermal imaging capability. At Spectrum Photonics we have pursued compact LWIR hyperspectral sensors using both microbolometer arrays and compact cryogenic detector cameras. Our microbolometer-based systems are principally aimed at short standoff applications; they currently weigh 10-15 lbs and measure approximately 20x20x10 cm, with sensitivity in the 1-2 microflick range and imaging times on the order of 30 seconds. Our systems that employ cryogenic arrays are aimed at medium standoff ranges such as nadir-looking missions from UAVs. Recent work with cooled sensors has focused on Strained Layer Superlattice (SLS) technology, as these detector arrays are undergoing rapid improvements and have some advantages compared to HgCdTe detectors in terms of calibration stability. These sensors include full on-board processing and sensor stabilization, so they are somewhat larger than the microbolometer systems, but could be adapted to much more compact form factors. We will review our recent progress in both these application areas.

  17. Finite element model for MOI applications using A-V formulation

    NASA Astrophysics Data System (ADS)

    Xuan, L.; Shanker, B.; Udpa, L.; Shih, W.; Fitzpatrick, G.

    2001-04-01

    Magneto-optic imaging (MOI) is a relatively new sensor application that extends bubble memory technology to NDT, producing easy-to-interpret, real-time analog images. MOI systems use a magneto-optic (MO) sensor to produce analog images of magnetic flux leakage from surface and subsurface defects. The instrument's capability of detecting the relatively weak magnetic fields associated with subsurface defects depends on the sensitivity of the magneto-optic sensor. The availability of a theoretical model that can simulate MOI system performance is extremely important for optimization of the MOI sensor and hardware system. A nodal finite element model based on a magnetic vector potential formulation has been developed for simulating the MOI phenomenon. This model has been used for predicting the magnetic fields in a simple test geometry with corrosion dome defects. In the case of test samples with multiple discontinuities, a more robust model using the magnetic vector potential Ā and the electrical scalar potential V is required. In this paper, a finite element model based on the A-V formulation is developed to model complex circumferential cracks under aluminum rivets in dimpled countersinks.

  18. An SSM/I radiometer simulator for studies of microwave emission from soil. [Special Sensor Microwave/Imager

    NASA Technical Reports Server (NTRS)

    Galantowicz, J. F.; England, A. W.

    1992-01-01

    A ground-based simulator of the Defense Meteorological Satellite Program Special Sensor Microwave/Imager (DMSP SSM/I) is described, and its integration with micrometeorological instrumentation for an investigation of microwave emission from moist and frozen soils is discussed. The simulator consists of three single polarization radiometers which are capable of both Dicke radiometer and total power radiometer modes of operation. The radiometers are designed for untended operation through a local computer and a daily telephone link to a laboratory. The functional characteristics of the radiometers are described, together with their field deployment configuration and an example of performance parameters.

  19. OmniBird: a miniature PTZ NIR sensor system for UCAV day/night autonomous operations

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Li, Hui

    2007-04-01

    Through SBIR funding from NAVAIR, we have successfully developed an innovative, miniaturized, and lightweight PTZ UCAV imager called OmniBird for UCAV taxiing. The OmniBird fits in a small space. The designed zoom capability allows it to acquire focused images of targets at ranges from 10 to 250 feet. The innovative panning mechanism also allows the system to have a field of view of +/- 100 degrees within the limited available space (6 cubic inches). The integrated optics, camera sensor, and mechanics solution allows the OmniBird to stay optically aligned and shock-proof in harsh environments.

  20. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors comprises those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
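
    The approximately-linear FPN calibration described above can be pictured as a per-pixel degree-1 polynomial fit against a reference response, after which correction is pure arithmetic (and so maps directly to fixed point). A minimal sketch with assumed array shapes; this illustrates the idea, not the paper's exact implementation:

```python
import numpy as np

def calibrate_fpn(responses, reference):
    # responses: (num_frames, num_pixels) raw pixel outputs over a set
    # of uniform stimuli; reference: (num_frames,) target response,
    # e.g., the array mean per stimulus. Because every response is
    # monotonic in the stimulus, a per-pixel linear fit suffices to
    # flatten fixed pattern noise even though each pixel is nonlinear.
    gains = np.empty(responses.shape[1])
    offsets = np.empty(responses.shape[1])
    for p in range(responses.shape[1]):
        gains[p], offsets[p] = np.polyfit(responses[:, p], reference, 1)
    return gains, offsets

def correct_fpn(frame, gains, offsets):
    # Correction is multiply-add only, hence cheap in fixed point.
    return gains * frame + offsets
```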

  1. Swap intensified WDR CMOS module for I2/LWIR fusion

    NASA Astrophysics Data System (ADS)

    Ni, Yang; Noguier, Vincent

    2015-05-01

    The combination of a high-resolution visible/near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.); it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm³ cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also seen steady progress in readout noise, dark current, resolution, and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range; and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of the image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range, and power consumption. In this paper, we present a SWAP intensified Wide Dynamic Range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new-generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube, with <500 mW power consumption.

  2. Resolution enhancement in integral microscopy by physical interpolation.

    PubMed

    Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel

    2015-08-01

    Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of spatial resolution. In this contribution we report a technique that increases the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.

  3. Resolution enhancement in integral microscopy by physical interpolation

    PubMed Central

    Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel

    2015-01-01

    Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of spatial resolution. In this contribution we report a technique that increases the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens. PMID:26309749
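
    One way to picture the double-shot gain: if the glass plate shifts the second exposure by half a pixel pitch diagonally, the two recordings interleave on a quincunx lattice, doubling the sampling density (a factor of √2 per axis). The sketch below only illustrates this interleave with a crude nearest-sample fill; the actual reconstruction in the paper operates on the refocused depth images, and all names are assumptions.

```python
import numpy as np

def interleave_double_shot(img_a, img_b):
    # img_a, img_b: two (h, w) recordings, the second taken after a
    # half-pitch diagonal shift. Placing them on a quincunx lattice
    # doubles the sampling density.
    h, w = img_a.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = img_a
    out[1::2, 1::2] = img_b
    # Crude fill of the remaining lattice sites from horizontal
    # neighbors (np.roll wraps at the edges; real code would not).
    out[0::2, 1::2] = (img_a + np.roll(img_a, -1, axis=1)) / 2
    out[1::2, 0::2] = (img_b + np.roll(img_b, 1, axis=1)) / 2
    return out
```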

  4. Remote observations of reentering spacecraft including the space shuttle orbiter

    NASA Astrophysics Data System (ADS)

    Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, Jay H.; Gibson, David M.

    Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.

  5. Remote Observations of Reentering Spacecraft Including the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, Jay H.; Gibson, David

    2013-01-01

    Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.

  6. Evaluation of multi-resolution satellite sensors for assessing water quality and bottom depth of Lake Garda.

    PubMed

    Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E

    2014-12-15

    In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye data. Images gathered on 10 June 2014 were corrected for atmospheric effects with the 6SV code. The remote sensing reflectance (Rrs) values computed from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.
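
    The spectral inversion step amounts to finding the constituent concentrations whose forward-modeled Rrs best matches the observed spectrum. A generic skeleton is sketched below; the forward model here is a toy linear stand-in, not the authors' lake-calibrated bio-optical model, and all names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def invert_rrs(rrs_obs, forward_rrs, x0):
    # forward_rrs(x) -> modeled Rrs spectrum for concentrations x
    # (e.g., chlorophyll, suspended matter, CDOM). A calibrated
    # bio-optical model would be supplied here.
    fit = least_squares(lambda x: forward_rrs(x) - rrs_obs, x0,
                        bounds=(0.0, np.inf))
    return fit.x

# Toy usage with a linear stand-in model (illustration only):
basis = np.array([[0.02, 0.05, 0.01],
                  [0.03, 0.04, 0.02],
                  [0.01, 0.06, 0.01]])      # 3 bands x 3 constituents
truth = np.array([2.0, 1.5, 0.3])
estimate = invert_rrs(basis @ truth, lambda x: basis @ x, x0=np.ones(3))
```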

  7. EOID Model Validation and Performance Prediction

    DTIC Science & Technology

    2002-09-30

    Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The two most prominent technologies in this area

  8. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view (FOV) of the system. As a result, these systems provide a wide FOV while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the channels of the imaging system can be designed so that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from captured images of a slanted-edge target. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.
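
    The slanted-edge MTF estimate mentioned at the end can be sketched as: differentiate the edge-spread function into a line-spread function, then take the Fourier-transform magnitude. The version below is a simplified single-row variant (full slanted-edge analysis, e.g. ISO 12233, projects pixels along the fitted edge for super-resolution); the synthetic edge profile is an assumption for the demo.

```python
import numpy as np
from math import erf

def mtf_from_edge(edge_profile):
    """Estimate MTF from a 1-D edge-spread function (ESF).

    Simplified slanted-edge method: LSF = d(ESF)/dx, MTF = |FFT(LSF)|,
    normalized to 1 at zero frequency.
    """
    lsf = np.gradient(edge_profile.astype(float))
    lsf *= np.hanning(lsf.size)          # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic blurred edge (error-function profile) standing in for a
# captured row of a slanted-edge target.
x = np.arange(128)
esf = np.array([0.5 * (1 + erf((xi - 64) / 3.0)) for xi in x])
mtf = mtf_from_edge(esf)
freqs = np.fft.rfftfreq(x.size)          # cycles per pixel
print("MTF at 0.1 cyc/px:", np.interp(0.1, freqs, mtf).round(3))
```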

  9. Geometric correction and digital elevation extraction using multiple MTI datasets

    USGS Publications Warehouse

    Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.

    2007-01-01

    Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collection is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality makes MTI a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
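
    The quoted accuracy figure is an RMS difference between extracted elevations and surveyed check points; a minimal sketch of that bookkeeping, with invented elevations, follows.

```python
import numpy as np

# Hypothetical check-point elevations (m): surveyed truth vs. values
# sampled from the extracted DEM at the same ground coordinates.
truth = np.array([512.1, 498.7, 530.4, 505.2, 520.0])
dem   = np.array([516.8, 493.2, 536.0, 500.9, 527.1])

residuals = dem - truth
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"RMS elevation error over {truth.size} points: {rmse:.1f} m")
```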

  10. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rates (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and a lower false alarm rate. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  11. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper the development of a system based on solid-state sensors for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced to cover an area of approximately 300 by 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP controlling units and streamed to a host computer via a USB 2.0 interface, and the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium; it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or current-carrying geometries. Although this system was initially developed for the NDE arena, it could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  12. Electric Potential and Electric Field Imaging with Dynamic Applications: 2017 Research Award Innovation

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2017-01-01

    The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for illuminating volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface phonon-EFI imaging technology. Initial results from real-time imaging of combustion and ion flow, and their measurement complications, will be discussed. These new EFI capabilities are demonstrated to characterize electric charge distributions, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of structures, tether integrity, organic molecular memory, atmospheric science, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  13. Radiometric and geometric assessment of data from the RapidEye constellation of satellites

    USGS Publications Warehouse

    Chander, Gyanesh; Haque, Md. Obaidul; Sampath, Aparajithan; Brunn, A.; Trosset, G.; Hoffmann, D.; Roloff, S.; Thiele, M.; Anderson, C.

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface using imagery acquired from multiple spaceborne imaging sensors. The RapidEye (RE) satellite constellation acquires high-resolution satellite images covering the entire globe within a very short period of time, using sensors identical in construction and cross-calibrated to each other. To evaluate the RE high-resolution Multi-spectral Imager (MSI) sensor capabilities, a cross-comparison between the RE constellation of sensors was performed, first using image statistics based on large common areas observed over pseudo-invariant calibration sites (PICS) by the sensors and, second, by comparing the on-orbit radiometric calibration temporal trending over a large number of calibration sites. For any spectral band, the individual responses measured by the five satellites of the RE constellation were found to differ by less than 2–3% from the average constellation response, depending on the method used for evaluation. A geometric assessment was also performed to study the positional accuracy and relative band-to-band (B2B) alignment of the image data sets. The positional accuracy was assessed by comparing the RE imagery against high-resolution aerial imagery, while the B2B characterization was performed by registering each band against every other band to ensure that proper band alignment is provided in an image product. The B2B results indicate that the internal alignments of the five RE bands are in agreement, with bands typically registered to within 0.25 pixels of each other or better.
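
    The radiometric cross-comparison reduces, in essence, to each satellite's percent deviation from the constellation-mean response over common PICS targets; the sketch below uses made-up top-of-atmosphere reflectance statistics, not RapidEye calibration data.

```python
import numpy as np

# Hypothetical mean TOA reflectance of one PICS site, per satellite
# (rows: RE1..RE5) and per spectral band (columns: blue..NIR).
resp = np.array([
    [0.231, 0.268, 0.301, 0.342, 0.401],
    [0.235, 0.270, 0.299, 0.345, 0.404],
    [0.228, 0.265, 0.304, 0.340, 0.398],
    [0.233, 0.271, 0.302, 0.344, 0.403],
    [0.230, 0.267, 0.300, 0.341, 0.400],
])

constellation_mean = resp.mean(axis=0)
pct_dev = 100 * (resp - constellation_mean) / constellation_mean
for i, row in enumerate(pct_dev, start=1):
    print(f"RE{i}: max |deviation from constellation mean| = {np.abs(row).max():.2f}%")
```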

  14. Surface chemistry and morphology in single particle optical imaging

    NASA Astrophysics Data System (ADS)

    Ekiz-Kanik, Fulya; Sevenler, Derin Deniz; Ünlü, Neşe Lortlar; Chiari, Marcella; Ünlü, M. Selim

    2017-05-01

    Biological nanoparticles such as viruses and exosomes are important biomarkers for a range of medical conditions, from infectious diseases to cancer. Biological sensors that detect whole viruses and exosomes with high specificity, yet without additional labeling, are promising because they reduce the complexity of sample preparation and may improve measurement quality by retaining information about the nanoscale physical structure of the bio-nanoparticle (BNP). Towards this end, a variety of BNP biosensor technologies have been developed, several of which are capable of enumerating the precise number of detected viruses or exosomes and analyzing the physical properties of each individual particle. Optical imaging techniques are promising candidates among a broad range of label-free nanoparticle detectors. These imaging BNP sensors detect the binding of single nanoparticles on a flat surface functionalized with a specific capture molecule or an array of multiplexed capture probes. The functionalization step confers all molecular specificity for the sensor's target but can introduce an unforeseen problem: a rough and inhomogeneous surface coating can be a source of noise, as these sensors detect small local changes in optical refractive index. In this paper, we review several optical technologies for label-free BNP detection with a focus on imaging systems. We compare surface-imaging methods including dark-field imaging, surface plasmon resonance imaging and interference reflectance imaging. We discuss the importance of ensuring consistently uniform and smooth surface coatings of capture molecules for these types of biosensors and finally summarize several methods that have been developed towards addressing this challenge.

  15. Standoff chemical D&Id with extended LWIR hyperspectral imaging spectroradiometer

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lavoie, Hugo; Bouffard, François; Thériault, Jean-Marc; Vallieres, Christian; Roy, Claude; Dubé, Denis

    2013-05-01

    Standoff detection and identification (D&Id) of unknown volatile chemicals, such as chemical pollutants and the consequences of industrial incidents, is increasingly desired by first responders and for environmental monitoring. On-site gas detection sensors are commercially available, and several of them can even detect more than one chemical species; however, only a few of them are capable of detecting a wide variety of gases at long, safe distances. The ABB Hyperspectral Imaging Spectroradiometer (MR-i), configured for gas detection, detects and identifies a wide variety of chemical species, including toxic industrial chemicals (TICs) and surrogates, several kilometers away from the sensor. This configuration, called iCATSI (improved Compact Atmospheric Sounding Interferometer), is a standoff passive system. The modularity of the MR-i platform allows optimization of the detection configuration with a 256 x 256 focal plane array imager or a line-scanning imager, both covering the long-wave IR atmospheric window up to 14 μm. The uniqueness of its extended LWIR cut-off enables detection of more chemicals and provides a higher probability of detection than conventional LWIR sensors.

  16. Spatial noise in microdisplays for near-to-eye applications

    NASA Astrophysics Data System (ADS)

    Hastings, Arthur R., Jr.; Draper, Russell S.; Wood, Michael V.; Fellowes, David A.

    2011-06-01

    Spatial noise in imaging systems has been characterized and its impact on image quality metrics has been addressed primarily with respect to the introduction of this noise at the sensor component. However, sensor fixed pattern noise is not the only source of fixed pattern noise in an imaging system. Display fixed pattern noise cannot be easily mitigated in processing and, therefore, must be addressed. In this paper, a thorough examination of the amount and the effect of display fixed pattern noise is presented. The specific manifestation of display fixed pattern noise is dependent upon the display technology. Utilizing a calibrated camera, US Army RDECOM CERDEC NVESD has developed a microdisplay (μdisplay) spatial noise data collection capability. Noise and signal power spectra were used to characterize the display signal to noise ratio (SNR) as a function of spatial frequency analogous to the minimum resolvable temperature difference (MRTD) of a thermal sensor. The goal of this study is to establish a measurement technique to characterize μdisplay limiting performance to assist in proper imaging system specification.
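
    The display SNR curve described above is, in outline, a ratio of signal and noise power spectra. A minimal sketch, using synthetic frames (a flat field carrying only fake fixed-pattern noise, and a sinusoidal test pattern at a known spatial frequency); the pattern amplitudes and noise level are invented, and the real NVESD procedure uses calibrated camera measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
fpn = rng.normal(0.0, 2.0, (n, n))        # fake display fixed-pattern noise

xx = np.arange(n)
f0 = 16 / n                                # test frequency, exactly on an FFT bin
flat = 100 + fpn                           # uniform field: FPN only
pattern = 100 + 20 * np.sin(2 * np.pi * f0 * xx)[None, :] + fpn

def row_power_spectrum(img):
    """Mean 1-D power spectrum across rows, per-row mean removed."""
    rows = img - img.mean(axis=1, keepdims=True)
    return np.mean(np.abs(np.fft.rfft(rows, axis=1)) ** 2, axis=0)

noise_ps = row_power_spectrum(flat)
signal_ps = row_power_spectrum(pattern) - noise_ps   # remove FPN contribution
snr = np.sqrt(np.maximum(signal_ps, 0.0) / np.maximum(noise_ps, 1e-12))

k = int(round(f0 * n))                     # FFT bin of the test tone
print(f"display SNR at {f0:.4f} cyc/px: {snr[k]:.1f}")
```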

  17. The application of Fresnel zone plate based projection in optofluidic microscopy.

    PubMed

    Wu, Jigang; Cui, Xiquan; Lee, Lap Man; Yang, Changhuei

    2008-09-29

    Optofluidic microscopy (OFM) is a novel technique for low-cost, high-resolution on-chip microscopy imaging. In this paper we report the use of Fresnel zone plate (FZP) based projection in OFM as a cost-effective and compact means for projecting the transmission through an OFM's aperture array onto a sensor grid. We demonstrate this approach by employing an FZP (diameter = 255 μm, focal length = 800 μm) that has been patterned onto a glass slide to project the transmission from an array of apertures (diameter = 1 μm, separation = 10 μm) onto a CMOS sensor. We are able to resolve the contributions from 44 apertures on the sensor under illumination from a HeNe laser (wavelength = 633 nm). The imaging quality of the FZP determines the effective field-of-view (related to the number of resolvable transmissions from apertures) but not the image resolution of such an OFM system, a key distinction from conventional microscope systems. We demonstrate the capability of the integrated system by flowing the protist Euglena gracilis across the aperture array microfluidically and performing OFM imaging of the samples.
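
    For orientation, a thin FZP's zone radii follow r_n ≈ √(nλf), so the quoted diameter, focal length, and HeNe wavelength fix the zone count and outermost zone width. The check below uses that standard approximation only; it is not a statement about the authors' actual mask layout.

```python
import numpy as np

wavelength = 633e-9   # HeNe laser (m)
focal_len = 800e-6    # 800 um focal length, as quoted
diameter = 255e-6     # 255 um FZP diameter, as quoted

# Standard thin-FZP zone radii: r_n = sqrt(n * lambda * f)
n = np.arange(1, 200)
r = np.sqrt(n * wavelength * focal_len)
zones = int(np.sum(r <= diameter / 2))
print(f"zones fitting in the aperture: {zones}")
print(f"outermost zone width: {(r[zones-1] - r[zones-2]) * 1e6:.2f} um")
```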

  18. Review of infrared technology in The Netherlands

    NASA Astrophysics Data System (ADS)

    de Jong, Arie N.

    1993-11-01

    The use of infrared sensors in the Netherlands is substantial, with users in a variety of disciplines, military as well as civil. This need for IR sensors implied a long history of IR technology development, the result of which was a large technological capability allowing the realization of IR hardware: specialized measuring equipment, engineering development models, and prototype and production sensors for different applications. These applications range from small-size, local radiometry up to large space-borne imaging. Large-scale production of IR sensors has been realized for army vehicles, and IR sensors have now been introduced in all of the armed forces. Facilities have been built to test the performance of these sensors, and models have been developed to predict the performance of new sensors. A great effort has been spent on atmospheric research, leading to knowledge of the atmospheric and background limitations of IR sensors.

  19. Information-based approach to performance estimation and requirements allocation in multisensor fusion for target recognition

    NASA Astrophysics Data System (ADS)

    Harney, Robert C.

    1997-03-01

    A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
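
    The paper's information-content metric is not reproduced here, but the flavor of "quantifying the information content of sensor data" can be illustrated with a Shannon-entropy estimate over an image histogram. This is a generic stand-in of our own, not Harney's actual formulation.

```python
import numpy as np

def image_entropy_bits(img, levels=256):
    """Shannon entropy (bits/pixel) of an image's gray-level histogram,
    a common stand-in for 'extractable information content'."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
flat = np.full((64, 64), 128)                 # featureless scene: 0 bits/px
textured = rng.integers(0, 256, (64, 64))     # high-variance scene: ~8 bits/px
print("flat scene    :", image_entropy_bits(flat), "bits/px")
print("textured scene:", round(image_entropy_bits(textured), 2), "bits/px")
```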

  20. Lightning Imaging Sensor (LIS) on the International Space Station (ISS): Launch, Installation, Activation and First Results

    NASA Technical Reports Server (NTRS)

    Blakeslee, R. J.; Christian, H. J.; Mach, D. M.; Buechler, D. E.; Wharton, N. A.; Stewart, M. F.; Ellett, W. T.; Koshak, W. J.; Walker, T. D.; Virts, K.

    2017-01-01

    Mission: Fly a flight-spare LIS (Lightning Imaging Sensor) on the ISS to take advantage of unique capabilities provided by the ISS (e.g., high inclination, real-time data); integrate LIS as a hosted payload on the DoD Space Test Program-Houston 5 (STP-H5) mission and launch on a SpaceX rocket for a minimum 2-year mission. Measurement: NASA and its partners developed and demonstrated the effectiveness and value of using space-based lightning observations as a remote sensing tool; LIS measures lightning (amount, rate, radiant energy) with storm-scale resolution, millisecond timing, and high detection efficiency, with no land-ocean bias. Benefit: LIS on the ISS will extend the TRMM (Tropical Rainfall Measuring Mission) time series observations, expand latitudinal coverage, provide real-time data to operational users, and enable cross-sensor calibration.

  1. Wide-area littoral discreet observation: success at the tactical edge

    NASA Astrophysics Data System (ADS)

    Toth, Susan; Hughes, William; Ladas, Andrew

    2012-06-01

    In June 2011, the United States Army Research Laboratory (ARL) participated in Empire Challenge 2011 (EC-11). EC-11 was United States Joint Forces Command's (USJFCOM) annual live, joint and coalition intelligence, surveillance and reconnaissance (ISR) interoperability demonstration under the sponsorship of the Under Secretary of Defense for Intelligence (USD(I)). EC-11 consisted of a series of ISR interoperability events, using a combination of modeling & simulation, laboratory and live-fly events. Wide-area Littoral Discreet Observation (WALDO) was ARL's maritime/littoral capability. WALDO met a USD(I) directive that EC-11 have a maritime component, and WALDO was the primary player in the maritime scenario conducted at Camp Lejeune, North Carolina. The WALDO effort demonstrated the utility of a networked, layered sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity in a littoral or riverine environment. In addition to an embedded analytical capability, the sensor array and control infrastructure consisted of the Oriole acoustic sensor, the iScout unattended ground sensor (UGS), the OmniSense UGS, the Compact Radar and the Universal Distributed Management System (UDMS), which included the Proxy Skyraider, an optionally manned aircraft mounting both wide and narrow FOV EO/IR imaging sensors. The capability seeded a littoral area with riverine and unattended sensors in order to demonstrate the utility of a Wide Area Sensor (WAS) capability in a littoral environment focused on maritime surveillance activities. The sensors provided a cue for WAS placement/orbit, and a narrow field-of-view sensor was used to focus on more discreet activities within the WAS footprint. Additionally, the capability experimented with novel WAS orbits to determine whether there are more optimal orbits for WAS collection in a littoral environment. The demonstration objectives for WALDO at EC-11 were to:
    * Demonstrate a networked, layered, multi-modal sensor array deployed in a maritime littoral environment, focusing on maritime surveillance targeting counter-drug, counter-piracy and suspect activity
    * Assess the utility of a Wide Area Surveillance (WAS) sensor in a littoral environment focused on maritime surveillance activities
    * Demonstrate the effectiveness of using UGS sensors to cue WAS sensor tasking
    * Employ a narrow field-of-view full motion video (FMV) sensor package, collocated with the WAS, to conduct more discreet observation of potential items of interest when cued by near-real-time data from UGS or observers
    * Couple the ARL Oriole sensor with other-modality UGS networks in a ground-layer ISR capability, and incorporate data collected from aerial sensors with a GEOINT base layer to form a fused product
    * Swarm multiple aerial or naval platforms to prosecute single or multiple targets
    * Track fast-moving surface vessels in littoral areas
    * Disseminate time-sensitive, high-value data to users at the tactical edge
    In short, we sought to answer the following question: how do you layer, control and display disparate sensors and sensor modalities in such a way as to facilitate appropriate sensor cross-cueing, data integration, and analyst control to effectively monitor activity in a littoral (or novel) environment?

  2. UW Imaging of Seismic-Physical-Models in Air Using Fiber-Optic Fabry-Perot Interferometer.

    PubMed

    Rong, Qiangzhou; Hao, Yongxin; Zhou, Ruixiang; Yin, Xunli; Shao, Zhihua; Liang, Lei; Qiao, Xueguang

    2017-02-17

    A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for the ultrasound wave (UW) imaging of seismic physical models. The sensor probe comprises a single-mode fiber (SMF) inserted into a ceramic tube terminated by an ultra-thin gold film. The probe exhibits excellent UW sensitivity thanks to the nanolayer gold film and is thus capable of detecting a weak UW in an air medium. Furthermore, the compact sensor has a symmetrical structure, so it presents good directionality in UW detection. The spectral side-band filter technique is used for UW interrogation. After scanning the models with the sensing probe in air, two-dimensional (2D) images of four physical models are reconstructed.

  3. Near-infrared fluorescence goggle system with complementary metal–oxide–semiconductor imaging sensor and see-through display

    PubMed Central

    Liu, Yang; Njuguna, Raphael; Matthews, Thomas; Akers, Walter J.; Sudlow, Gail P.; Mondal, Suman; Tang, Rui

    2013-01-01

    We have developed a near-infrared (NIR) fluorescence goggle system based on the complementary metal–oxide–semiconductor active pixel sensor imaging and see-through display technologies. The fluorescence goggle system is a compact wearable intraoperative fluorescence imaging and display system that can guide surgery in real time. The goggle is capable of detecting fluorescence of indocyanine green solution in the picomolar range. Aided by NIR quantum dots, we successfully used the fluorescence goggle to guide sentinel lymph node mapping in a rat model. We further demonstrated the feasibility of using the fluorescence goggle in guiding surgical resection of breast cancer metastases in the liver in conjunction with NIR fluorescent probes. These results illustrate the diverse potential use of the goggle system in surgical procedures. PMID:23728180

  4. New space sensor and mesoscale data analysis

    NASA Technical Reports Server (NTRS)

    Hickey, John S.

    1987-01-01

    The developed Earth Science and Application Division (ESAD) system/software provides the research scientist with the following capabilities: an extensive database management capability to convert various experiment data types into a standard format; an interactive analysis and display package (AVE80); an interactive imaging/color graphics capability utilizing Apple III and IBM PC workstations integrated into the ESAD computer system; and a local and remote smart-terminal capability providing color video, graphics, and LaserJet output. Recommendations for updating and enhancing the performance of the ESAD computer system are listed.

  5. EOID System Model Validation, Metrics, and Synthetic Clutter Generation

    DTIC Science & Technology

    2003-09-30

    Our long-term goal is to accurately predict the capability of the current generation of laser-based underwater imaging sensors to perform Electro-Optic Identification (EOID) against relevant targets in a variety of realistic environmental conditions. The models will predict the impact of

  6. Model based approach to UXO imaging using the time domain electromagnetic method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavely, E.M.

    1999-04-01

    Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability, i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
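
    The statistical recombination via principal component analysis suggested at the end can be sketched in a few lines: stack co-registered sensor images as columns, remove the mean, and take the SVD, so that the first component collects the features correlated from image to image. The data below are synthetic stand-ins, not TDEM measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
npix = 64 * 64
target = rng.normal(0, 1, npix)            # shared (correlated) anomaly pattern

# Three co-registered "images" (e.g., TDEM time gates / magnetometer map),
# each = scaled target signature + independent noise. Synthetic data.
images = np.column_stack([1.0 * target + rng.normal(0, 0.5, npix),
                          0.8 * target + rng.normal(0, 0.5, npix),
                          1.2 * target + rng.normal(0, 0.5, npix)])

X = images - images.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * S[0]                       # first principal component image
var_explained = S[0] ** 2 / (S ** 2).sum()
corr = np.corrcoef(pc1, target)[0, 1]
print(f"PC1 explains {100 * var_explained:.0f}% of variance, "
      f"|corr with target| = {abs(corr):.2f}")
```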

  7. Perspective: Advanced particle imaging

    PubMed Central

    Chandler, David W.

    2017-01-01

    Since the first ion imaging experiment [D. W. Chandler and P. L. Houston, J. Chem. Phys. 87, 1445–1447 (1987)], demonstrating the capability of collecting an image of the photofragments from a unimolecular dissociation event and analyzing that image to obtain the three-dimensional velocity distribution of the fragments, the efficacy and breadth of application of the ion imaging technique have continued to improve and grow. With the addition of velocity mapping, ion/electron centroiding, and slice imaging techniques, the versatility and velocity resolution have been unmatched. Recent improvements in molecular beam, laser, sensor, and computer technology are allowing even more advanced particle imaging experiments, and eventually we can expect multi-mass imaging with covariance and full coincidence capability on a single-shot basis with repetition rates in the kilohertz range. This progress should further enable “complete” experiments—the holy grail of molecular dynamics—where all quantum numbers of reactants and products of a bimolecular scattering event are fully determined and even under our control. PMID:28688442

  8. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  9. In situ fluorescence imaging of localized corrosion with a pH-sensitive imaging fiber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panova, A.A.; Pantano, P.; Walt, D.R.

    1997-12-01

    A fiber optic pH sensor capable of both visualizing corrosion sites and measuring local chemical concentrations is applied to real-time corrosion monitoring. The imaging fiber's distal face, containing an immobilized pH-sensitive fluorescent dye, is brought into contact with metal surfaces submerged in aqueous buffers, and fluorescence images are acquired as a function of time. The observed changes in fluorescence, due to increases in pH at cathodic sites and decreases in pH at anodic sites, are indicative of localized corrosion rates.

  10. Piezo-based, high dynamic range, wide bandwidth steering system for optical applications

    NASA Astrophysics Data System (ADS)

    Karasikov, Nir; Peled, Gal; Yasinov, Roman; Feinstein, Alan

    2017-05-01

    Piezoelectric motors and actuators are characterized by direct drive, fast response, high positioning resolution and high mechanical power density. These properties are beneficial for optical devices such as gimbals, optical image stabilizers and mirror angular positioners. The range of applications includes sensor pointing systems, image stabilization, laser steering and more. This paper reports on the construction, properties and operation of three types of piezo-based building blocks for optical steering applications: a small gimbal and a two-axis OIS (Optical Image Stabilization) mechanism, both based on piezoelectric motors, and a flexure-assisted piezoelectric actuator for mirror angular positioning. The gimbal weighs less than 190 grams, has a wide angular span (solid angle > 2π) and allows for 80 micro-radian stabilization at a stabilization frequency of up to 25 Hz. The OIS is a closed-loop X-Y platform having a lateral positioning resolution better than 1 μm, a stabilization frequency of up to 25 Hz and a travel of +/-2 mm. It is used for laser steering or positioning of the image sensor, based on signals from a MEMS gyro sensor. The mirror positioner is based on three piezoelectric actuation axes for tip/tilt (each providing a 50 μm motion range), has a positioning resolution of 10 nm and is capable of a 1000 Hz response. A combination of the gimbal with the mirror positioner or the OIS stage is explored by simulation, indicating a <10 micro-radian stabilization capability under substantial perturbation. Simulation and experimental results are presented for a combined device facilitating both a wide steering angle range and high bandwidth.

  11. Static telescope aberration measurement using lucky imaging techniques

    NASA Astrophysics Data System (ADS)

    López-Marrero, Marcos; Rodríguez-Ramos, Luis Fernando; Marichal-Hernández, José Gil; Rodríguez-Ramos, José Manuel

    2012-07-01

    A procedure has been developed to compute static aberrations once the telescope PSF has been measured with the lucky imaging technique, using a star close to the object of interest as the point source probing the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm and then converted into the appropriate actuation information for a deformable mirror having a low actuator count but large stroke capability. The main advantage of this procedure is its ability to correct static aberrations in the specific pointing direction, without the need for a wavefront sensor.
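
    A compact sketch of the Gerchberg-Saxton loop used to turn a measured PSF into a pupil phase map: iterate between pupil and focal planes, enforcing the known aperture amplitude in one and the measured PSF amplitude in the other. The circular aperture and defocus-like test phase below are assumptions for the demo, not the telescope's actual pupil.

```python
import numpy as np

n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (np.hypot(xx, yy) < n // 4).astype(float)    # circular pupil amplitude

# Synthetic "measured" PSF amplitude from a known aberrated pupil,
# standing in for the lucky-imaging PSF estimate.
true_phase = 1.5 * ((xx / (n // 4)) ** 2 + (yy / (n // 4)) ** 2) * aperture
psf_amp = np.abs(np.fft.fftshift(np.fft.fft2(aperture * np.exp(1j * true_phase))))

# Gerchberg-Saxton: alternate between planes, imposing known amplitudes.
field = aperture.astype(complex)
for _ in range(200):
    focal = np.fft.fftshift(np.fft.fft2(field))
    focal = psf_amp * np.exp(1j * np.angle(focal))      # impose PSF amplitude
    pupil = np.fft.ifft2(np.fft.ifftshift(focal))
    field = aperture * np.exp(1j * np.angle(pupil))     # impose pupil amplitude

model_amp = np.abs(np.fft.fftshift(np.fft.fft2(field)))
mismatch = np.linalg.norm(model_amp - psf_amp) / np.linalg.norm(psf_amp)
print(f"relative PSF-amplitude mismatch after 200 iterations: {mismatch:.3f}")
```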

  12. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    PubMed

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently, CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer-scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer-scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer-scale stitched CMOS APS. For the first time, a per-pixel analysis of the electro-optical performance of a wafer-scale CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer-scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at the pixel level, at the stitching-block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency has been performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large-area CMOS APS suitable for a range of bio-medical applications.
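
    Per-pixel conversion gain and read noise of the kind reported here are commonly extracted by mean-variance (photon-transfer) analysis: for shot-noise-limited illumination, the variance in digital numbers equals the gain times the mean plus a read-noise term, so a straight-line fit yields both. The sketch below simulates that procedure on synthetic frames; it illustrates the generic method, not the paper's exact pipeline, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
gain_true, read_noise = 0.8, 2.0                 # DN/e- and DN rms (synthetic)
exposures = np.array([50, 100, 200, 400, 800])   # mean electrons per pixel

means, variances = [], []
for ne in exposures:
    # 64 frames of a 32x32 patch: shot noise (Poisson) plus read noise.
    frames = gain_true * rng.poisson(ne, size=(64, 32, 32)) \
             + rng.normal(0, read_noise, (64, 32, 32))
    means.append(frames.mean())
    variances.append(frames.var())

# Photon transfer relation: var(DN) = gain * mean(DN) + read_noise^2
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain: {slope:.2f} DN/e-  (true {gain_true})")
print(f"estimated read noise: {np.sqrt(max(intercept, 0)):.2f} DN rms")
```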

  13. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolutivity, and reduction of life-cycle cost. The general performance of the MGS is presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules, and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  14. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which has successfully integrated two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitivity camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of both cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescence images under high illumination in the field of laparoscopic surgery.

  15. Optical Microresonators for Sensing and Transduction: A Materials Perspective.

    PubMed

    Heylman, Kevin D; Knapper, Kassandra A; Horak, Erik H; Rea, Morgan T; Vanga, Sudheer K; Goldsmith, Randall H

    2017-08-01

    Optical microresonators confine light to a particular microscale trajectory, are exquisitely sensitive to their microenvironment, and offer convenient readout of their optical properties. Taken together, this is an immensely attractive combination that makes optical microresonators highly effective as sensors and transducers. Meanwhile, advances in material science, fabrication techniques, and photonic sensing strategies endow optical microresonators with new functionalities, unique transduction mechanisms, and in some cases, unparalleled sensitivities. In this progress report, the operating principles of these sensors are reviewed, and different methods of signal transduction are evaluated. Examples are shown of how choice of materials must be suited to the analyte, and how innovations in fabrication and sensing are coupled together in a mutually reinforcing cycle. A tremendously broad range of capabilities of microresonator sensors is described, from electric and magnetic field sensing to mechanical sensing, from single-molecule detection to imaging and spectroscopy, from operation at high vacuum to in live cells. Emerging sensing capabilities are highlighted and put into context in the field. Future directions are imagined, where the diverse capabilities laid out are combined and advances in scalability and integration are implemented, leading to the creation of a sensor unparalleled in sensitivity and information content.

  16. Luminescent sensing and imaging of oxygen: fierce competition to the Clark electrode.

    PubMed

    Wolfbeis, Otto S

    2015-08-01

    Luminescence-based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid-state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle-based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology.

  17. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, aimed at an embedded multi-core processing platform, a parallel model for stereo imaging is studied and verified. After analyzing the computational load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
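
    A minimal sketch of a two-stage, message-passing pipeline of the kind described, with queues carrying image tiles between a first processing stage and a second. Python's multiprocessing stands in here for the DSP message-transmission framework; the tile contents and per-stage workloads are placeholders.

```python
import multiprocessing as mp
import numpy as np

def stage1(q_in, q_out):
    """Stage 1: per-tile computation (placeholder for geometry processing)."""
    while (tile := q_in.get()) is not None:
        idx, data = tile
        q_out.put((idx, data * 2.0))        # stand-in for the real math
    q_out.put(None)                          # propagate shutdown downstream

def stage2(q_in, results):
    """Stage 2: per-tile aggregation (placeholder for resampling/output)."""
    while (tile := q_in.get()) is not None:
        idx, data = tile
        results.put((idx, float(data.sum())))

if __name__ == "__main__":
    q1, q2, out = mp.Queue(), mp.Queue(), mp.Queue()
    p1 = mp.Process(target=stage1, args=(q1, q2))
    p2 = mp.Process(target=stage2, args=(q2, out))
    p1.start(); p2.start()
    for i in range(8):                       # feed 8 image tiles
        q1.put((i, np.ones((256, 256))))
    q1.put(None)                             # end-of-stream marker
    p1.join(); p2.join()
    totals = sorted(out.get() for _ in range(8))
    print("tiles processed:", len(totals))
```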

  18. Use of anomalous thermal imaging effects for multi-mode systems control during crystal growth

    NASA Technical Reports Server (NTRS)

    Wargo, Michael J.

    1989-01-01

    Real-time image processing techniques, combined with multitasking computational capabilities, are used to establish thermal imaging as a multimode sensor for systems control during crystal growth. Whereas certain regions of the high temperature scene are presently unusable for quantitative determination of temperature, the anomalous information thus obtained is found to serve as a potentially low-noise source of other important systems control output. Using this approach, the light emission/reflection characteristics of the crystal, meniscus and melt system are used to infer the crystal diameter, and a linear regression algorithm is employed to determine the local diameter trend. These data are utilized as input for closed-loop control of crystal shape. No performance penalty in thermal imaging speed is paid for this added functionality. The approach to secondary (diameter) sensor design and the systems control structure are discussed. Preliminary experimental results are presented.
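
    The diameter-trend step reduces to an ordinary least-squares line over the most recent diameter estimates; a sketch with fabricated measurements follows.

```python
import numpy as np

# Hypothetical recent crystal-diameter estimates (mm) vs. time (s),
# as inferred from the meniscus brightness in the thermal image.
t = np.arange(10.0)                       # last 10 samples, 1 s apart
d = 50.0 + 0.08 * t + np.random.default_rng(5).normal(0, 0.02, t.size)

slope, intercept = np.polyfit(t, d, 1)    # local linear diameter trend
print(f"diameter trend: {slope * 60:+.2f} mm/min")
# A sustained positive trend above setpoint would be the cue for the
# closed-loop shape controller to adjust heater power or pull rate.
```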

  19. A sensitive optical micro-machined ultrasound sensor (OMUS) based on a silicon photonic ring resonator on an acoustical membrane.

    PubMed

    Leinders, S M; Westerveld, W J; Pozo, J; van Neer, P L M J; Snyder, B; O'Brien, P; Urbach, H P; de Jong, N; Verweij, M D

    2015-09-22

    With the increasing use of ultrasonography, especially in medical imaging, novel fabrication techniques together with novel sensor designs are needed to meet the requirements of future applications like three-dimensional intracardiac and intravascular imaging. These applications require arrays of many small elements to selectively record the sound waves coming from a certain direction. Here we present a proof of concept of an optical micro-machined ultrasound sensor (OMUS) fabricated with a semi-industrial CMOS fabrication line. The sensor is based on integrated photonics, which allows for elements with a small spatial footprint. We demonstrate that the first prototype is already capable of detecting pressures of 0.4 Pa, which matches the performance of state-of-the-art piezoelectric transducers while having a 65 times smaller spatial footprint. The sensor is MRI-compatible due to the lack of electrical wiring. Another important benefit of the use of integrated photonics is the easy interrogation of an array of elements. Hence, in future designs only two optical fibers are needed to interrogate an entire array, which minimizes the number of connections of smart catheters. The demonstrated OMUS has potential applications in medical ultrasound imaging, non-destructive testing as well as in flow sensing.

  20. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging for use in machine vision systems has proven to be prohibitive when used within control systems that employ low-power single processors without compromising the scope of vision or the resolution of captured images. The development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, resulting in minimal data processing to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.

  1. SENSOR: a tool for the simulation of hyperspectral remote sensing systems

    NASA Astrophysics Data System (ADS)

    Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel

    The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way for the adaptation and optimisation of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.

  2. GOOSE: semantic search on internet connected sensors

    NASA Astrophysics Data System (ADS)

    Schutte, Klamer; Bomhof, Freek; Burghouts, Gertjan; van Diggelen, Jurriaan; Hiemstra, Peter; van't Hof, Jaap; Kraaij, Wessel; Pasman, Huib; Smith, Arthur; Versloot, Corne; de Wit, Joost

    2013-05-01

    More and more sensors are getting connected to the Internet. Examples are cameras on cell phones, CCTV cameras for traffic control, as well as dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data is impossible for effective mission execution. Smart access to all sensor data acts as an enabler for questions such as "Is there a person behind this building?" or "Alert me when a vehicle approaches". The GOOSE concept has the ambition to provide the capability to search semantically for any relevant information within "all" (including imaging) sensor streams in the entire Internet of sensors. This is similar to the capability provided by presently available Internet search engines, which enable the retrieval of information on "all" web pages on the Internet. In line with current Internet search engines, any indexing services shall be utilized cross-domain. The two main challenges for GOOSE are the semantic gap and scalability. The GOOSE architecture consists of five elements: (1) online extraction of primitives from each sensor stream; (2) an indexing and search mechanism for these primitives; (3) an ontology-based semantic matching module; (4) a top-down hypothesis verification mechanism; and (5) a controlling man-machine interface. This paper reports on the initial GOOSE demonstrator, which consists of the MES multimedia analysis platform and the CORTEX action recognition module. It also provides an outlook on future GOOSE development.

  3. Hybrid imaging: a quantum leap in scientific imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark V.

    2004-01-01

    ImagerLabs has advanced its patented next-generation imaging technology, called Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional QE, fill factor, broad spectral response and very low noise properties of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits of this technology and report the latest advances in its performance.

  4. Advanced Sensors Boost Optical Communication, Imaging

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Brooklyn, New York-based Amplification Technologies Inc. (ATI) employed Phase I and II SBIR funding from NASA's Jet Propulsion Laboratory to advance the company's solid-state photomultiplier technology. Under the SBIR, ATI developed a small, energy-efficient, extremely high-gain sensor capable of detecting light down to single photons in the near-infrared wavelength range. The company has commercialized this technology in the form of its NIRDAPD photomultiplier, ideal for use in free-space optical communications, lidar and ladar, night vision goggles, and other light sensing applications.

  5. Laser speckle strain and deformation sensor using linear array image cross-correlation method for specifically arranged triple-beam triple-camera configuration

    NASA Technical Reports Server (NTRS)

    Sarrafzadeh-Khoee, Adel K. (Inventor)

    2000-01-01

    The invention provides a triple-beam, triple-sensor method for a laser speckle strain/deformation measurement system. The triple-beam/triple-camera configuration, combined with sequential timing of laser beam shutters, is capable of providing indications of surface strain and structure deformations. The strain and deformation quantities, the four variables of surface strain, in-plane displacement, out-of-plane displacement, and tilt, are determined in closed-form solutions.

  6. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
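
    The range computation described in this abstract is classical triangulation: a known camera-laser baseline and the measured image offset of a projected spot fix the triangle. A hedged sketch with illustrative parameter names and values, not taken from the patent:

    ```python
    import math

    def range_from_spot(pixel_offset_px, focal_length_px, baseline_m,
                        laser_angle_rad=0.0):
        """Estimate range from the image offset of a projected laser spot.

        pixel_offset_px: spot displacement from the principal point [pixels]
        focal_length_px: camera focal length expressed in pixels
        baseline_m:      known camera-laser separation [m]
        laser_angle_rad: laser tilt relative to the optical axis [rad]
        """
        # Angle subtended by the spot at the camera.
        camera_angle = math.atan2(pixel_offset_px, focal_length_px)
        # The baseline and the two angles determine the triangle, hence range.
        return baseline_m / (math.tan(camera_angle) - math.tan(laser_angle_rad))

    # Example: ~1.17 m for a 0.1 m baseline and a 120-pixel spot offset.
    print(range_from_spot(pixel_offset_px=120, focal_length_px=1400,
                          baseline_m=0.1))
    ```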

  7. Autonomous Sensors for Large Scale Data Collection

    NASA Astrophysics Data System (ADS)

    Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.

    2017-12-01

    Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and machine learning techniques for intelligent self-monitoring and data classification. The system achieves cost savings in manufacturing, deployment, and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can easily be folded into the data collection and analysis architecture, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments. The tight size, weight, and power (SWaP) budget and the challenging thermal environment demand the development of a new generation of instruments; the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the ground to complement the CubeSat data.

  8. Utility of BRDF Models for Estimating Optimal View Angles in Classification of Remotely Sensed Images

    NASA Technical Reports Server (NTRS)

    Valdez, P. F.; Donohoe, G. W.

    1997-01-01

    Statistical classification of remotely sensed images attempts to discriminate between surface cover types on the basis of the spectral response recorded by a sensor. It is well known that surfaces reflect incident radiation as a function of wavelength, producing a spectral signature specific to the material under investigation. Multispectral and hyperspectral sensors sample the spectral response over tens and even hundreds of wavelength bands to capture the variation of spectral response with wavelength. Classification algorithms then exploit these differences in spectral response to distinguish between materials of interest. Sensors of this type, however, collect detailed spectral information from one direction (usually nadir) and consequently do not consider the directional nature of reflectance potentially detectable at different sensor view angles. Improvements in sensor technology have resulted in remote sensing platforms capable of detecting reflected energy across wavelengths (spectral signatures) and from multiple view angles (angular signatures) in the fore and aft directions. Sensors of this type include: the moderate resolution imaging spectroradiometer (MODIS), the multiangle imaging spectroradiometer (MISR), and the airborne solid-state array spectroradiometer (ASAS). A goal of this paper, then, is to explore the utility of Bidirectional Reflectance Distribution Function (BRDF) models in the selection of optimal view angles for the classification of remotely sensed images by employing a strategy of searching for the maximum difference between surface BRDFs. After a brief discussion of directional reflectance in Section 2, attention is directed to the Beard-Maxwell BRDF model and its use in predicting the bidirectional reflectance of a surface. The selection of optimal viewing angles is addressed in Section 3, followed by conclusions and future work in Section 4.
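
    The selection strategy of searching for the maximum difference between surface BRDFs can be sketched compactly. The toy Lambertian-plus-specular model below merely stands in for the Beard-Maxwell model used in the paper, and all numbers are illustrative:

    ```python
    import numpy as np

    def toy_brdf(view_zenith_deg, sun_zenith_deg, albedo, specular):
        # Diffuse term plus a Gaussian lobe near the specular direction
        # (toy placement; a physical model would use proper geometry).
        lobe = specular * np.exp(-((view_zenith_deg - sun_zenith_deg) / 10.0) ** 2)
        return albedo / np.pi + lobe

    view_angles = np.linspace(-60.0, 60.0, 121)   # fore/aft view zenith [deg]
    sun = 30.0                                    # solar zenith [deg]

    # Two hypothetical cover types to be discriminated.
    grass = toy_brdf(view_angles, sun, albedo=0.25, specular=0.05)
    soil = toy_brdf(view_angles, sun, albedo=0.30, specular=0.20)

    # Optimal view angle = the one maximizing the BRDF difference.
    best = view_angles[np.argmax(np.abs(grass - soil))]
    print(f"most discriminative view angle: {best:.1f} deg")
    ```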

  9. Satellite Ocean Color Sensor Design Concepts and Performance Requirements

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.; Meister, Gerhard; Monosmith, Bryan

    2014-01-01

    In late 1978, the National Aeronautics and Space Administration (NASA) launched the Nimbus-7 satellite with the Coastal Zone Color Scanner (CZCS) and several other sensors, all of which provided major advances in Earth remote sensing. The inspiration for the CZCS is usually attributed to an article in Science by Clarke et al., who demonstrated that large changes in open ocean spectral reflectance are correlated to chlorophyll-a concentrations. Chlorophyll-a is the primary photosynthetic pigment in green plants (marine and terrestrial) and is used in estimating primary production, i.e., the amount of carbon fixed into organic matter during photosynthesis. Thus, accurate estimates of global and regional primary production are key to studies of the earth's carbon cycle. Because the investigators used an airborne radiometer, they were able to demonstrate the increased radiance contribution of the atmosphere with altitude that would be a major issue for spaceborne measurements. Since 1978, there has been much progress in satellite ocean color remote sensing such that the technique is well established and is used for climate change science and routine operational environmental monitoring. Also, the science objectives and accompanying methodologies have expanded and evolved through a succession of global missions, e.g., the Ocean Color and Temperature Sensor (OCTS), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the Global Imager (GLI). With each advance in science objectives, new and more stringent requirements for sensor capabilities (e.g., spectral coverage) and performance (e.g., signal-to-noise ratio, SNR) are established. The CZCS had four bands for chlorophyll and aerosol corrections. The Ocean Color Imager (OCI) recommended for the NASA Pre-Aerosol, Cloud, and Ocean Ecosystems (PACE) mission includes hyperspectral coverage at 5 nanometer resolution from 350 to 800 nanometers with three additional discrete near infrared (NIR) and shortwave infrared (SWIR) ocean aerosol correction bands. Also, to avoid drift in sensor sensitivity from being interpreted as environmental change, climate change research requires rigorous monitoring of sensor stability. For SeaWiFS, monthly lunar imaging accurately tracked stability to approximately 0.1%, which allowed the data to be used for climate studies [2]. It is now acknowledged by the international community that future missions and sensor designs need to accommodate lunar calibrations. An overview of ocean color remote sensing, a review of the progress made in the field, and the variety of research applications derived from global satellite ocean color data are provided. The purpose of this chapter is to discuss the design options for ocean color satellite radiometers, performance and testing criteria, and sensor components (optics, detectors, electronics, etc.) that must be integrated into an instrument concept. These ultimately dictate the quality and quantity of data that can be delivered as a trade against mission cost. Historically, science and sensor technology have advanced in a "leap-frog" manner in that sensor design requirements for a mission are defined many years before a sensor is launched and by the end of the mission, perhaps 15-20 years later, science applications and requirements are well beyond the capabilities of the sensor.
Section 3 provides a summary of historical mission science objectives and sensor requirements. This progression is expected to continue in the future as long as sensor costs can be constrained to affordable levels and still allow the incorporation of new technologies without incurring unacceptable risk to mission success. The IOCCG Report Number 13 discusses future ocean biology mission Level-1 requirements in depth.

  10. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin.

    PubMed

    Hu, Chuli; Li, Jie; Lin, Xin; Chen, Nengcheng; Yang, Chao

    2018-05-21

    Observation schedules depend upon the accurate understanding of a single sensor’s observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that revolves around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology-based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors.

  11. A multi-sensor land mine detection system: hardware and architectural outline of the Australian RRAMNS CTD system

    NASA Astrophysics Data System (ADS)

    Abeynayake, Canicious; Chant, Ian; Kempinger, Siegfried; Rye, Alan

    2005-06-01

    The Rapid Route Area and Mine Neutralisation System (RRAMNS) Capability Technology Demonstrator (CTD) is a countermine detection project undertaken by DSTO and supported by the Australian Defence Force (ADF). The limited time and budget for this CTD resulted in some difficult strategic decisions with regard to hardware selection and system architecture. Although the delivered system has certain limitations arising from its experimental status, many lessons have been learned which illustrate a pragmatic path for future development. RRAMNS uses a sensor suite similar to those of other systems, in that three complementary sensors are included. These are Ground Probing Radar, a Metal Detector Array, and multi-band electro-optic sensors. However, RRAMNS uses a unique imaging system and a network-based real-time control and sensor fusion architecture. The relatively simple integration of each of these components could be the basis for a robust and cost-effective operational system. The RRAMNS imaging system consists of three cameras which cover the visible spectrum and the mid-wave and long-wave infrared regions. This subsystem can be used separately as a scouting sensor. This paper describes the system at its mid-2004 status, when full integration of all detection components was achieved.

  12. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry

    NASA Astrophysics Data System (ADS)

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-01

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.
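
    Conceptually, building a color image from an energy-dispersive photon counter amounts to binning each photon's estimated wavelength into a per-pixel channel histogram. A minimal sketch, with invented band edges and a hypothetical photon list rather than actual TES readout data:

    ```python
    import numpy as np

    H, W = 64, 64
    rgb_nir = np.zeros((H, W, 4), dtype=np.int32)  # R, G, B, near-IR counts

    # Hypothetical photon list: (row, col, wavelength_nm) from the detector.
    photons = [(10, 12, 630.0), (10, 12, 540.0), (30, 40, 1550.0)]

    # Band edges in nm: blue, green, red, then near-IR up to 2800 nm.
    edges = [400.0, 500.0, 600.0, 700.0, 2800.0]

    for row, col, wl in photons:
        band = np.searchsorted(edges, wl) - 1
        if 0 <= band < 4:
            # Map blue/green/red/NIR bins to channel indices 2/1/0/3.
            channel = {0: 2, 1: 1, 2: 0, 3: 3}[band]
            rgb_nir[row, col, channel] += 1
    ```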

  13. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry.

    PubMed

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-04

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.

  14. Ultrasonic imaging of material flaws exploiting multipath information

    NASA Astrophysics Data System (ADS)

    Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.

    2011-05-01

    In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
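
    The virtual-aperture idea can be sketched as delay-and-sum beamforming over the physical elements plus their mirror images about a known reflecting boundary. The geometry, sound speed, and sampling rate below are assumptions for illustration, not the authors' experimental values:

    ```python
    import numpy as np

    c = 6000.0   # assumed longitudinal sound speed in the specimen [m/s]
    fs = 50e6    # assumed sampling rate [Hz]

    # Physical array elements on the top surface (y = 0) of the specimen.
    physical = np.array([[x, 0.0] for x in np.linspace(-0.01, 0.01, 8)])
    # Virtual elements: physical elements mirrored about a back wall at
    # y = 0.04 m, representing the extra aspect angles from the multipath echo.
    virtual = physical.copy()
    virtual[:, 1] = 0.08 - virtual[:, 1]
    elements = np.vstack([physical, virtual])

    def das_pixel(rf, point):
        """Delay-and-sum focus at `point` (np.array [x, y]) from per-element
        pulse-echo traces rf with shape (len(elements), n_samples)."""
        value = 0.0
        for elem, trace in zip(elements, rf):
            t = 2.0 * np.linalg.norm(point - elem) / c   # round-trip time
            idx = int(round(t * fs))
            if idx < trace.size:
                value += trace[idx]
        return value

    # Usage: image[i, j] = das_pixel(rf, np.array([x_j, y_i])) over a grid.
    ```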

  15. Retinal fundus imaging with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos

    2018-02-01

    Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses which projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
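
    Digital refocusing from a single light-field snapshot is commonly implemented as shift-and-add over the sub-aperture images; a minimal sketch under that assumption (the array layout and the meaning of alpha are conventions chosen here, not the device's actual pipeline):

    ```python
    import numpy as np

    def refocus(lightfield, alpha):
        """Shift-and-add refocus of a light field L[u, v, y, x] of
        sub-aperture images; alpha selects the refocus depth."""
        U, V, H, W = lightfield.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each view in proportion to its lenslet offset.
                dy = int(round((u - U // 2) * alpha))
                dx = int(round((v - V // 2) * alpha))
                out += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
        return out / (U * V)

    # Usage: refocus(lf, 0.0) focuses at the main-lens plane; sweeping alpha
    # moves focus between the retina and a tool held in front of it.
    ```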

  16. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
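
    The central idea, fitting a low-degree polynomial per pixel that maps its raw response onto a reference response so that correction becomes pure arithmetic, can be sketched as follows; the response model, degree, and data are invented for the illustration:

    ```python
    import numpy as np

    n_pixels, degree = 1000, 2
    stimuli = np.linspace(0.0, 1.0, 16)        # uniform calibration stimuli
    reference = np.log1p(50 * stimuli)         # ideal monotonic (log) response

    # Simulated raw responses with per-pixel gain/offset mismatch (FPN).
    gains = 1.0 + 0.05 * np.random.randn(n_pixels, 1)
    offsets = 0.1 * np.random.randn(n_pixels, 1)
    raw = gains * reference + offsets

    # Per-pixel calibration: polynomial mapping raw value -> reference value.
    coeffs = np.array([np.polyfit(raw[p], reference, degree)
                       for p in range(n_pixels)])

    def correct(pixel_index, raw_value):
        """FPN correction is now plain arithmetic (polynomial evaluation)."""
        return np.polyval(coeffs[pixel_index], raw_value)
    ```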

  17. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illuminations/environments, significant performance degradation is expected while matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random fields model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on the naive Bayes nearest neighbor classification, uses a real-valued feature representation which is capable of learning domain knowledge. Our approach to estimate corresponding visible iris patterns from the synthesis of iris patches in the near-infrared iris images achieves outperforming results for cross-spectral iris recognition. In this paper, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondences is proposed and evaluated. This paper presents experimental results from three publicly available databases: the PolyU cross-spectral iris image database, IIITD CLI, and the UND database, and achieves outperforming results for cross-sensor and cross-spectral iris matching.

  18. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering

    PubMed Central

    Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro

    2017-01-01

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for MHz-modulation-rate parallel detection, capable of detecting multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide-semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering, which is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The generated small SRS signal is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, and an in-pixel pair of a low-pass filter, a sample-and-hold circuit, and a switched-capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
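
    What the lock-in pixel computes can be modeled conceptually as the difference between Stokes-on and Stokes-off integrations at the modulation frequency, which cancels the large constant offset and leaves the small SRS signal. A hedged numerical sketch with made-up amplitudes; this models the measurement principle, not the pixel circuit:

    ```python
    import numpy as np

    fs = 200e6                      # sample rate of this simulation [Hz]
    f_mod = 20e6                    # Stokes beam modulation frequency [Hz]
    t = np.arange(0, 10e-6, 1 / fs)

    stokes_on = (np.sin(2 * np.pi * f_mod * t) > 0)   # modulation gate
    offset = 1.0                                      # direct laser light
    srs = 1e-3                                        # tiny SRS depletion
    signal = offset - srs * stokes_on + 1e-4 * np.random.randn(t.size)

    # Lock-in: difference between the "on" and "off" half-cycle means.
    demod = signal[stokes_on].mean() - signal[~stokes_on].mean()
    print(f"recovered SRS amplitude: {-demod:.2e}")   # ~1e-3
    ```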

  19. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.

    PubMed

    Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru

    2017-11-09

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for MHz-modulation-rate parallel detection, capable of detecting multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide-semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering, which is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The generated small SRS signal is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, and an in-pixel pair of a low-pass filter, a sample-and-hold circuit, and a switched-capacitor integrator using a fully differential amplifier. A prototype chip is fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.

  20. Adaptive guidance and control for future remote sensing systems

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Myers, J. E.

    1980-01-01

    A unique approach to onboard processing was developed that is capable of acquiring high quality image data for users in near real time. The approach is divided into two steps: the development of an onboard cloud detection system; and the development of a landmark tracker. The results of these two developments are outlined and the requirements of an operational guidance and control system capable of providing continuous estimation of the sensor boresight position are summarized.

  1. Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.

    2012-07-01

    The plenoptic camera has been proposed as an alternative wavefront sensor adequate for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, which is the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, like depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to the case of a telescope by using a lenslet array of the same f-number placed at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generates a generalization of the Shack-Hartmann, curvature, and pyramid wavefront sensors, in the sense that all of those can be considered particular cases of the plenoptic wavefront sensor, because the information needed as the starting point for those sensors can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates, and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented in the paper, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.

  2. An Observation Capability Semantic-Associated Approach to the Selection of Remote Sensing Satellite Sensors: A Case Study of Flood Observations in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Li, Jie; Lin, Xin

    2018-01-01

    Observation schedules depend upon the accurate understanding of a single sensor’s observation capability and the interrelated observation capability information on multiple sensors. The general ontologies for sensors and observations are abundant. However, few observation capability ontologies for satellite sensors are available, and no study has described the dynamic associations among the observation capabilities of multiple sensors used for integrated observational planning. This limitation results in a failure to realize effective sensor selection. This paper develops a sensor observation capability association (SOCA) ontology model that revolves around the task-sensor-observation capability (TSOC) ontology pattern. The pattern is developed considering the stimulus-sensor-observation (SSO) ontology design pattern, which focuses on facilitating sensor selection for one observation task. The core aim of the SOCA ontology model is to achieve an observation capability semantic association. A prototype system called SemOCAssociation was developed, and an experiment was conducted for flood observations in the Jinsha River basin in China. The results of this experiment verified that the SOCA ontology-based association method can help sensor planners intuitively and accurately make evidence-based sensor selection decisions for a given flood observation task, which facilitates efficient and effective observational planning for flood satellite sensors. PMID:29883425

  3. A new omni-directional multi-camera system for high resolution surveillance

    NASA Astrophysics Data System (ADS)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired from the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The considerable capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very-high resolution depth map estimation, and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  4. Validation of the Five-Phase Method for Simulating Complex Fenestration Systems with Radiance against Field Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.

    2016-08-29

    The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.

  5. Fuzzy Neural Classifiers for Multi-Wavelength Interdigital Sensors

    NASA Astrophysics Data System (ADS)

    Xenides, D.; Vlachos, D. S.; Simos, T. E.

    2007-12-01

    The use of multi-wavelength interdigital sensors for non-destructive testing is based on the capability of the measuring system to classify the measured impedance according to some physical properties of the material under test. By varying the measuring frequency and the wavelength of the sensor (and thus the penetration depth of the electric field inside the material under test), we can produce images that correspond to various configurations of dielectric materials under different geometries. The implementation of a fuzzy neural network which takes these images as input, for both quantitative and qualitative sensing, is demonstrated. The architecture of the system is presented, with some references to the general theory of fuzzy sets and fuzzy calculus. Experimental results are presented in the case of a set of 8 well-characterized dielectric layers. Finally, the effect of network parameters on the functionality of the system is discussed, especially in the case of the functions evaluating the fuzzy AND and OR operations.
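
    For concreteness, the classical min/max realizations of the fuzzy AND and OR operations (one common choice of t-norm and s-norm; the paper evaluates how such choices affect the system) look like this:

    ```python
    import numpy as np

    def fuzzy_and(memberships):
        return np.min(memberships, axis=-1)   # t-norm: minimum

    def fuzzy_or(memberships):
        return np.max(memberships, axis=-1)   # s-norm: maximum

    # Membership degrees of one measured impedance image in three classes.
    mu = np.array([0.7, 0.4, 0.9])
    print(fuzzy_and(mu), fuzzy_or(mu))        # 0.4 0.9
    ```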

  6. Analysis of the boreal forest-tundra ecotone: A test of AVIRIS capabilities in the Eastern Canadian subarctic

    NASA Technical Reports Server (NTRS)

    Goward, Samuel N.; Petzold, Donald E.

    1989-01-01

    A comparison was conducted between ground reflectance spectra collected in Schefferville, Canada and imaging spectrometer observations acquired by the AVIRIS sensor in a flight of the ER-2 Aircraft over the same region. The high spectral contrasts present in the Canadian Subarctic appeared to provide an effective test of the operational readiness of the AVIRIS sensor. Previous studies show that in this location various land cover materials possess a wide variety of visible/near infrared reflectance properties. Thus, this landscape served as an excellent test for the sensing variabilities of the newly developed AVIRIS sensor. An underlying hypothesis was that the unique visible/near infrared spectral reflectance patterns of Subarctic lichens could be detected from high altitudes by this advanced imaging spectrometer. The relation between lichen occurrence and boreal forest-tundra ecotone dynamics was investigated.

  7. Corrections to the MODIS Aqua Calibration Derived From MODIS Aqua Ocean Color Products

    NASA Technical Reports Server (NTRS)

    Meister, Gerhard; Franz, Bryan Alden

    2013-01-01

    Ocean color products, such as chlorophyll-a concentration, can be derived from the top-of-atmosphere radiances measured by imaging sensors on earth-orbiting satellites. There are currently three National Aeronautics and Space Administration sensors in orbit capable of providing ocean color products. One of these sensors is the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, whose ocean color products are currently the most widely used of the three. A recent improvement to the MODIS calibration methodology has used land targets to improve the calibration accuracy. This study evaluates the new calibration methodology and describes further calibration improvements that build upon it by including ocean measurements in the form of global, temporally averaged water-leaving reflectance measurements. The calibration improvements presented here mainly modify the calibration at the scan edges, taking advantage of the good performance of the land-target trending in the center of the scan.

  8. Toward autonomous avian-inspired grasping for micro aerial vehicles.

    PubMed

    Thomas, Justin; Loianno, Giuseppe; Polin, Joseph; Sreenath, Koushil; Kumar, Vijay

    2014-06-01

    Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited, and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, which are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors, relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.

  9. Novel EO/IR sensor technologies

    NASA Astrophysics Data System (ADS)

    Lewis, Keith

    2011-10-01

    The requirements for advanced EO/IR sensor technologies are discussed in the context of evolving military operations, with significant emphasis on the development of new sensing technologies to meet the challenges posed by asymmetric threats. The Electro-Magnetic Remote Sensing Defence Technology Centre (EMRS DTC) was established in 2003 to provide a centre of excellence in sensor research and development, supporting new capabilities in key military areas such as precision attack, battlespace manoeuvre, and information superiority. In the area of advanced electro-optic technology, the DTC has supported work on discriminative imaging, advanced detectors, laser components/technologies, and novel optical techniques. This paper provides a summary of some of the EO/IR technologies explored by the DTC.

  10. FPGA-based real time processing of the Plenoptic Wavefront Sensor

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ramos, L. F.; Marín, Y.; Díaz, J. J.; Piqueras, J.; García-Jiménez, J.; Rodríguez-Ramos, J. M.

    The plenoptic wavefront sensor combines measurements at the pupil and image planes in order to obtain wavefront information from different points of view simultaneously, and is capable of sampling the volume above the telescope to extract tomographic information about the atmospheric turbulence. The advantages of this sensor are presented elsewhere at this conference (José M. Rodríguez-Ramos et al). This paper concentrates on the processing required for pupil-plane phase recovery, and its computation in real time using FPGAs (Field Programmable Gate Arrays). This technology eases the implementation of massively parallel processing and allows tailoring the system to the requirements, maintaining flexibility, speed, and cost figures.

  11. Reconfigurable Mobile System - Ground, sea and air applications

    NASA Astrophysics Data System (ADS)

    Lamonica, Gary L.; Sturges, James W.

    1990-11-01

    The Reconfigurable Mobile System (RMS) is a highly mobile data-processing unit for military users requiring real-time access to data gathered by airborne (and other) reconnaissance platforms. RMS combines high-performance computation and image processing workstations with resources for command/control/communications in a single, lightweight shelter. RMS is composed of off-the-shelf components and is easily reconfigurable to land-vehicle or shipboard versions. Mission planning, which involves an airborne sensor platform's sensor coverage, considers aircraft/sensor capabilities in conjunction with weather, terrain, and threat scenarios. RMS's man-machine interface concept facilitates user familiarization and features icon-based function selection and windowing.

  12. High density Schottky barrier IRCCD sensors for SWIR applications at intermediate temperature

    NASA Technical Reports Server (NTRS)

    Elabd, H.; Villani, T. S.; Tower, J. R.

    1982-01-01

    Monolithic 32 x 64 and 64 x 128 palladium silicide (Pd2Si) interline transfer infrared charge coupled devices (IRCCDs), sensitive in the 1 to 3.5 micron spectral band, were developed. This silicon imager exhibits a low response nonuniformity of typically 0.2 to 1.6% rms, and was operated in the temperature range between 40 and 140 K. Spectral response measurements of test Pd2Si p-type Si devices yield quantum efficiencies of 7.9% at 1.25 microns, 5.6% at 1.65 microns, and 2.2% at 2.22 microns. Improvement in quantum efficiency is expected by optimizing the different structural parameters of the Pd2Si detectors. The spectral response of the Pd2Si detectors fits a modified Fowler emission model. The measured photo-electric barrier height for the Pd2Si detectors is 0.34 eV and the measured quantum efficiency coefficient, C1, is 19%/eV. The dark current level of Pd2Si Schottky barrier focal plane arrays (FPAs) is sufficiently low to enable operation at intermediate temperatures at TV frame rates. The typical dark current level measured at 120 K on the FPA is 2 nA/sq cm. The operating temperature of the Pd2Si FPA is compatible with passive cooler performance. In addition, high density Pd2Si Schottky barrier FPAs are manufactured with high yield and therefore represent an economical approach to short wavelength IR imaging. A Pd2Si Schottky barrier image sensor for push-broom multispectral imaging in the 1.25, 1.65, and 2.22 micron bands is being studied. The sensor will have two line arrays (dual band capability) of 512 detectors each, with 30 micron center-to-center detector spacing. The device will be suitable for chip-to-chip abutment, thus providing the capability to produce large, multiple-chip focal planes with contiguous, in-line sensors.

  13. Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor

    PubMed Central

    Simon Chane, Camille; Ieng, Sio-Hoi; Posch, Christoph; Benosman, Ryad B.

    2016-01-01

    The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of event-based imagers, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired time encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time. PMID:27642275
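
    One way to realize a global event-based operator is to remap each incoming gray-level event through a running histogram, so the display mapping adapts continuously rather than per frame. The sketch below is an illustration of that idea with invented constants; it is not the paper's operator:

    ```python
    import numpy as np

    bins = 256
    hist = np.ones(bins)                 # running gray-level histogram

    def tone_map(gray_level, max_level=2 ** 14, decay=0.999):
        """Map one event's high-dynamic-range gray level to 8-bit display."""
        global hist
        b = min(int(gray_level * bins / max_level), bins - 1)
        hist *= decay                    # slowly forget old statistics
        hist[b] += 1.0
        cdf = np.cumsum(hist)            # histogram-equalization mapping
        return int(255 * cdf[b] / cdf[-1])

    # Events arrive asynchronously as (x, y, gray_level, timestamp) tuples;
    # each one is remapped on arrival rather than per frame.
    ```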

  14. Airborne measurements in the infrared using FTIR-based imaging hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Puckrin, E.; Turcotte, C. S.; Lahaie, P.; Dubé, D.; Lagueux, P.; Farley, V.; Marcotte, F.; Chamberland, M.

    2009-09-01

    Hyperspectral ground mapping is being used to an ever-increasing extent for numerous applications in the military, geological, and environmental fields. The different regions of the electromagnetic spectrum help produce information of differing nature. The visible, near-infrared and short-wave infrared radiation (400 nm to 2.5 μm) has been mostly used to analyze reflected solar light, while the mid-wave (3 to 5 μm) and long-wave (8 to 12 μm or thermal) infrared senses the self-emission of molecules directly, enabling the acquisition of data during nighttime. Push-broom dispersive sensors have typically been used for airborne hyperspectral mapping. However, extending the spectral range towards the mid-wave and long-wave infrared brings performance limitations due to the self-emission of the sensor itself. The Fourier-transform spectrometer technology has been extensively used in the infrared spectral range due to its high transmittance as well as throughput and multiplex advantages, thereby reducing the sensor self-emission problem. Telops has developed the Hyper-Cam, a rugged and compact infrared hyperspectral imager. The Hyper-Cam is based on the Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides passive signature measurement capability, with up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The Hyper-Cam has been used on the ground in several field campaigns, including the demonstration of standoff chemical agent detection. More recently, the Hyper-Cam has been integrated into an airplane to provide airborne measurement capabilities. A special pointing module was designed to compensate for airplane attitude and forward motion. To our knowledge, the Hyper-Cam is the first commercial airborne hyperspectral imaging sensor based on Fourier-transform infrared technology. The first airborne measurements and some preliminary performance criteria for the Hyper-Cam are presented in this paper.

  15. Airborne measurements in the infrared using FTIR-based imaging hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Puckrin, E.; Turcotte, C. S.; Lahaie, P.; Dubé, D.; Farley, V.; Lagueux, P.; Marcotte, F.; Chamberland, M.

    2009-05-01

    Hyperspectral ground mapping is being used to an ever-increasing extent for numerous applications in the military, geological, and environmental fields. The different regions of the electromagnetic spectrum help produce information of differing nature. The visible, near-infrared and short-wave infrared radiation (400 nm to 2.5 μm) has been mostly used to analyze reflected solar light, while the mid-wave (3 to 5 μm) and long-wave (8 to 12 μm or thermal) infrared senses the self-emission of molecules directly, enabling the acquisition of data during nighttime. Push-broom dispersive sensors have typically been used for airborne hyperspectral mapping. However, extending the spectral range towards the mid-wave and long-wave infrared brings performance limitations due to the self-emission of the sensor itself. The Fourier-transform spectrometer technology has been extensively used in the infrared spectral range due to its high transmittance as well as throughput and multiplex advantages, thereby reducing the sensor self-emission problem. Telops has developed the Hyper-Cam, a rugged and compact infrared hyperspectral imager. The Hyper-Cam is based on the Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides passive signature measurement capability, with up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The Hyper-Cam has been used on the ground in several field campaigns, including the demonstration of standoff chemical agent detection. More recently, the Hyper-Cam has been integrated into an airplane to provide airborne measurement capabilities. A special pointing module was designed to compensate for airplane attitude and forward motion. To our knowledge, the Hyper-Cam is the first commercial airborne hyperspectral imaging sensor based on Fourier-transform infrared technology. The first airborne measurements and some preliminary performance criteria for the Hyper-Cam are presented in this paper.

  16. Tera-Ops Processing for ATR

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna

    2000-01-01

    A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10^12 operations/second (tera-ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. 3DANN-R is a sugar-cube-sized, low-power image convolution engine whose core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R, such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges for providing deployable systems for BMDO surveillance and interceptor programs.

  17. Foveated optics

    NASA Astrophysics Data System (ADS)

    Bryant, Kyle R.

    2016-05-01

    Foveated imaging can deliver two different resolutions on a single focal plane, which might inexpensively allow more capability for military systems. The following design study results provide starting examples, lessons learned, and helpful setup equations and pointers to aid the lens designer in any foveated lens design effort. Our goal is to put a robust sensor in a small package with no moving parts, yet still be able to perform some of the functions of a sensor in a moving gimbal. All of the elegant solutions are out (for various reasons). This study is an attempt to see if lens designs can solve this problem and realize some gains in performance versus cost for airborne sensors. We determined a series of design concepts to simultaneously deliver wide field of view and high foveal resolution without scanning or gimbals. Separate sensors for each field of view are easy and relatively inexpensive, but lead to bulky detectors and electronics. Folding and beam-combining of separate optical channels reduces sensor footprint, but induces image inversions and reduced transmission. Entirely common optics provide good resolution, but cannot provide a significant magnification increase in the foveal region. Offsetting the foveal region from the wide-field center may not be physically realizable, but may be required for some applications. The design study revealed good general guidance for foveated optics designs with a cold stop. Key lessons learned involve managing distortion, telecentric imagers, matching image inversions and numerical apertures between channels, reimaging lenses, and creating clean resolution zone splits near internal focal planes.

  18. Cell phones as imaging sensors

    NASA Astrophysics Data System (ADS)

    Bhatti, Nina; Baker, Harlyn; Marguier, Joanna; Berclaz, Jérôme; Süsstrunk, Sabine

    2010-04-01

    Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: Suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment - selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions - image-based information services exploiting the ubiquity of camera phones.

  19. Multi-dimensional position sensor using range detectors

    DOEpatents

    Vann, Charles S.

    2000-01-01

    A small, non-contact optical sensor uses ranges and images to detect its relative position to an object in up to six degrees of freedom. The sensor has three light emitting range detectors which illuminate a target and can be used to determine distance and two tilt angles. A camera located between the three range detectors senses the three remaining degrees of freedom, two translations and one rotation. Various range detectors, with different light sources, e.g. lasers and LEDs, different collection options, and different detection schemes, e.g. diminishing return and time of flight can be used. This sensor increases the capability and flexibility of computer controlled machines, e.g. it can instruct a robot how to adjust automatically to different positions and orientations of a part.
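
    The distance-and-tilt computation from the three range detectors can be sketched as a plane fit: the three illuminated points define the target plane, whose normal yields the two tilt angles and whose centroid yields the standoff distance. The emitter layout and beam geometry below are assumptions for illustration, not the patent's dimensions:

    ```python
    import numpy as np

    # Assumed equilateral layout of the three range detectors [m].
    emitters = np.array([[0.05, 0.0, 0.0],
                         [-0.025, 0.0433, 0.0],
                         [-0.025, -0.0433, 0.0]])
    ranges = np.array([0.52, 0.50, 0.49])          # measured ranges [m]

    # Illuminated points, assuming beams parallel to the sensor z-axis.
    points = emitters + np.array([[0.0, 0.0, r] for r in ranges])

    # Target-plane normal from two in-plane vectors.
    n = np.cross(points[1] - points[0], points[2] - points[0])
    n /= np.linalg.norm(n)

    distance = points.mean(axis=0)[2]              # standoff along z
    tilt_x = np.degrees(np.arctan2(n[1], n[2]))    # rotation about x
    tilt_y = np.degrees(np.arctan2(n[0], n[2]))    # rotation about y
    print(distance, tilt_x, tilt_y)
    ```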

  20. Real time monitoring of progressive damage during loading of a simplified total hip stem construct using embedded acoustic emission sensors.

    PubMed

    Mavrogordato, Mark; Taylor, Mark; Taylor, Andrew; Browne, Martin

    2011-05-01

    Acoustic emission (AE) is a non-destructive technique that is capable of passively monitoring failure of a construct with excellent temporal resolution. Previous investigations using AE to monitor the integrity of a total hip replacement (THR) have used surface mounted sensors; however, the AE signal attenuates as it travels through materials and across interfaces. This study proposes that directly embedded sensors within the femoral stem of the implant will reduce signal attenuation effects and eliminate potential complications and variability associated with fixing the sensor to the sample. Data was collected during in vitro testing of implanted constructs, and information from both embedded and externally mounted AE sensors was compared and corroborated by micro-Computed Tomography (micro-CT) images taken before and after testing. The results of this study indicate that the embedded sensors gave a closer corroboration to observed damage using micro-CT and were less affected by unwanted noise sources. This has significant implications for the use of AE in assessing the state of THR constructs in vitro and it is hypothesised that directly embedded AE sensors may provide the first steps towards an in vivo, cost effective, user friendly, non-destructive system capable of continuously monitoring the condition of the implanted construct.

  1. Novel approach for low-cost muzzle flash detection system

    NASA Astrophysics Data System (ADS)

    Voskoboinik, Asher

    2008-04-01

    A low-cost muzzle flash detection system based on CMOS sensor technology is proposed. This low-cost technology makes it possible to detect various transient events with characteristic times between dozens of microseconds and dozens of milliseconds, while sophisticated algorithms successfully separate them from false alarms by utilizing differences in geometrical characteristics and/or temporal signatures. The proposed system consists of off-the-shelf smart CMOS cameras with built-in signal and image processing capabilities for pre-processing, together with allocated memory for storing a buffer of images for further post-processing. Such a sensor does not require sending giant amounts of raw data to a real-time processing unit but performs all calculations in situ, where the processing results are the output of the sensor. This patented CMOS muzzle flash detection concept exhibits high-performance detection capability with very low false-alarm rates. It was found that most false alarms due to sun glints are from sources at distances of 500-700 meters from the sensor and can be distinguished from muzzle flash signals by time-examination techniques. This will enable the elimination of up to 80% of false alarms due to sun specular reflections on the battlefield. An additional effort to distinguish sun glints from suspected muzzle flash signals is made by optimization of the spectral band in the near-IR region. The proposed system can be used for muzzle flash detection of small arms, missiles, and rockets, and for other military applications.
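
    A minimal illustration of the temporal-signature test (the duration window below is invented; the patented system combines several geometric and temporal cues):

    ```python
    def is_muzzle_flash_candidate(event_duration_s,
                                  min_s=50e-6, max_s=20e-3):
        """Keep transients lasting tens of microseconds to milliseconds;
        reject events outside that window, such as persistent glints."""
        return min_s <= event_duration_s <= max_s

    print(is_muzzle_flash_candidate(2e-3))    # True: flash-like transient
    print(is_muzzle_flash_candidate(0.5))     # False: long-lived reflection
    ```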

  2. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing rapidly, and the digital camera market has become a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred, and is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and the layers created by a two-scale non-linear decomposition of the image are then modified. Once our approach has been performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of the proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
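
    The core multi-frame idea can be illustrated with a minimal sketch: averaging N short, noisy exposures of a static scene improves SNR by roughly sqrt(N) before any ISP non-linearity is applied. The Python below omits the paper's ghost suppression and two-scale decomposition; names and values are illustrative.

```python
import numpy as np

def fuse_short_exposures(bayer_frames):
    """bayer_frames: list of raw Bayer-pattern arrays with identical CFA
    phase. Returns a fused raw frame (simple mean; the paper's method
    additionally rejects motion-affected pixels and filters noise)."""
    stack = np.stack([f.astype(np.float32) for f in bayer_frames])
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
truth = rng.uniform(100, 200, size=(8, 8)).astype(np.float32)
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(8)]
fused = fuse_short_exposures(frames)
# noise drops by about sqrt(8) relative to a single frame
print("noise before:", np.std(frames[0] - truth), "after:", np.std(fused - truth))
```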

  3. Radiation imaging with optically read out GEM-based detectors

    NASA Astrophysics Data System (ADS)

    Brunbauer, F. M.; Lupberger, M.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.

    2018-02-01

    Modern imaging sensors allow for high granularity optical readout of radiation detectors such as MicroPattern Gaseous Detectors (MPGDs). Taking advantage of the high signal amplification factors achievable by MPGD technologies such as Gaseous Electron Multipliers (GEMs), highly sensitive detectors can be realised and employing gas mixtures with strong scintillation yield in the visible wavelength regime, optical readout of such detectors can provide high-resolution event representations. Applications from X-ray imaging to fluoroscopy and tomography profit from the good spatial resolution of optical readout and the possibility to obtain images without the need for extensive reconstruction. Sensitivity to low-energy X-rays and energy resolution permit energy resolved imaging and material distinction in X-ray fluorescence measurements. Additionally, the low material budget of gaseous detectors and the possibility to couple scintillation light to imaging sensors via fibres or mirrors makes optically read out GEMs an ideal candidate for beam monitoring detectors in high energy physics as well as radiotherapy. We present applications and achievements of optically read out GEM-based detectors including high spatial resolution imaging and X-ray fluorescence measurements as an alternative readout approach for MPGDs. A detector concept for low intensity applications such as X-ray crystallography, which maximises detection efficiency with a thick conversion region but mitigates parallax-induced broadening is presented and beam monitoring capabilities of optical readout are explored. Augmenting high resolution 2D projections of particle tracks obtained with optical readout with timing information from fast photon detectors or transparent anodes for charge readout, 3D reconstruction of particle trajectories can be performed and permits the realisation of optically read out time projection chambers. Combining readily available high performance imaging sensors with compatible scintillating gases and the strong signal amplification factors achieved by MPGDs makes optical readout an attractive alternative to the common concept of electronic readout of radiation detectors. Outstanding signal-to-noise ratios and robustness against electronic noise allow unprecedented imaging capabilities for various applications in fields ranging from high energy physics to medical instrumentation.

  4. The NASA Airborne Earth Science Microwave Imaging Radiometer (AESMIR): A New Sensor for Earth Remote Sensing

    NASA Technical Reports Server (NTRS)

    Kim, Edward

    2003-01-01

    The Airborne Earth Science Microwave Imaging Radiometer (AESMIR) is a versatile new airborne imaging radiometer recently developed by NASA. The AESMIR design is unique in that it performs dual-polarized imaging at all standard passive microwave frequency bands (6-89 GHz) using only one sensor head/scanner package, providing an efficient solution for Earth remote sensing applications (snow, soil moisture/land parameters, precipitation, ocean winds, sea surface temperature, water vapor, sea ice, etc.). The microwave radiometers themselves incorporate state-of-the-art receivers, with particular attention given to instrument calibration for the best possible accuracy and sensitivity. The single-package design of AESMIR makes it compatible with high-altitude aircraft platforms such as the NASA ER-2s. The arbitrary 2-axis gimbal can perform conical and cross-track scanning, as well as fixed-beam staring. This compatibility with high-altitude platforms, coupled with the flexible scanning configuration, opens up previously unavailable science opportunities for convection/precipitation/cloud science and for co-flying with complementary instruments, as well as providing wider swath coverage for all science applications. By designing AESMIR to be compatible with these high-altitude platforms, we are also compatible with the NASA P-3, the NASA DC-8, C-130s, and ground-based deployments. Thus AESMIR can provide low-, mid-, and high-altitude microwave imaging. Parallel filter banks allow AESMIR to simultaneously simulate the exact passbands of multiple satellite radiometers: SSM/I, TMI, AMSR, Windsat, SSMI/S, and the upcoming GPM/GMI and NPOESS/CMIS instruments, a unique capability among aircraft radiometers. An L-band option is also under development, again using the same scanner. With this option, simultaneous imaging from 1.4 to 89 GHz will be feasible. In the near future, all receivers except the sounding channels will be configured for 4-Stokes polarimetric operation using high-speed digital correlators. The capabilities and unique design features of this new sensor will be described, and example imagery will be presented.

  5. Electro-optical imaging systems integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, R.

    1987-01-01

    Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third-generation TLBR was designed and two units delivered to rapidly produce high quality wet-process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides complete phase synchronism isolation between the scanner and the digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for reconnaissance/tactical applications.

  6. Hyperspectral Systems Increase Imaging Capabilities

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In 1983, NASA started developing hyperspectral systems to image in the ultraviolet and infrared wavelengths. In 2001, the first on-orbit hyperspectral imager, Hyperion, was launched aboard the Earth Observing-1 spacecraft. Based on the hyperspectral imaging sensors used in Earth observation satellites, Stennis Space Center engineers and Institute for Technology Development researchers collaborated on a new design that was smaller and used an improved scanner. Featured in Spinoff 2007, the technology is now exclusively licensed by Themis Vision Systems LLC, of Richmond, Virginia, and is widely used in medical and life sciences, defense and security, forensics, and microscopy.

  7. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, a challenge exists for training and simulation images to be both realistic and consistent with each other to be effective and avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real image trainers and real-time simulations. The author presents innovative methods for collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army, and USMC Recognition of Combat Vehicles (ROC-V) real image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  8. IRLooK: an advanced mobile infrared signature measurement, data reduction, and analysis system

    NASA Astrophysics Data System (ADS)

    Cukur, Tamer; Altug, Yelda; Uzunoglu, Cihan; Kilic, Kayhan; Emir, Erdem

    2007-04-01

    Infrared signature measurement capability has a key role in the development of electronic warfare (EW) self-protection systems. In this article, the IRLooK system and its capabilities are introduced. IRLooK is a truly innovative mobile infrared signature measurement system whose design, manufacturing and integration were all accomplished with an engineering philosophy peculiar to ASELSAN. IRLooK measures the infrared signatures of military and civil platforms such as fixed/rotary-wing aircraft, tracked/wheeled vehicles and navy vessels. IRLooK provides data acquisition, pre-processing, post-processing, analysis, storage and archiving over the shortwave, mid-wave and long-wave infrared spectrum by means of its high-resolution radiometric sensors and highly sophisticated software analysis tools. The sensor suite of the IRLooK system includes imaging and non-imaging radiometers and a spectroradiometer. Single or simultaneous multiple in-band measurements as well as high radiant intensity measurements can be performed. The system provides detailed information on the spectral, spatial and temporal infrared signature characteristics of the targets. It also determines IR decoy characteristics. The system is equipped with a high-quality, field-proven two-axis tracking mount to facilitate target tracking. Manual or automatic tracking is achieved by using a passive imaging tracker. The system also includes a high-quality weather station and field-calibration equipment including cavity and extended-area blackbodies. The units composing the system are mounted on flat-bed trailers, and the complete system is designed to be transportable by large-body aircraft.

  9. Ground-Based Measurement Experiment and First Results with Geosynchronous-Imaging Fourier Transform Spectrometer Engineering Demonstration Unit

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Bingham, Gail E.; Huppi, Ronald J.; Revercomb, Henry E.; Zollinger, Lori J.; Larar, Allen M.; Liu, Xu; Tansock, Joseph J.; Reisse, Robert A.; et al.

    2007-01-01

    The geosynchronous-imaging Fourier transform spectrometer (GIFTS) engineering demonstration unit (EDU) is an imaging infrared spectrometer designed for atmospheric soundings. It measures the infrared spectrum in two spectral bands (14.6 to 8.8 microns, 6.0 to 4.4 microns) using two 128 x 128 detector arrays with a spectral resolution of 0.57 cm^-1 and a scan duration of approximately 11 seconds. From a geosynchronous orbit, the instrument will have the capability of taking successive measurements of such data to scan desired regions of the globe, from which atmospheric state, cloud parameters, wind field profiles, and other derived products can be retrieved. The GIFTS EDU provides a flexible and accurate testbed for the new challenges of the emerging hyperspectral era. The EDU ground-based measurement experiment, held in Logan, Utah during September 2006, demonstrated its extensive capabilities and potential for geosynchronous and other applications (e.g., Earth observing environmental measurements). This paper addresses the experiment objectives and overall performance of the sensor system with a focus on the GIFTS EDU imaging capability and proof of the GIFTS measurement concept.

  10. Multi-energy x-ray imaging and sensing for diagnostic and control of the burning plasma.

    PubMed

    Stutman, D; Tritz, K; Finkenthal, M

    2012-10-01

    New diagnostic and sensor designs are needed for future burning plasma (BP) fusion experiments, having good space and time resolution and capable of prolonged operation in the harsh BP environment. We evaluate the potential of multi-energy x-ray imaging with filtered detector arrays for BP diagnostic and control. Experimental studies show that this simple and robust technique enables measuring with good accuracy, speed, and spatial resolution the Te profile, impurity content, and MHD activity in a tokamak. Applied to the BP, this diagnostic could also serve for non-magnetic sensing of the plasma position, centroid, ELM, and RWM instability. BP-compatible x-ray sensors are proposed using "optical array" or "bi-cell" detectors.
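
    The filter-ratio principle behind such multi-energy measurements admits a short worked example. Assuming a bremsstrahlung spectrum proportional to exp(-E/Te) and ideal step filters with cutoffs E1 and E2, the ratio of the two filtered channels yields Te directly; real foils would require modelled transmission curves, so this is a simplification.

```python
import numpy as np

# Behind an ideal step filter with cutoff E_i, the detected intensity is
# I_i ~ integral_{E_i}^inf exp(-E/Te) dE = Te * exp(-E_i / Te),
# so the ratio of two channels gives Te = (E2 - E1) / ln(I1 / I2).

def te_from_ratio(i1, i2, e1_kev, e2_kev):
    """Electron temperature (keV) from two filtered intensities, e2 > e1."""
    return (e2_kev - e1_kev) / np.log(i1 / i2)

te_true = 2.0  # keV, synthetic plasma temperature
i1 = te_true * np.exp(-1.0 / te_true)  # channel with 1 keV cutoff
i2 = te_true * np.exp(-3.0 / te_true)  # channel with 3 keV cutoff
print(te_from_ratio(i1, i2, 1.0, 3.0))  # recovers 2.0 keV
```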

  11. Census Cities Project and Atlas of Urban and Regional Change

    NASA Technical Reports Server (NTRS)

    Wray, J. R.

    1971-01-01

    The Census Cities Project has several related purposes: (1) to assess the role of remote sensors on high altitude platforms for the comparative study of urban areas; (2) to detect changes in selected U.S. urban areas between the 1970 census and the time of launching of an earth-orbiting sensor platform prior to the next census; (3) to test the utility of the satellite sensor platform to monitor urban change (When the 1970 census returns become available for small areas, they will serve as a control for sensor image interpretation.); (4) to design an information system for incorporating graphic sensor data with census-type data gathered by traditional techniques; (5) to identify and design user-oriented end-products or information services; and (6) to plan an effective organizational capability to provide such services on a continuing basis.

  12. Automated feature extraction and classification from image sources

    USGS Publications Warehouse


    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  13. An Airborne Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR)

    NASA Technical Reports Server (NTRS)

    Piepmeier, J.; Racette, P.; Wang, J.; Crites, A.; Doiron, T.; Engler, C.; Lecha, J.; Powers, M.; Simon, E.; Triesky, M.; et al.

    2001-01-01

    An airborne Conical Scanning Millimeter-wave Imaging Radiometer (CoSMIR) for high-altitude observations from the NASA Research Aircraft (ER-2) is discussed. The primary application of the CoSMIR is water vapor profile remote sensing. Four radiometers operating at 50 (three channels), 92, 150, and 183 (three channels) GHz provide spectral coverage identical to nine of the Special Sensor Microwave Imager/Sounder (SSMIS) high-frequency channels. Constant polarization-basis conical and cross-track scanning capabilities are achieved using an elevation-under-azimuth two-axis gimbals.

  14. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the recognized characters in the monochrome visual field and can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
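
    To make the outer-totalistic idea concrete, here is a toy Python rule in which each cell's next state depends only on its own state and the sum of its eight neighbours. The specific birth/survival thresholds are invented for illustration and are not the paper's segmentation rule.

```python
import numpy as np

def neighbour_sum(grid):
    # 8-neighbour sum, with periodic (wraparound) boundaries for brevity
    return sum(np.roll(np.roll(grid, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def step(grid):
    n = neighbour_sum(grid)
    born = (grid == 0) & (n >= 4)     # fill holes inside dense regions
    survive = (grid == 1) & (n >= 2)  # isolated pixels (n < 2) die
    return (born | survive).astype(np.uint8)

rng = np.random.default_rng(2)
img = (rng.random((32, 32)) < 0.03).astype(np.uint8)  # sparse noise
img[10:18, 10:18] = 1                                 # one compact object
for _ in range(5):
    img = step(img)
print(img.sum())  # close to 64: the blob survives, most noise dies
```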

  15. Optical polarization: background and camouflage

    NASA Astrophysics Data System (ADS)

    Škerlind, Christina; Hallberg, Tomas; Eriksson, Johan; Kariis, Hans; Bergström, David

    2017-10-01

    Polarimetric imaging sensors in the electro-optical region, already available for military and commercial use in both the visible and infrared, show enhanced capabilities for advanced target detection and recognition. These capabilities arise from the ability to discriminate between man-made and natural background surfaces using the polarization information of light. In the development of materials for signature management in the visible and infrared wavelength regions, different criteria need to be met to fulfil the requirements for a good camouflage against modern sensors. In conventional camouflage design, the aim is to spectrally match or adapt the surface properties of an object to a background, thereby minimizing the contrast seen by a specific threat sensor. Examples will be shown from measurements of some relevant materials and of how they affect the polarimetric signature in different ways. Properties that dimension an optical camouflage from a polarimetric perspective, such as the degree of polarization, the viewing or incidence angle, and the amount of diffuse reflection, mainly in the infrared region, will be discussed.

  16. Initial design and performance of the near surface unmanned aircraft system sensor suite in support of the GOES-R field campaign

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Padula, Francis; Shao, Xi; Cao, Changyong; Goodman, Steven J.

    2016-09-01

    One of the main objectives of the Geostationary Operational Environmental Satellite R-Series (GOES-R) field campaign is to validate the SI traceability of the Advanced Baseline Imager. The campaign plans include a feasibility demonstration study for new near surface unmanned aircraft system (UAS) measurement capability that is being developed to meet the challenges of validating geostationary sensors. We report our progress in developing our initial systems by presenting the design and preliminary characterization results of the sensor suite. The design takes advantage of off-the-shelf technologies and fiber-based optical components to make hemispheric directional measurements from a UAS. The characterization results - including laboratory measurements of temperature effects and polarization sensitivity - are used to refine the radiometric uncertainty budget towards meeting the validation objectives for the campaign. These systems will foster improved validation capabilities for the GOES-R field campaign and other next generation satellite systems.

  17. Multi-Sensor Mud Detection

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2010-01-01

    Robust mud detection is a critical perception requirement for Unmanned Ground Vehicle (UGV) autonomous off-road navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface; shadows, snow, ice, and water can also be cooler than surrounding dry soil; shadows are also darker than surrounding dry soil in visible imagery; and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands of the electromagnetic spectrum. Because there are military operations in which it is imperative for UGVs to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.
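
    A hedged sketch of such a rule base is given below: each cue arrives as a per-pixel boolean image, and mud is declared only where all cues agree. The cue names and the all-cues-agree rule are placeholders; the JPL system fuses calibrated multi-band imagery and localizes detections in a stereo-derived terrain map.

```python
import numpy as np

def mud_mask(on_ground, cooler_than_soil, darker_than_soil, highly_polarized):
    """Each argument is a per-pixel boolean cue image from one passive
    sensor. No single cue is definitive, so mud is declared only where
    all cues agree (a deliberately simple stand-in rule)."""
    return on_ground & cooler_than_soil & darker_than_soil & highly_polarized

# Tiny example: only the pixel where every cue fires is labelled mud.
g = np.array([[1, 1], [1, 0]], dtype=bool)
c = np.array([[1, 0], [1, 1]], dtype=bool)
d = np.array([[1, 1], [0, 1]], dtype=bool)
p = np.array([[1, 1], [1, 1]], dtype=bool)
print(mud_mask(g, c, d, p))  # only the top-left pixel is True
```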

  18. Designing and testing the coronagraphic Modal Wavefront Sensor: a fast non-common path error sensor for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Wilby, M. J.; Keller, C. U.; Haffert, S.; Korkiakoski, V.; Snik, F.; Pietrow, A. G. M.

    2016-07-01

    Non-Common Path Errors (NCPEs) are the dominant factor limiting the performance of current astronomical high-contrast imaging instruments. If uncorrected, the resulting quasi-static speckle noise floor limits coronagraph performance to a raw contrast of typically 10^-4, a value which does not improve with increasing integration time. The coronagraphic Modal Wavefront Sensor (cMWS) is a hybrid phase optic which uses holographic PSF copies to supply focal-plane wavefront sensing information directly from the science camera, whilst maintaining a bias-free coronagraphic PSF. This concept has already been successfully implemented on-sky at the William Herschel Telescope (WHT), La Palma, demonstrating both real-time wavefront sensing capability and successful extraction of slowly varying wavefront errors under a dominant and rapidly changing atmospheric speckle foreground. In this work we present an overview of the development of the cMWS and recent first light results obtained using the Leiden EXoplanet Instrument (LEXI), a high-contrast imager and high-dispersion spectrograph pathfinder instrument for the WHT.

  19. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor.

    PubMed

    Griffiths, J A; Chen, D; Turchetta, R; Royle, G J

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10^3 to 1.6 at a gain of 3.93 × 10^1. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10^3, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.

  20. Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor

    NASA Astrophysics Data System (ADS)

    Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.

    2011-03-01

    An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10^3 to 1.6 at a gain of 3.93 × 10^1. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10^3, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
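
    The mean-variance analysis and dynamic-range figures quoted in the two entries above can be illustrated with a short sketch. Differencing paired flat-field frames removes fixed-pattern noise, and dynamic range follows from the saturation level and the noise floor. The 3 DU noise floor paired with the 3050 DU saturation below is an inference chosen to reproduce the quoted 60 dB, not a measured value.

```python
import numpy as np

def mean_variance_points(frame_pairs):
    """Photon-transfer points from pairs of flat-field frames taken at
    increasing illumination: half the variance of the pair difference
    estimates the temporal noise variance at that mean signal level."""
    pts = []
    for f1, f2 in frame_pairs:
        mean = 0.5 * (f1.mean() + f2.mean())
        var = np.var(f1.astype(float) - f2.astype(float)) / 2.0
        pts.append((mean, var))
    return np.array(pts)

def dynamic_range_db(saturation_du, noise_floor_du):
    return 20.0 * np.log10(saturation_du / noise_floor_du)

print(f"{dynamic_range_db(3050, 3.05):.0f} dB")  # -> 60 dB
```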

  1. Microscopic resolution broadband dielectric spectroscopy

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; Watson, P.; Prance, R. J.

    2011-08-01

    Results are presented for a non-contact measurement system capable of micron-level spatial resolution. It utilises the novel electric potential sensor (EPS) technology, invented at Sussex, to image the electric field above a simple composite dielectric material. EP sensors may be regarded as analogous to magnetometers and require no adjustments or offsets during either setup or use. The sample consists of a standard glass/epoxy FR4 circuit board, with linear defects machined into the surface by a PCB milling machine. The sample is excited with an a.c. signal over a range of frequencies from 10 kHz to 10 MHz, from the reverse side, by placing it on a conducting sheet connected to the source. The single sensor is raster-scanned over the surface at a constant working distance, consistent with the spatial resolution, in order to build up an image of the electric field with respect to the reference potential. The results demonstrate that both the surface defects and the internal dielectric variations within the composite may be imaged in this way, with good contrast observed between the glass mat and the epoxy resin.

  2. Focal-Plane Sensing-Processing: A Power-Efficient Approach for the Implementation of Privacy-Aware Networked Visual Sensors

    PubMed Central

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-01-01

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects. PMID:25195849

  3. Focal-plane sensing-processing: a power-efficient approach for the implementation of privacy-aware networked visual sensors.

    PubMed

    Fernández-Berni, Jorge; Carmona-Galán, Ricardo; del Río, Rocío; Kleihorst, Richard; Philips, Wilfried; Rodríguez-Vázquez, Ángel

    2014-08-19

    The capture, processing and distribution of visual information is one of the major challenges for the paradigm of the Internet of Things. Privacy emerges as a fundamental barrier to overcome. The idea of networked image sensors pervasively collecting data generates social rejection in the face of sensitive information being tampered by hackers or misused by legitimate users. Power consumption also constitutes a crucial aspect. Images contain a massive amount of data to be processed under strict timing requirements, demanding high-performance vision systems. In this paper, we describe a hardware-based strategy to concurrently address these two key issues. By conveying processing capabilities to the focal plane in addition to sensing, we can implement privacy protection measures just at the point where sensitive data are generated. Furthermore, such measures can be tailored for efficiently reducing the computational load of subsequent processing stages. As a proof of concept, a full-custom QVGA vision sensor chip is presented. It incorporates a mixed-signal focal-plane sensing-processing array providing programmable pixelation of multiple image regions in parallel. In addition to this functionality, the sensor exploits reconfigurability to implement other processing primitives, namely block-wise dynamic range adaptation, integral image computation and multi-resolution filtering. The proposed circuitry is also suitable to build a granular space, becoming the raw material for subsequent feature extraction and recognition of categorized objects.
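
    As an illustration of the pixelation primitive described in the two entries above, here is a minimal NumPy re-implementation of block-wise pixelation of a selected region, the kind of operation the chip applies at the focal plane before any data leave the sensor. Region coordinates and block size are illustrative; the chip performs this in mixed-signal hardware, not software.

```python
import numpy as np

def pixelate(img, y0, y1, x0, x1, block=8):
    """Replace each block x block tile of img[y0:y1, x0:x1] by its mean,
    obscuring detail (e.g., faces) inside the selected region."""
    out = img.astype(float).copy()
    region = out[y0:y1, x0:x1]          # view into the output array
    h, w = region.shape
    hc, wc = h - h % block, w - w % block
    tiles = region[:hc, :wc].reshape(hc // block, block, wc // block, block)
    means = tiles.mean(axis=(1, 3))
    region[:hc, :wc] = means.repeat(block, axis=0).repeat(block, axis=1)
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
blurred = pixelate(img, 8, 40, 8, 40, block=8)
```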

  4. High Density Schottky Barrier Infrared Charge-Coupled Device (IRCCD) Sensors For Short Wavelength Infrared (SWIR) Applications At Intermediate Temperature

    NASA Astrophysics Data System (ADS)

    Elabd, H.; Villani, T. S.; Tower, J. R.

    1982-11-01

    Monolithic 32 x 64 and 64 x 128 palladium silicide (Pd2Si) interline transfer IRCCDs sensitive in the 1-3.5 μm spectral band have been developed. This silicon imager exhibits a low response nonuniformity of typically 0.2-1.6% rms and has been operated in the temperature range of 40-140 K. Spectral response measurements of test Pd2Si p-type Si devices yield quantum efficiencies of 7.9% at 1.25 μm, 5.6% at 1.65 μm and 2.2% at 2.22 μm. Improvement in quantum efficiency is expected by optimizing the different structural parameters of the Pd2Si detectors. The spectral response of the Pd2Si detectors fits a modified Fowler emission model. The measured photoelectric barrier height for the Pd2Si detector is ≈0.34 eV and the measured quantum efficiency coefficient, C1, is 19%/eV. The dark current level of Pd2Si Schottky barrier focal plane arrays (FPAs) is sufficiently low to enable operation at intermediate temperatures at TV frame rates. The typical dark current level measured at 120 K on the FPA is 2 nA/cm^2. The Pd2Si Schottky barrier imaging technology has been developed for satellite sensing of earth resources. The operating temperature of the Pd2Si FPA is compatible with passive cooler performance. In addition, high density Pd2Si Schottky barrier FPAs are manufactured with high yield and therefore represent an economical approach to short wavelength IR imaging. A Pd2Si Schottky barrier image sensor for push-broom multispectral imaging in the 1.25, 1.65, and 2.22 μm bands is being studied. The sensor will have two line arrays (dual-band capability) of 512 detectors each, with 30 μm center-to-center detector spacing. The device will be suitable for chip-to-chip abutment, thus providing the capability to produce large, multiple-chip focal planes with contiguous, in-line sensors.

  5. LANDSAT-4 Science Investigations Summary, Including December 1983 Workshop Results, Volume 1

    NASA Technical Reports Server (NTRS)

    Barker, J. L. (Editor)

    1984-01-01

    A general overview of the LANDSAT 4 system with emphasis on the Thematic Mapper (TM) is presented. A variety of topics on the design, calibration, capabilities, and image processing techniques of the TM sensor are discussed in detail. The comparison of TM data with other MSS data is also investigated.

  6. Optical Inspection In Hostile Industrial Environments: Single-Sensor VS. Imaging Methods

    NASA Astrophysics Data System (ADS)

    Cielo, P.; Dufour, M.; Sokalski, A.

    1988-11-01

    On-line and unsupervised industrial inspection for quality control and process monitoring is increasingly required in the modern automated factory. Optical techniques are particularly well suited to industrial inspection in hostile environments because of their noncontact nature, fast response time and imaging capabilities. Optical sensors can be used for remote inspection of high temperature products or otherwise inaccessible parts, provided they are in a line-of-sight relation with the sensor. Moreover, optical sensors are much easier to adapt to a variety of part shapes, position or orientation and conveyor speeds as compared to contact-based sensors. This is an important requirement in a flexible automation environment. A number of choices are possible in the design of optical inspection systems. General-purpose two-dimensional (2-D) or three-dimensional (3-D) imaging techniques have advanced very rapidly in the last years thanks to a substantial research effort as well as to the availability of increasingly powerful and affordable hardware and software. Imaging can be realized using 2-D arrays or simpler one-dimensional (1-D) line-array detectors. Alternatively, dedicated single-spot sensors require a smaller amount of data processing and often lead to robust sensors which are particularly appropriate to on-line operation in hostile industrial environments. Many specialists now feel that dedicated sensors or clusters of sensors are often more effective for specific industrial automation and control tasks, at least in the short run. This paper will discuss optomechanical and electro-optical choices with reference to the design of a number of on-line inspection sensors which have been recently developed at our institute. Case studies will include real-time surface roughness evaluation on polymer cables extruded at high speed, surface characterization of hot-rolled or galvanized-steel sheets, temperature evaluation and pinhole detection in aluminum foil, multi-wavelength polymer sheet thickness gauging and thermographic imaging, 3-D lumber profiling, line-array inspection of textiles and glassware, as well as on-line optical inspection for the control of automated arc welding. In each case the design choices between single or multiple-element detectors, mechanical vs. electronic scanning, laser vs. incoherent illumination, etc. will be discussed in terms of industrial constraints such as speed requirements, protection against the environment or reliability of the sensor output.

  7. Usaf Space Sensing Cryogenic Considerations

    NASA Astrophysics Data System (ADS)

    Roush, F.

    2010-04-01

    Infrared (IR) space sensing missions of the future depend upon low-mass components and highly capable imaging technologies. Limitations in visible imaging due to the earth's shadow drive the use of IR surveillance methods for a wide variety of applications in Intelligence, Surveillance, and Reconnaissance (ISR), Ballistic Missile Defense (BMD), and almost certainly in Space Situational Awareness (SSA) and Operationally Responsive Space (ORS) missions. Utilization of IR sensors greatly expands and improves mission capabilities, including target and target-behavior discrimination. Background IR emissions and the electronic noise inherently present in Focal Plane Arrays (FPAs) and surveillance optics bench designs prevent their use unless they are cooled to cryogenic temperatures. This paper describes the role of cryogenic coolers as an enabling technology for generic ISR and BMD missions and provides ISR and BMD mission and requirement planners with a brief glimpse of this critical technology's implementation potential. The interaction between cryogenic refrigeration component performance and the IR sensor optics and FPA can be seen not only as mission enabling but also as mission performance enhancing when the refrigeration system is considered as part of an overall optimization problem.

  8. SPIDER: Next Generation Chip Scale Imaging Sensor

    NASA Astrophysics Data System (ADS)

    Duncan, Alan; Kendrick, Rick; Thurman, Sam; Wuchenich, Danielle; Scott, Ryan P.; Yoo, S. J. B.; Su, Tiehui; Yu, Runxiang; Ogden, Chad; Proiett, Roberto

    The LM Advanced Technology Center and UC Davis are developing an Electro-Optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that provides a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger aperture imager in a constrained volume. The SPIDER concept consists of thousands of direct detection white-light interferometers densely packed onto Photonic Integrated Circuits (PICs) to measure the amplitude and phase of the visibility function at spatial frequencies that span the full synthetic aperture. In other words, SPIDER samples the object being imaged in the Fourier domain (i.e., spatial frequency domain) and then digitally reconstructs an image. The conventional approach for imaging interferometers requires complex mechanical delay lines to form the interference fringes, resulting in designs that are not traceable to more than a few simultaneous spatial frequency measurements. SPIDER seeks to achieve this traceability by employing micron-scale optical waveguides and nanophotonic structures fabricated on a PIC with micron-scale packing density to form the necessary interferometers. Prior LM IRAD and DARPA/NASA CRAD-funded SPIDER risk reduction experiments, design trades, and simulations have matured the SPIDER imager concept to a TRL 3 level. Current funding under the DARPA SPIDER Zoom program is maturing the underlying PIC technology for SPIDER to the TRL 4 level. This is being done by developing and fabricating a second-generation PIC that is fully traceable to the multiple layers and low-power phase modulators required for the higher-dimension waveguide arrays needed for higher field-of-view sensors. Our project also seeks to extend the SPIDER concept with a zoom capability that would provide simultaneous low-resolution, large field-of-view and steerable high-resolution, narrow field-of-view imaging modes. A proof-of-concept demo is being designed to validate this capability. Finally, data collected by this project will be used to benchmark and increase the fidelity of our SPIDER image simulations and enhance our ability to predict the performance of existing and future SPIDER sensor design variations. These designs and their associated performance characteristics could then be evaluated as candidates for future mission opportunities to identify specific transition paths. This paper provides an overview of performance data on the first-generation PIC for SPIDER developed under DARPA SeeMe program funding. We provide a design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low-resolution versions) currently under development in the DARPA SPIDER Zoom effort. Results of performance simulations and design trades are presented. Unique low-cost payload applications for future SSA missions are also discussed.

  9. Smart sensor for terminal homing

    NASA Astrophysics Data System (ADS)

    Panda, D.; Aggarwal, R.; Hummel, R.

    1980-01-01

    The practical scene matching problem presents complications that extend beyond classical image processing capabilities. Certain aspects of the scene matching problem which must be addressed by a smart sensor for terminal homing are discussed. First, a philosophy for treating the matching problem in the terminal homing scenario is outlined. Then certain aspects of the feature extraction process and symbolic pattern matching are considered. It is expected that general ideas from artificial intelligence will become increasingly useful for the terminal homing requirements of fast scene recognition and pattern matching.

  10. Planetary Remote Sensing Science Enabled by MIDAS (Multiple Instrument Distributed Aperture Sensor)

    NASA Technical Reports Server (NTRS)

    Pitman, Joe; Duncan, Alan; Stubbs, David; Sigler, Robert; Kendrick, Rick; Chilese, John; Lipps, Jere; Manga, Mike; Graham, James; dePater, Imke

    2004-01-01

    The science capabilities and features of an innovative and revolutionary approach to remote sensing imaging systems, aimed at increasing the return on future space science missions many fold, are described. Our concept, called Multiple Instrument Distributed Aperture Sensor (MIDAS), provides a large-aperture, wide-field, diffraction-limited telescope at a fraction of the cost, mass and volume of conventional telescopes, by integrating optical interferometry technologies into a mature multiple aperture array concept that addresses one of the highest needs for advancing future planetary science remote sensing.

  11. Functional design for operational earth resources ground data processing

    NASA Technical Reports Server (NTRS)

    Baldwin, C. J. (Principal Investigator); Bradford, L. H.; Hutson, D. E.; Jugle, D. R.

    1972-01-01

    The author has identified the following significant results. Study emphasis was on developing a unified concept for the required ground system, capable of handling data from all viable acquisition platforms and sensor groupings envisaged as supporting operational earth survey programs. The platforms considered include both manned and unmanned spacecraft in near earth orbit, and continued use of low and high altitude aircraft. The sensor systems include both imaging and nonimaging devices, operated both passively and actively, from the ultraviolet to the microwave regions of the electromagnetic spectrum.

  12. Modal wavefront sensor for adaptive confocal microscopy

    NASA Astrophysics Data System (ADS)

    Booth, Martin J.; Neil, Mark A. A.; Wilson, Tony

    2000-05-01

    A confocal microscope permits 3D imaging of volume objects by the inclusion of a pinhole in the detector path which eliminates out of focus light. This configuration is however very sensitive to aberrations induced by the specimen or the optical system and would therefore benefit from an adaptive optics approach. We present a wavefront sensor capable of measuring directly the Zernike components of an aberrated wavefront and show that it is particularly applicable to the confocal microscope since only those wavefronts originating in the focal region contribute to the measured aberration.
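
    The modal idea, expressing a wavefront as a sum of Zernike modes and reading off the coefficients, can be sketched numerically. The Python below recovers tip, tilt, and defocus coefficients by least squares; the actual sensor measures them optically, and the three-mode basis is a simplification for illustration.

```python
import numpy as np

def zernike_basis(n=64):
    """Tip, tilt, and defocus sampled on an n x n grid over the unit pupil."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x**2 + y**2
    mask = r2 <= 1.0
    modes = [2 * x, 2 * y, np.sqrt(3) * (2 * r2 - 1)]
    return np.stack([m[mask] for m in modes], axis=1), mask

def modal_decompose(wavefront):
    """Least-squares projection of a sampled wavefront onto the modes."""
    B, mask = zernike_basis(wavefront.shape[0])
    coeffs, *_ = np.linalg.lstsq(B, wavefront[mask], rcond=None)
    return coeffs

n = 64
B, mask = zernike_basis(n)
w = np.zeros((n, n))
w[mask] = 0.3 * B[:, 2]            # synthetic pure-defocus wavefront
print(modal_decompose(w))          # approximately [0, 0, 0.3]
```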

  13. Unmanned Aircraft Systems Sensors

    DTIC Science & Technology

    2005-05-01

    Briefing on the development of UAS (Unmanned Aircraft Systems) and UA sensor capabilities. For Small, Tactical, and Operational & Theater UA EO/IR sensors, the stated EO requirement is a facial recognition capability while remaining undetected (NIIRS 8+), with corresponding IR requirements.

  14. An electrically tunable plenoptic camera using a liquid crystal microlens array.

    PubMed

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  15. An electrically tunable plenoptic camera using a liquid crystal microlens array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Yu (School of Automation and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074)

    2015-05-15

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  16. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
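
    The basic rendering step for a focused plenoptic camera is easy to sketch: tile a patch from each microimage, with the patch size selecting the plane rendered in focus. The grid and sizes below are invented, and the interleaved-focal-length logic this paper introduces is omitted.

```python
import numpy as np

def render(microimages, patch):
    """microimages: array of shape (rows, cols, m, m), one m x m microimage
    per microlens. Returns a (rows*patch, cols*patch) image assembled from
    the central patch of each microimage; varying `patch` refocuses."""
    rows, cols, m, _ = microimages.shape
    c0 = (m - patch) // 2
    tiles = microimages[:, :, c0:c0 + patch, c0:c0 + patch]
    return tiles.transpose(0, 2, 1, 3).reshape(rows * patch, cols * patch)

mi = np.random.default_rng(3).random((10, 10, 16, 16))  # synthetic raw data
img = render(mi, patch=8)  # 80 x 80 rendered image
```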

  17. An electrically tunable plenoptic camera using a liquid crystal microlens array

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  18. Diffractive optics technology and the NASA Geostationary Earth Observatory (GEO)

    NASA Technical Reports Server (NTRS)

    Morris, G. Michael; Michaels, Robert L.; Faklis, Dean

    1992-01-01

    Diffractive (or binary) optics offers unique capabilities for the development of large-aperture, high-performance, light-weight optical systems. The Geostationary Earth Observatory (GEO) will consist of a variety of instruments to monitor the environmental conditions of the earth and its atmosphere. The aim of this investigation is to analyze the design of the GEO instrument that is being proposed and to identify the areas in which diffractive (or binary) optics technology can make a significant impact in GEO sensor design. Several potential applications where diffractive optics may indeed serve as a key technology for improving the performance and reducing the weight and cost of the GEO sensors have been identified. Applications include the use of diffractive/refractive hybrid lenses for aft-optic imagers, diffractive telescopes for narrowband imaging, subwavelength structured surfaces for anti-reflection and polarization control, and aberration compensation for reflective imaging systems and grating spectrometers.

  19. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †

    PubMed Central

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-01

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434

  20. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    PubMed

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
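
    Direct time-of-flight ranging, the principle behind the SPAD LIDAR in the two entries above, can be illustrated with a short histogram sketch: photon arrival times from many laser pulses are binned, and the peak bin gives the round-trip time. The bin width, pulse statistics, and ambient-light level are invented values.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_range(timestamps_s, bin_width_s=1e-9):
    """Histogram single-photon arrival times and convert the peak bin to
    a range; the factor 0.5 accounts for the out-and-back path."""
    t = np.asarray(timestamps_s, dtype=float)
    edges = np.arange(0.0, t.max() + bin_width_s, bin_width_s)
    hist, edges = np.histogram(t, bins=edges)
    t_peak = edges[np.argmax(hist)] + bin_width_s / 2
    return 0.5 * C * t_peak

rng = np.random.default_rng(1)
true_t = 2 * 15.0 / C                                   # target at 15 m
stamps = rng.normal(true_t, 2e-10, 5000)                # jittered returns
stamps = np.concatenate([stamps, rng.uniform(0, 2e-7, 500)])  # ambient photons
print(f"{tof_range(stamps):.2f} m")  # close to 15 m
```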

  1. Electrical capacitance volume tomography with high contrast dielectrics using a cuboid sensor geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    2007-05-01

    An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
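
    The binary-constraint idea admits a minimal sketch: when the permittivity is known to be one of two values, each cell can be decided by thresholding the change in self-capacitance, sidestepping the ill-determined linear inversion of standard ECT. The sensitivity model that maps electrode readings onto a 3-D grid is omitted here, and the threshold is invented.

```python
import numpy as np

def binary_reconstruct(self_caps, empty_caps, tau=0.5):
    """self_caps, empty_caps: per-electrode self-capacitance readings with
    and without the object present. Returns a binary occupancy estimate
    per electrode cell; a real system would map these onto a volume grid
    via a modelled sensitivity matrix."""
    delta = np.asarray(self_caps, float) - np.asarray(empty_caps, float)
    norm = delta / max(delta.max(), 1e-12)  # normalise to the largest change
    return (norm > tau).astype(int)

empty = np.full(16, 10.0)                      # 4 x 4 electrode plane, pF
loaded = empty.copy()
loaded[[5, 6, 9, 10]] += 2.0                   # object over the centre cells
print(binary_reconstruct(loaded, empty))       # 1s at the centre electrodes
```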

  2. Enhancing the capability of the research fleet.

    NASA Astrophysics Data System (ADS)

    Pinkel, R.

    2012-12-01

    While the performance and economics of our vessels and manned platforms are fixed by fundamental principles, their scientific capabilities can be considerably extended through the development of new technology. Potential future systems include multi-beam swath-mapping sonars for 3-D imaging of plankton patchiness, wire-guided profiling velocity sensors for establishing full-ocean-depth velocity profiles, shipboard HF radar (CODAR) for mapping energetic currents, and shipboard Doppler radar for mapping the surface wave spectrum. Research vessel users should have access to undersea gliders and autonomous aircraft as well as the current AUVs. In addition, the use of manned stable platforms in an observatory setting deserves further consideration. As well as providing an ideal mount for meteorological and oceanographic sensors, the platforms can provide electrical power and a "heavy lift" capability for sea floor and water column studies. Concerted community effort will be required to develop these new technologies, not all of which will be commercially viable. A strong academic technology base is necessary.

  3. Multi sensor satellite imagers for commercial remote sensing

    NASA Astrophysics Data System (ADS)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as food supply monitoring, crop yield estimation and disaster monitoring in mind. The aim of these imagers is to achieve medium to high resolution (2.5m to 15m) spatial sampling, wide swaths (up to 45km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, addressing the choice of detectors needed to achieve this performance. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25m GSD) and a catadioptric imager with panchromatic (2.7m GSD), multi-spectral (6 bands, 4.6m GSD) and hyperspectral (400nm to 2.35μm, 200 bands, 15m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video viewfinding capabilities. The electronic units can be subdivided into Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing, and a high-speed LVDS data link to a mass memory unit.

  4. The GOES-R GeoStationary Lightning Mapper (GLM)

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2011-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The GLM will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and in assessing convective weather impacts on aviation safety and efficiency, among a number of potential applications. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms (environmental data records), cal/val performance monitoring tools, and new applications using GLM alone, in combination with the ABI, merged with ground-based sensors, and decision aids augmented by numerical weather prediction model forecasts. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. An international field campaign planned for 2011-2012 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, TRMM Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.

  5. Surface Wind Vector and Rain Rate Observation Capability of Future Hurricane Imaging Radiometer (HIRAD)

    NASA Technical Reports Server (NTRS)

    Miller, Timothy; Atlas, Robert; Bailey, M. C.; Black, Peter; El-Nimri, Salem; Hood, Robbie; James, Mark; Johnson, James; Jones, Linwood; Ruf, Christopher

    2009-01-01

    The Hurricane Imaging Radiometer (HIRAD) is the next-generation Stepped Frequency Microwave Radiometer (SFMR), and it will offer the capability of simultaneous wide-swath observations of both extreme ocean surface wind vector and strong precipitation from either aircraft (including UAS) or satellite platforms. HIRAD will be a compact, lightweight, low-power instrument with no moving parts that will produce valid wind observations under hurricane conditions when existing microwave sensors (radiometers or scatterometers) are hindered by precipitation. The SFMR is a proven aircraft remote sensing system for simultaneously observing extreme ocean surface wind speeds and rain rates, including those of major hurricane intensity. The proposed HIRAD instrument advances beyond the current nadir-viewing SFMR to an equivalent wide-swath SFMR imager using passive microwave synthetic thinned aperture radiometer technology. The first version of the instrument will be a single-polarization system for wind speed and rain rate, with a dual-polarization system to follow for wind vector capability. This sensor will operate over 4-7 GHz (C-band frequencies), where the required tropical cyclone remote sensing physics has been validated by both SFMR and WindSat radiometers. HIRAD incorporates a unique, technologically advanced array antenna and several other technologies successfully demonstrated by NASA's Instrument Incubator Program. A brassboard (laboratory) version of the instrument has been completed and successfully tested in a test chamber. Development of the aircraft instrument is underway, with flight testing planned for the fall of 2009. Preliminary Observing System Simulation Experiments (OSSEs) show that HIRAD will have a significant positive impact on surface wind analyses as either a new aircraft or satellite sensor. New off-nadir data collected in 2008 by SFMR that affirm the ability of this measurement technique to obtain wind speed data at non-zero incidence angles will be presented, as well as data from the brassboard instrument chamber tests.

  6. Evolution of miniature detectors and focal plane arrays for infrared sensors

    NASA Astrophysics Data System (ADS)

    Watts, Louis A.

    1993-06-01

    Sensors that are sensitive in the infrared spectral region have been under continuous development since the WW2 era. A quest for the military advantage of 'seeing in the dark' has pushed thermal imaging technology toward high spatial and temporal resolution for night vision equipment, fire control, search track, and seeker 'homing' guidance sensing devices. Similarly, scientific applications have pushed spectral resolution for chemical analysis, remote sensing of earth resources, and astronomical exploration applications. As a result of these developments, focal plane arrays (FPA) are now available with sufficient sensitivity for both high spatial and narrow-bandwidth spectral resolution imaging over large fields of view. Such devices combined with emerging opto-electronic developments in integrated FPA data processing techniques can yield miniature sensors capable of imaging reflected sunlight in the near IR and emitted thermal energy in the mid-wave (MWIR) and long-wave (LWIR) IR spectral regions. Robotic space sensors equipped with advanced versions of these FPAs will provide high resolution 'pictures' of their surroundings, perform remote analysis of solid, liquid, and gas matter, or selectively look for 'signatures' of specific objects. Evolutionary trends and projections of future low-power micro-detector FPA developments for day/night operation or use in adverse viewing conditions are presented in the following text.

  7. Evolution of miniature detectors and focal plane arrays for infrared sensors

    NASA Technical Reports Server (NTRS)

    Watts, Louis A.

    1993-01-01

    Sensors that are sensitive in the infrared spectral region have been under continuous development since the WW2 era. A quest for the military advantage of 'seeing in the dark' has pushed thermal imaging technology toward high spatial and temporal resolution for night vision equipment, fire control, search track, and seeker 'homing' guidance sensing devices. Similarly, scientific applications have pushed spectral resolution for chemical analysis, remote sensing of earth resources, and astronomical exploration applications. As a result of these developments, focal plane arrays (FPA) are now available with sufficient sensitivity for both high spatial and narrow-bandwidth spectral resolution imaging over large fields of view. Such devices combined with emerging opto-electronic developments in integrated FPA data processing techniques can yield miniature sensors capable of imaging reflected sunlight in the near IR and emitted thermal energy in the mid-wave (MWIR) and long-wave (LWIR) IR spectral regions. Robotic space sensors equipped with advanced versions of these FPAs will provide high resolution 'pictures' of their surroundings, perform remote analysis of solid, liquid, and gas matter, or selectively look for 'signatures' of specific objects. Evolutionary trends and projections of future low-power micro-detector FPA developments for day/night operation or use in adverse viewing conditions are presented in the following text.

  8. Fusion of imaging and nonimaging data for surveillance aircraft

    NASA Astrophysics Data System (ADS)

    Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre

    1997-06-01

    This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).

  9. Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs).

    PubMed

    Jaramillo, Carlos; Valenti, Roberto G; Guo, Ling; Xiao, Jizhong

    2016-02-06

    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances.

  10. An extreme events laboratory to provide network centric collaborative situation assessment and decision making

    NASA Astrophysics Data System (ADS)

    Panulla, Brian J.; More, Loretta D.; Shumaker, Wade R.; Jones, Michael D.; Hooper, Robert; Vernon, Jeffrey M.; Aungst, Stanley G.

    2009-05-01

    Rapid improvements in communications infrastructure and sophistication of commercial hand-held devices provide a major new source of information for assessing extreme situations such as environmental crises. In particular, ad hoc collections of humans can act as "soft sensors" to augment data collected by traditional sensors in a net-centric environment (in effect, "crowd-sourcing" observational data). A need exists to understand how to task such soft sensors, characterize their performance and fuse the data with traditional data sources. In order to quantitatively study such situations, as well as study distributed decision-making, we have developed an Extreme Events Laboratory (EEL) at The Pennsylvania State University. This facility provides a network-centric, collaborative situation assessment and decision-making capability by supporting experiments involving human observers, distributed decision making and cognition, and crisis management. The EEL spans the information chain from energy detection via sensors, human observations, signal and image processing, pattern recognition, statistical estimation, multi-sensor data fusion, visualization and analytics, and modeling and simulation. The EEL command center combines COTS and custom collaboration tools in innovative ways, providing capabilities such as geo-spatial visualization and dynamic mash-ups of multiple data sources. This paper describes the EEL and several on-going human-in-the-loop experiments aimed at understanding the new collective observation and analysis landscape.

  11. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640×512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
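
    For a sense of scale, the quoted frame size and rate imply roughly the following raw throughput (assuming a hypothetical 2 bytes per pixel; the abstract does not state the output bit depth):

        width, height, fps = 640, 512, 1700
        bytes_per_px = 2                              # assumed, not from the abstract
        rate = width * height * fps * bytes_per_px    # ~1.11e9 bytes per second
        print(f"{rate / 1e9:.2f} GB/s raw")
        print(f"{16e9 / rate:.1f} s to fill 16 GB of on-board memory")  # ~14.4 s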

  12. Snapshot Imaging Spectrometry in the Visible and Long Wave Infrared

    NASA Astrophysics Data System (ADS)

    Maione, Bryan David

    Imaging spectrometry is an optical technique in which the spectral content of an object is measured at each location in space. The main advantage of this modality is that it enables characterization beyond what is possible with a conventional camera, since spectral information is generally related to the chemical composition of the object. Due to this, imaging spectrometers are often capable of detecting targets that are either morphologically inconsistent or even under-resolved. A specific class of imaging spectrometer, known as a snapshot system, seeks to measure all spatial and spectral information simultaneously, thereby rectifying artifacts associated with scanning designs, and enabling the measurement of temporally dynamic scenes. Snapshot designs are the focus of this dissertation. Three designs for snapshot imaging spectrometers are developed, each providing novel contributions to the field of imaging spectrometry. In chapter 2, the first spatially heterodyned snapshot imaging spectrometer is modeled and experimentally validated. Spatial heterodyning is a technique commonly implemented in non-imaging Fourier transform spectrometry. For Fourier transform imaging spectrometers, spatial heterodyning improves the spectral resolution trade space. Additionally, in this chapter a unique neural-network-based spectral calibration is developed and determined to be an improvement over Fourier and linear-operator-based techniques. Leveraging spatial heterodyning as developed in chapter 2, in chapter 3 a high spectral resolution snapshot Fourier transform imaging spectrometer, based on a Savart plate interferometer, is developed and experimentally validated. The sensor presented in this chapter is the highest spectral resolution sensor in its class. High spectral resolution enables the sensor to discriminate narrowly spaced spectral lines. The capabilities of neural networks in imaging spectrometry are further explored in this chapter. Neural networks are used to perform single target detection on raw instrument data, thereby eliminating the need for an explicit spectral calibration step. As an extension of the results in chapter 2, neural networks are once again demonstrated to be an improvement when compared to linear-operator-based detection. In chapter 4 a non-interferometric design is developed for the long wave infrared (wavelengths spanning 8-12 microns). The imaging spectrometer developed in this chapter is a multi-aperture filtered microbolometer. Since the detector is uncooled, the presented design is ultra-compact and low power. Additionally, cost-effective polymer absorption filters are used in lieu of interference filters. Since each measurement of the system is spectrally multiplexed, an SNR advantage is realized. A theoretical model for the filtered design is developed, and the performance of the sensor for detecting liquid contaminants is investigated. Similar to past chapters, neural networks are used and achieve false detection rates of less than 1%. Lastly, the dissertation concludes with a discussion of future work and the potential impact of these devices.

  13. The International Space Station: New Capabilities for Disaster Response and Humanitarian Aid

    NASA Technical Reports Server (NTRS)

    Stefanov, William

    2012-01-01

    The International Space Station (ISS) has been acquiring Earth imagery since 2000, primarily in the form of astronaut photography using hand-held film and digital cameras. Recent additions of more sophisticated multispectral and hyperspectral sensor systems have expanded both the capabilities and relevance of the ISS to basic research, applied Earth science, and development of new sensor technologies. Funding opportunities established within NASA, the US National Laboratories and the international partner organizations have generated instrument proposals that will further enhance these capabilities. With both internal and external sensor location options, and the availability of both automated and human-tended operational environments, the ISS is a unique platform within the constellation of Earth-observing satellites currently in orbit. Current progress and challenges associated with development of ISS terrestrial remote sensing capabilities in the area of disaster response and support of relief efforts will be presented. The ISS orbit allows for imaging of the Earth's surface at varying times of day and night, providing opportunities for data collection over approximately 95% of the populated regions. These opportunities are distinct from--yet augment--the data collection windows for the majority of sensors on polar-orbiting satellites. In addition to this potential for "being in the right place at the right time" to collect critical information on an evolving disaster, the presence of a human crew also allows for immediate recognition of an event from orbit, notification of relevant organizations on the ground, and re-tasking of available remote sensing resources to support humanitarian response and relief efforts. Challenges to establishing an integrated response capability are both technical (coordination of sensor targeting and data collection, rapid downlink and posting of data to a central accessible hub, timely generation and distribution of relevant data products) and operational (notification and engagement of sensor support teams, international partner agency sanction of astronaut support activities). To better collaborate on common issues and strengthen applications, including using the data to support disaster response, we established an ISS Program Science Forum Working Group for Earth Observations comprised of representatives from the international partner agencies. This international forum welcomes input and support from relevant United Nations task groups regarding our disaster response and humanitarian aid to enable development of the ISS capabilities in this area for greatest value to the international community.

  14. Studies of prototype DEPFET sensors for the Wide Field Imager of Athena

    NASA Astrophysics Data System (ADS)

    Treberspurg, Wolfgang; Andritschke, Robert; Bähr, Alexander; Behrens, Annika; Hauser, Günter; Lechner, Peter; Meidinger, Norbert; Müller-Seidlitz, Johannes; Treis, Johannes

    2017-08-01

    The Wide Field Imager (WFI) of ESA's next X-ray observatory Athena will combine a high count rate capability with a large field of view, both with state-of-the-art spectroscopic performance. To meet these demands, specific DEPFET active pixel detectors have been developed and operated. Due to the intrinsic amplification of detected signals, they are best suited to achieve high-speed and low-noise performance. Different fabrication technologies and transistor geometries have been implemented on a dedicated prototype production in the course of the development of the DEPFET sensors. The main differences between the sensors concern the shape of the transistor gate (layout) and the thickness of the gate oxide (technology). To facilitate the fabrication and testing of the resulting variety of sensors, the presented studies were carried out with 64×64 pixel detectors. The detector comprises a control ASIC (Switcher-A), a readout ASIC (VERITAS-2) and the sensor. In this paper we give an overview of the evaluation of different prototype sensors. The most important results, which have been decisive for the identification of the optimal fabrication technology and transistor layout for subsequent sensor productions, are summarized. It will be shown that the developments result in an excellent performance of spectroscopic X-ray DEPFETs, with typical noise values below 2.5 ENC at 2.5 μs/row.

  15. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh

    2009-01-01

    Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming of the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.

  16. A Vision of Quantitative Imaging Technology for Validation of Advanced Flight Technologies

    NASA Technical Reports Server (NTRS)

    Horvath, Thomas J.; Kerns, Robert V.; Jones, Kenneth M.; Grinstead, Jay H.; Schwartz, Richard J.; Gibson, David M.; Taylor, Jeff C.; Tack, Steve; Dantowitz, Ronald F.

    2011-01-01

    Flight-testing is traditionally an expensive but critical element in the development and ultimate validation and certification of technologies destined for future operational capabilities. Measurements obtained in relevant flight environments also provide unique opportunities to observe flow phenomena that are often beyond the capabilities of ground testing facilities and computational tools to simulate or duplicate. However, the challenges of minimizing vehicle weight and internal complexity as well as instrumentation bandwidth limitations often restrict the ability to make high-density, in-situ measurements with discrete sensors. Remote imaging offers a potential opportunity to noninvasively obtain such flight data in a complementary fashion. The NASA Hypersonic Thermodynamic Infrared Measurements Project has demonstrated such a capability to obtain calibrated thermal imagery on a hypersonic vehicle in flight. Through the application of existing and accessible technologies, the acreage surface temperature of the Shuttle lower surface was measured during reentry. Future hypersonic cruise vehicles, launcher configurations and reentry vehicles will, however, challenge current remote imaging capability. As NASA embarks on the design and deployment of a new Space Launch System architecture for access beyond earth orbit (and the commercial sector focused on low earth orbit), an opportunity exists to implement an imagery system and its supporting infrastructure that provides sufficient flexibility to incorporate changing technology to address the future needs of the flight test community. A long-term vision is offered that supports the application of advanced multi-waveband sensing technology to aid in the development of future aerospace systems and critical technologies to enable highly responsive vehicle operations across the aerospace continuum, spanning launch, reusable space access and global reach. Motivations for development of an Agency-level imagery-based measurement capability to support cross-cutting applications that span the Agency mission directorates as well as meeting potential needs of the commercial sector and national interests of the Intelligence, Surveillance and Reconnaissance community are explored. A recommendation is made for an assessment study to baseline current imaging technology including the identification of future mission requirements. Development of requirements fostered by the applications suggested in this paper would be used to identify technology gaps and direct roadmapping for implementation of an affordable and sustainable next generation sensor/platform system.

  17. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    NASA Astrophysics Data System (ADS)

    Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments, in combination with recent advances in sensor and related computer vision technologies, enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with their small fields of view and well-constrained extension of the radiation field, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of functional and anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  18. An advanced wide area chemical sensor testbed

    NASA Astrophysics Data System (ADS)

    Seeley, Juliette A.; Kelly, Michael; Wack, Edward; Ryan-Howard, Danette; Weidler, Darryl; O'Brien, Peter; Colonero, Curtis; Lakness, John; Patel, Paras

    2005-11-01

    In order to meet current and emerging needs for remote passive standoff detection of chemical agent threats, MIT Lincoln Laboratory has developed a Wide Area Chemical Sensor (WACS) testbed. A design study helped define the initial concept, guided by current standoff sensor mission requirements. Several variants of this initial design have since been proposed to target other applications within the defense community. The design relies on several enabling technologies required for successful implementation. The primary spectral component is a Wedged Interferometric Spectrometer (WIS) capable of imaging in the LWIR with spectral resolutions as narrow as 4 cm⁻¹. A novel scanning optic will enhance the ability of this sensor to scan over large areas of concern with a compact, rugged design. In this paper, we shall discuss our design, development, and calibration process for this system as well as recent testbed measurements that validate the sensor concept.
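
    To relate the quoted wavenumber resolution to wavelength terms in the LWIR, the conversion is Δλ = λ²·Δν (with λ in cm and Δν in cm⁻¹); a worked example, not taken from the paper:

        wavelength_um = 10.0                    # representative LWIR wavelength
        d_nu = 4.0                              # quoted resolution, cm^-1
        nu = 1e4 / wavelength_um                # wavenumber: 1000 cm^-1 at 10 um
        d_lambda_um = wavelength_um**2 * d_nu * 1e-4
        print(d_lambda_um)                      # ~0.04 um, i.e. 40 nm at 10 um
        print(nu / d_nu)                        # resolving power ~250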

  19. Smart sensing surveillance system

    NASA Astrophysics Data System (ADS)

    Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen

    2010-04-01

    Unattended ground sensor (UGS) networks have been widely used in remote battlefield and other tactical applications over the last few decades due to advances in digital signal processing. The UGS network can be applied in a variety of areas including border surveillance, special force operations, perimeter and building protection, target acquisition, situational awareness, and force protection. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in a situation management environment. The S4 is composed of a number of distributed nodes to collect, process, and disseminate heterogeneous sensor data. Nearly all S4 nodes have passive sensors to provide rapid omnidirectional detection. In addition, Pan-Tilt-Zoom (PTZ) electro-optic (EO)/IR cameras are integrated into selected nodes to track objects and capture associated imagery. These camera-connected S4 nodes provide advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers. In the S4, all the nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detect) wireless mesh network using Ultra-wideband (UWB) RF technology, which provides an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The S4 utilizes a Service Oriented Architecture such that remote applications can interact with the S4 network and use the specific presentation methods. The S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments and near perimeters and borders. The S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE®) standards. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.

  20. Community tools for cartographic and photogrammetric processing of Mars Express HRSC images

    USGS Publications Warehouse

    Kirk, Randolph L.; Howington-Kraus, Elpitha; Edmundson, Kenneth L.; Redding, Bonnie L.; Galuszka, Donna M.; Hare, Trent M.; Gwinner, K.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.

    2017-01-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit. As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area and feature based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product.
By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. We are also working with BAE to release the CSM source code under a BSD or MIT open source license in early 2018.
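
    The control step described above amounts to applying the rigid transform recovered by point cloud alignment to triangulated tiepoint coordinates, placing them in the MOLA frame. A schematic sketch (R and t stand in for a pc_align result; this is not the USGS pipeline code):

        import numpy as np

        def tiepoints_to_gcps(tiepoints_xyz, R, t):
            """Map body-fixed tiepoint coordinates into the MOLA-registered
            frame using a rotation R and translation t from DTM alignment."""
            return tiepoints_xyz @ R.T + t

        R = np.eye(3)                        # hypothetical alignment rotation
        t = np.array([12.0, -5.0, 3.0])      # hypothetical translation, metres
        tiepoints = np.array([[1.0e3, 2.0e3, -1.5e3]])
        print(tiepoints_to_gcps(tiepoints, R, t))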

  1. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U. S. S. R

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadowski, F.G.; Covington, S.J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The results of the data processing and analysis illustrate the spectral and spatial capabilities of the two sensor systems and provide information about the severity and duration of the events occurring at the power plant site.

  2. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  3. An image based vibration sensor for soft tissue modal analysis in a Digital Image Elasto Tomography (DIET) system.

    PubMed

    Feng, Sheng; Lotz, Thomas; Chase, J Geoffrey; Hann, Christopher E

    2010-01-01

    Digital Image Elasto Tomography (DIET) is a non-invasive elastographic breast cancer screening technology, based on image-based measurement of surface vibrations induced on a breast by mechanical actuation. Knowledge of frequency response characteristics of a breast prior to imaging is critical to maximize the imaging signal and diagnostic capability of the system. A feasibility analysis for a non-invasive image based modal analysis system is presented that is able to robustly and rapidly identify resonant frequencies in soft tissue. Three images per oscillation cycle are enough to capture the behavior at a given frequency. Thus, a sweep over critical frequency ranges can be performed prior to imaging to determine critical imaging settings of the DIET system to optimize its tumor detection performance.
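
    The claim that three images per oscillation cycle suffice follows from counting unknowns: a steady-state response a·sin(ωt) + b·cos(ωt) + c at a known actuation frequency has three parameters, so three samples determine amplitude and phase exactly in the noise-free case. An illustrative sketch with made-up numbers:

        import numpy as np

        w = 2 * np.pi * 50.0                     # hypothetical 50 Hz actuation
        t = np.array([0.0, 1/150.0, 2/150.0])    # three frames in one 20 ms cycle
        a, b, c = 0.8, -0.3, 5.0                 # "true" motion parameters
        s = a*np.sin(w*t) + b*np.cos(w*t) + c    # the three measured samples

        M = np.column_stack([np.sin(w*t), np.cos(w*t), np.ones(3)])
        a_hat, b_hat, c_hat = np.linalg.solve(M, s)
        print(np.hypot(a_hat, b_hat))            # vibration amplitude, ~0.854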

  4. Ultra-Sensitive Photoreceiver Boosts Data Transmission

    NASA Technical Reports Server (NTRS)

    2007-01-01

    NASA depends on advanced, ultra-sensitive photoreceivers and photodetectors to provide high-data-rate communications and pinpoint image-detection and -recognition capabilities from great distances. In 2003, Epitaxial Technologies LLC was awarded a Small Business Innovation Research (SBIR) contract from Goddard Space Flight Center to address needs for advanced sensor components. Epitaxial developed a photoreceiver capable of single-photon sensitivity that is also smaller, lighter, and requires less power than its predecessor. This receiver operates in several wavelength ranges; will allow data rate transmissions in the terabit range; and will enhance Earth-based missions for remote sensing of crops and other natural resources, including applications for fluorescence and phosphorescence detection. Widespread military and civilian applications are anticipated, especially through enhancing fiber optic communications, laser imaging, and laser communications.

  5. Earth Surface Monitoring with COSI-Corr, Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Leprince, S.; Ayoub, F.; Avouac, J.

    2009-12-01

    Co-registration of Optically Sensed Images and Correlation (COSI-Corr) is a software package developed at the California Institute of Technology (USA) for accurate geometrical processing of optical satellite and aerial imagery. Initially developed for the measurement of co-seismic ground deformation using optical imagery, COSI-Corr is now used for a wide range of applications in Earth Sciences, which take advantage of the software capability to co-register, with very high accuracy, images taken from different sensors and acquired at different times. As long as a sensor is supported in COSI-Corr, all images between the supported sensors can be accurately orthorectified and co-registered. For example, it is possible to co-register a series of SPOT images, a series of aerial photographs, as well as to register a series of aerial photographs with a series of SPOT images. Currently supported sensors include the SPOT 1-5, Quickbird, Worldview 1 and Formosat 2 satellites, the ASTER instrument, and frame camera acquisitions from e.g., aerial survey or declassified satellite imagery. Potential applications include accurate change detection between multi-temporal and multi-spectral images, and the calibration of pushbroom cameras. In particular, COSI-Corr provides a powerful correlation tool, which allows for accurate estimation of surface displacement. The accuracy depends on many factors (e.g., cloud, snow, and vegetation cover, shadows, temporal changes in general, steadiness of the imaging platform, defects of the imaging system, etc.) but in practice, the standard deviation of the measurements obtained from the correlation of multi-temporal images is typically around 1/20 to 1/10 of the pixel size. The software package also includes post-processing tools such as denoising, destriping, and stacking tools to facilitate data interpretation. Examples drawn from current research in, e.g., seismotectonics, glaciology, and geomorphology will be presented. COSI-Corr is developed in IDL (Interactive Data Language), integrated under the user-friendly interface ENVI (Environment for Visualizing Images), and is distributed free of charge for academic research purposes.
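
    The kind of subpixel displacement measurement described above can be sketched with plain phase correlation plus parabolic peak refinement (illustrative only; COSI-Corr's correlator is considerably more sophisticated):

        import numpy as np

        def subpixel_shift(a, b):
            """Estimate the shift s = (dy, dx) with b(x) ~ a(x - s), via phase
            correlation and 3-point parabolic refinement of the peak."""
            F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
            xc = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(xc), xc.shape)

            def refine(cm, c0, cp):              # parabola vertex through 3 samples
                d = cm - 2.0 * c0 + cp
                return 0.0 if d == 0 else 0.5 * (cm - cp) / d

            ny, nx = xc.shape
            sy = refine(xc[(dy - 1) % ny, dx], xc[dy, dx], xc[(dy + 1) % ny, dx])
            sx = refine(xc[dy, (dx - 1) % nx], xc[dy, dx], xc[dy, (dx + 1) % nx])
            wrap = lambda v, n: v - n if v > n // 2 else v   # signed shifts
            return wrap(dy, ny) + sy, wrap(dx, nx) + sx

        rng = np.random.default_rng(1)
        a = rng.random((64, 64))
        b = np.roll(a, (3, -5), axis=(0, 1))     # known integer test shift
        print(subpixel_shift(a, b))              # ~ (3.0, -5.0)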

  6. High Resolution Near Real Time Image Processing and Support for MSSS Modernization

    NASA Astrophysics Data System (ADS)

    Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.

    2012-09-01

    This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also includes R&D and transition activity that has been performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities. This includes Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award -- and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, followed by an overview of high-resolution image enhancement and near-real-time processing status. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image processing enhancement have been realized over the past several years, including a key application that has realized more than a 10,000-times speedup compared to the original R&D code -- and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.

  7. Assessing the capabilities of hyperspectral remote sensing to map oil films on waters

    NASA Astrophysics Data System (ADS)

    Liu, Bingxin; Li, Ying; Zhu, Xueyuan

    2014-11-01

    Oil spills have caused extensive public concern, and remote sensing has become one of the most effective means of monitoring them. How to evaluate the information extraction capabilities of various sensors and choose the most effective one has therefore become an important issue. Evaluations of sensors for oil film detection have mainly used in-situ measured spectra as a reference to determine favorable bands, ignoring the effects of environmental noise and the spectral response function. To understand the precision and accuracy of environmental variables acquired from remote sensing, it is important to evaluate the target detection sensitivity of the entire sensor-air-target system with respect to changes in reflectivity. The relevant measurement quantity is the environmental noise equivalent reflectance difference (NEΔRE), which depends on the instrument signal-to-noise ratio (SNR) and other image data noise (such as atmospheric variables, scattered sky light and direct sunlight). Hyperion remote sensing data are taken as an example to evaluate oil spill detection capability, with the impact of spatial resolution ignored. To evaluate the sensor's sensitivity to oil films on water, reflectance spectra of light diesel and crude oil films were used. Hyperion reflectance data were obtained using FLAASH for atmospheric correction. The spectral response functions of the Hyperion sensor were used to filter the measured reflectance of the oil films to the theoretical spectral response, and the resulting spectra were normalized to NEΔRE, from which the sensitivity of the sensor in oil film detection could be evaluated. For crude oil, Hyperion can identify the film within the wavelength range from 518nm to 610nm (Band 17 to Band 26), within which thin and thick films can also be distinguished. For light diesel oil film, the corresponding range is from 468nm to 752nm (Band 12 to Band 40).
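
    The band-evaluation step described above reduces to weighting a measured reflectance spectrum by each band's spectral response function (SRF) and comparing the film/water contrast against the band's NEΔRE. A toy sketch with entirely hypothetical spectra and noise floor:

        import numpy as np

        def band_reflectance(wl, refl, srf):
            """SRF-weighted band-average reflectance."""
            return np.trapz(refl * srf, wl) / np.trapz(srf, wl)

        wl = np.linspace(500, 620, 121)                      # nm
        srf = np.exp(-0.5 * ((wl - 560) / 15.0) ** 2)        # hypothetical band SRF
        film = 0.05 + 0.02 * (wl > 550)                      # toy oil-film spectrum
        water = np.full_like(wl, 0.05)                       # toy water spectrum

        contrast = band_reflectance(wl, film, srf) - band_reflectance(wl, water, srf)
        ne_dre = 0.005                                       # hypothetical NEΔRE
        print(contrast, contrast > ne_dre)                   # film detectable if True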

  8. Atomic force microscope based on vertical silicon probes

    NASA Astrophysics Data System (ADS)

    Walter, Benjamin; Mairiaux, Estelle; Faucher, Marc

    2017-06-01

    A family of silicon micro-sensors for the Atomic Force Microscope (AFM) is presented that allows operation with integrated transducers at medium to high frequencies together with moderate stiffness constants. The sensors are based on Micro-Electro-Mechanical-Systems technology. The vertical design specifically enables a long tip to oscillate perpendicularly to the surface being imaged. The tip is part of a resonator that includes quasi-flexural composite beams and symmetrical transducers usable as a piezoresistive detector and/or an electro-thermal actuator. Two vertical probes (Vprobes) were operated at up to 4.3 MHz with stiffness constants of 150 N/m to 500 N/m and oscillation amplitudes from 10 pm to 90 nm. AFM images of several samples were obtained in both amplitude modulation (tapping mode) and frequency modulation.

  9. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  10. An Efficient Image Recovery Algorithm for Diffraction Tomography Systems

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1993-01-01

    A diffraction tomography system has potential application in the ultrasonic medical imaging area. It is capable of achieving imagery with the ultimate resolution of one quarter the wavelength by collecting ultrasonic backscattering data from a circular array of sensors and reconstructing the object reflectivity using a digital image recovery algorithm performed by a computer. One advantage of such a system is that it allows a relatively lower frequency wave to penetrate more deeply into the object and still achieve imagery with a reasonable resolution. An efficient image recovery algorithm for the diffraction tomography system was originally developed for processing wide-beam spaceborne SAR data...
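
    The quarter-wavelength limit quoted above can be made concrete with a worked example (values are illustrative, not from the paper):

        c = 1540.0           # speed of sound in soft tissue, m/s
        f = 2.5e6            # transmit frequency, Hz
        wavelength = c / f   # ~0.616 mm
        print(wavelength / 4 * 1e3, "mm resolution")  # ~0.154 mm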

  11. The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project

    NASA Astrophysics Data System (ADS)

    Edwards, Mark

    2008-04-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator to detect poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, up to Joint Defence Labs level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. Lessons Learned from the project are also presented. The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge in China Lake CA, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. The paper concludes with recommendations for future work in operator interface design.

  12. Large-Area Plasma-Panel Radiation Detectors for Nuclear Medicine Imaging to Homeland Security and the Super Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, Dr. Peter S.; Ball, Robert; Chapman, J. Wehrley

    2010-01-01

    A new radiation sensor derived from plasma panel display technology is introduced. It has the capability to detect ionizing and non-ionizing radiation over a wide energy range and the potential for use in many applications. The principle of operation is described and some early results presented.

  13. Soldier systems sensor fusion

    NASA Astrophysics Data System (ADS)

    Brubaker, Kathryne M.

    1998-08-01

    This paper addresses sensor fusion and its applications in emerging Soldier Systems integration and the unique challenges associated with the human platform. Technology that provides the highest operational payoff in a lightweight warrior system must not only have enhanced capabilities, but also low-power components, resulting in order-of-magnitude power reductions coupled with significant cost reductions. These reductions in power and cost will be achieved through partnership with industry and leveraging of commercial state-of-the-art advancements in microelectronics and power sources. A new generation of full-solution fire control systems (to include temperature, wind and range sensors) and target acquisition systems will accompany a new generation of individual combat weapons and upgrade existing weapon systems. Advanced lightweight thermal, IR, laser and video sensors will be used for surveillance, target acquisition, imaging and combat identification applications. Multifunctional sensors will provide embedded training features in combat configurations, allowing the soldier to 'train as he fights' without the traditional cost and weight penalties associated with separate systems. Personal status monitors (detecting pulse, respiration rate, muscle fatigue, core temperature, etc.) will provide commanders and higher echelons with instantaneous medical data. Seamless integration of GPS and dead reckoning (compass and pedometer) and/or inertial sensors will aid navigation and increase position accuracy. Improved sensors and processing capability will provide earlier detection of battlefield hazards such as mines, enemy lasers and NBC (nuclear, biological, chemical) agents. Via the digitized network, the situational awareness database will automatically be updated with weapon, medical, position and battlefield hazard data. Soldier Systems Sensor Fusion will ultimately establish each individual soldier as an individual sensor on the battlefield.
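
    One common way to blend GPS fixes with dead reckoning of the kind described above is a complementary filter; a minimal sketch, with an assumed blend weight (not the fielded algorithm):

        # Complementary-filter sketch blending a low-rate GPS fix with
        # high-rate dead reckoning (compass heading + pedometer step
        # length). The blend weight alpha is illustrative, not from the
        # paper.
        import math

        def dead_reckon(pos, heading_rad, step_m):
            x, y = pos
            return (x + step_m * math.cos(heading_rad),
                    y + step_m * math.sin(heading_rad))

        def fuse(dr_pos, gps_pos, alpha=0.9):
            """Weight dead reckoning (smooth, drifting) against GPS (noisy, absolute)."""
            return tuple(alpha * d + (1.0 - alpha) * g
                         for d, g in zip(dr_pos, gps_pos))

        pos = (0.0, 0.0)
        pos = dead_reckon(pos, math.radians(45.0), 0.75)  # one step NE
        pos = fuse(pos, gps_pos=(0.6, 0.5))               # correct with a GPS fix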

  14. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count number registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made too short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register as many as one million (or more) photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
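
    A small simulation makes the saturation argument concrete: under Poisson photon arrivals, a single-event pixel reads the same value for any frame that sees one or more photons, while a multiple-event pixel keeps counting. The flux values below are arbitrary:

        # Sketch: why single-event photon counting saturates at high flux.
        # A single-event pixel records min(counts, 1) per frame; a
        # multiple-event pixel records the full count. Flux values arbitrary.
        import numpy as np

        rng = np.random.default_rng(0)
        frames = 1000
        for mean_photons_per_frame in (0.01, 0.5, 5.0):
            counts = rng.poisson(mean_photons_per_frame, size=frames)
            single_event = np.minimum(counts, 1).sum()  # clips at 1 per frame
            multi_event = counts.sum()                  # registers every photon
            print(f"flux={mean_photons_per_frame:5.2f}: "
                  f"single-event={single_event}, multi-event={multi_event}")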

  15. Preliminary study of the reliability of imaging charge coupled devices

    NASA Technical Reports Server (NTRS)

    Beall, J. R.; Borenstein, M. D.; Homan, R. A.; Johnson, D. L.; Wilson, D. D.; Young, V. F.

    1978-01-01

    Imaging CCDs are capable of low light level response and high signal-to-noise ratios. In space applications they offer the user the ability to achieve extremely high resolution imaging with minimum circuitry in the photo sensor array. This work relates the Fairchild CCD121H device to the fundamentals of CCDs and the representative technologies. Several failure modes are described, the construction is analyzed, and test results are reported. In addition, the relationship of device reliability to packaging principles is analyzed and test data are presented. Finally, a test program is defined for more general reliability evaluation of CCDs.

  16. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
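
    A bare-bones ART-1-style clustering sketch over binary feature vectors conveys the mechanism named above; the vigilance parameter is illustrative, not a value from the paper:

        # Bare-bones ART-1-style clustering sketch over binary feature
        # vectors (e.g., thresholded building templates). The vigilance
        # parameter rho is illustrative, not from the paper.
        import numpy as np

        def art1_cluster(patterns, rho=0.7, beta=1e-6):
            categories = []  # one binary prototype per cluster
            labels = []
            for p in patterns:
                p = np.asarray(p, dtype=bool)
                # Order candidate categories by the ART choice function.
                order = sorted(range(len(categories)), key=lambda j: -(
                    np.logical_and(p, categories[j]).sum()
                    / (beta + categories[j].sum())))
                for j in order:
                    match = np.logical_and(p, categories[j]).sum() / max(p.sum(), 1)
                    if match >= rho:          # vigilance test passed: resonance
                        categories[j] &= p    # fast learning: shrink prototype
                        labels.append(j)
                        break
                else:                         # no resonance: start a new category
                    categories.append(p.copy())
                    labels.append(len(categories) - 1)
            return labels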

  17. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    PubMed

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a sampling rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution of frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating a high selectivity of over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 × 256 pixel area.
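
    A toy numeric check of the undersampling idea: a beat signal sampled below its Nyquist rate folds down to a low alias frequency that the sensor frame rate can capture. The frequencies here are illustrative, not the paper's parameters:

        # Toy check of undersampling frequency conversion: a heterodyne beat
        # at f_signal, sampled at f_sample < 2*f_signal, appears at the
        # folded alias frequency. Numbers are illustrative only.
        def alias_frequency(f_signal: float, f_sample: float) -> float:
            """Frequency observed after undersampling at f_sample."""
            f = f_signal % f_sample
            return min(f, f_sample - f)

        f_beat, f_s = 100_030.0, 10_000.0    # Hz: beat tone vs. sensor frame rate
        print(alias_frequency(f_beat, f_s))  # -> 30.0 Hz, within sensor bandwidth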

  18. Dual-polarized light-field imaging micro-system via a liquid-crystal microlens array for direct three-dimensional observation.

    PubMed

    Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2018-02-19

    Light-field imaging is a crucial and straightforward way of measuring and analyzing the surrounding light field. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototyped camera has been constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the underlying imaging sensor. The imaging micro-system, in conjunction with the electro-optical microstructure, can be used to perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using a liquid-crystal microlens array, polarization-independent light-field images with high image quality can be obtained in an arbitrarily selected polarization state. We experimentally demonstrate its characteristics, including a relatively wide operating range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the notable features of the TN-LCMLA, such as very low power consumption, the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure presents the potential capability of directly observing a 3D object in typical scattering media.

  19. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4 band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and scalable production.
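
    Per-channel de-mosaicing of a 2x2 mosaic amounts to striding over the raw frame; a minimal sketch, with an assumed RGB+NIR layout (the actual mosaic pattern is a design choice not specified here):

        # Minimal de-mosaic sketch for a 2x2 RGB+NIR dichroic filter mosaic.
        # The assumed layout is [[R, G], [NIR, B]]; the real mosaic pattern
        # is a design choice and is not specified in the paper.
        import numpy as np

        def demosaic_2x2(raw: np.ndarray) -> dict:
            """Split a raw mosaic frame into one quarter-resolution image per band."""
            return {
                "R":   raw[0::2, 0::2],
                "G":   raw[0::2, 1::2],
                "NIR": raw[1::2, 0::2],
                "B":   raw[1::2, 1::2],
            }

        frame = np.arange(16, dtype=np.uint16).reshape(4, 4)
        bands = demosaic_2x2(frame)  # four 2x2 single-band images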

  20. Monitoring Pest Insect Traps by Means of Low-Power Image Sensor Technologies

    PubMed Central

    López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P.; Bonastre, Alberto; Serrano, Juan J.

    2012-01-01

    Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMAX, 3G/4G, etc.). PMID:23202232

  1. Monitoring pest insect traps by means of low-power image sensor technologies.

    PubMed

    López, Otoniel; Rach, Miguel Martinez; Migallon, Hector; Malumbres, Manuel P; Bonastre, Alberto; Serrano, Juan J

    2012-11-13

    Monitoring pest insect populations is currently a key issue in agriculture and forestry protection. At the farm level, human operators typically must perform periodical surveys of the traps disseminated through the field. This is a labor-, time- and cost-consuming activity, in particular for large plantations or large forestry areas, so it would be of great advantage to have an affordable system capable of doing this task automatically in an accurate and more efficient way. This paper proposes an autonomous monitoring system based on a low-cost image sensor that is able to capture and send images of the trap contents to a remote control station with the periodicity demanded by the trapping application. Our autonomous monitoring system will be able to cover large areas with very low energy consumption. This is the key point of our study, since the operational life of the overall monitoring system should extend to months of continuous operation without any kind of maintenance (i.e., battery replacement). The images delivered by the image sensors would be time-stamped and processed in the control station to obtain the number of individuals found at each trap. All the information would be conveniently stored at the control station, and accessible via the Internet by means of the network services available at the control station (WiFi, WiMAX, 3G/4G, etc.).

  2. Electric Potential and Electric Field Imaging with Dynamic Applications & Extensions

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2017-01-01

    The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Critical design elements of current linear and real-time two-dimensional (2D) measurement systems are highlighted, and the development of a three-dimensional (3D) EFI system is presented. Demonstrations for structural, electronic, human, and memory applications are shown. Recent work demonstrates that phonons may be used to create and annihilate electric dipoles within structures. Phonon-induced dipoles are ephemeral, and their polarization, strength, and location may be quantitatively characterized by EFI, providing a new subsurface phonon-EFI imaging technology. Results from real-time imaging of combustion and ion flow, and their measurement complications, are discussed. Extensions to environmental, space, and subterranean applications are presented, and initial results for quantitatively characterizing material properties are shown. A wearable EFI system has been developed by using fundamental EFI concepts. These new EFI capabilities are demonstrated to characterize electric charge distribution, creating a new field of study embracing areas of interest including electrostatic discharge (ESD) mitigation, manufacturing quality control, crime scene forensics, design and materials selection for advanced sensors, combustion science, on-orbit space potential, container inspection, remote characterization of electronic circuits and level of activation, dielectric morphology of structures, tether integrity, organic molecular memory, atmospheric science, weather prediction, earthquake prediction, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.

  3. Sparse array of RF sensors for sensing through the wall

    NASA Astrophysics Data System (ADS)

    Innocenti, Roberto

    2007-04-01

    In support of the U.S. Army's need for intelligence on the configuration, content, and human presence inside enclosed areas (buildings), the Army Research Laboratory is currently engaged in an effort to evaluate RF sensors for the "Sensing Through The Wall" (STTW) initiative. Detection and location of enemy combatants in urban settings pose significant technical and operational challenges. This paper shows the potential of handheld RF sensors, with the possible assistance of additional sources such as Unmanned Aerial Vehicles (UAVs), Unattended Ground Sensors (UGS), etc., to fulfill this role. In this study we examine both monostatic and multistatic combinations of sensors, especially in configurations that allow the capture of images from different angles, and we demonstrate their capability to provide comprehensive information on a variety of buildings. Finally, we explore the limitations of this type of sensor arrangement vis-a-vis the required precision in the knowledge of the position and timing of the RF sensors. Simulation results are provided to show the potential of this type of sensor arrangement in such a difficult environment.

  4. Reimagining Building Sensing and Control (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polese, L.

    2014-06-01

    Buildings are responsible for 40% of US energy consumption, and sensing and control technologies are an important element in creating a truly sustainable built environment. Motion-based occupancy sensors are often part of these control systems, but are usually altered or disabled in response to occupants' complaints, at the expense of energy savings. Can we leverage commodity hardware developed for other sectors and embedded software to produce more capable sensors for robust building controls? The National Renewable Energy Laboratory's (NREL) 'Image Processing Occupancy Sensor (IPOS)' is one example of leveraging embedded systems to create smarter, more reliable, multi-function sensors that open the door to new control strategies for building heating, cooling, ventilation, and lighting control. In this keynote, we will discuss how cost-effective embedded systems are changing the state-of-the-art of building sensing and control.

  5. In vivo sodium concentration continuously monitored with fluorescent sensors.

    PubMed

    Dubach, J Matthew; Lim, Edward; Zhang, Ning; Francis, Kevin P; Clark, Heather

    2011-02-01

    Sodium balance is vital to maintaining normal physiological function. Imbalances can occur in a variety of diseases, during certain surgical operations or during rigorous exercise. There is currently no method to continuously monitor sodium concentration in patients who may be susceptible to hyponatremia. Our approach was to design sodium specific fluorescent sensors capable of measuring physiological fluctuations in sodium concentration. The sensors are submicron plasticized polymer particles containing sodium recognition components that are coated with biocompatible poly(ethylene) glycol. Here, the sensors were brought up in saline and placed in the subcutaneous area of the skin of mice by simple injection. The fluorescence was monitored in real time using a whole animal imager to track changes in sodium concentrations. This technology could be used to monitor certain disease states or warn against dangerously low levels of sodium during exercise.

  6. A system for respiratory motion detection using optical fibers embedded into textiles.

    PubMed

    D'Angelo, L T; Weber, S; Honda, Y; Thiel, T; Narbonneau, F; Luth, T C

    2008-01-01

    In this contribution, a first prototype for mobile respiratory motion detection using optical fibers embedded into textiles is presented. The developed system consists of a T-shirt with an integrated fiber sensor and a portable monitoring unit with a wireless communication link enabling data analysis and visualization on a PC. A great effort is being made worldwide to develop mobile solutions for monitoring the vital signs of patients needing continuous medical care. Wearable, comfortable and smart textiles incorporating sensors are a good approach to this problem. In most cases, electrical sensors are integrated, which show significant limitations, for example in the monitoring of anaesthetized patients during Magnetic Resonance Imaging (MRI). OFSETH (Optical Fibre Embedded into technical Textile for Healthcare) uses optical sensor technologies to extend the current capabilities of medical technical textiles.

  7. Advanced optical position sensors for magnetically suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Lafleur, S.

    1985-01-01

    A major concern to aerodynamicists has been the corruption of wind tunnel test data by model support structures, such as stings or struts. A technique for magnetically suspending wind tunnel models was considered by Tournier and Laurenceau (1957) in order to overcome this problem. This technique is now implemented with the aid of a Large Magnetic Suspension and Balance System (LMSBS) and advanced position sensors for measuring model attitude and position within the test section. Two different optical position sensors are discussed: one based on linear CCD arrays and one utilizing area CID cameras. Current image processing techniques have been employed to develop target tracking algorithms capable of subpixel resolution for these sensors. The algorithms are discussed in detail, and some preliminary test results are reported.
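
    The abstract does not reproduce the tracking algorithms; the classic route to subpixel resolution is an intensity-weighted centroid over a small target window, sketched here as a generic illustration:

        # Intensity-weighted centroid sketch: a standard way to localize a
        # bright target to subpixel precision. Not the specific LMSBS
        # algorithm, which the abstract does not reproduce.
        import numpy as np

        def subpixel_centroid(window: np.ndarray):
            """Return the (row, col) intensity centroid of a small target window."""
            w = window.astype(float)
            w -= w.min()                 # crude background removal
            total = w.sum()
            if total == 0.0:             # flat window: fall back to the center
                return ((window.shape[0] - 1) / 2.0, (window.shape[1] - 1) / 2.0)
            rows, cols = np.indices(w.shape)
            return ((rows * w).sum() / total, (cols * w).sum() / total)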

  8. Sensor Alerting Capability

    NASA Astrophysics Data System (ADS)

    Henriksson, Jakob; Bermudez, Luis; Satapathy, Goutam

    2013-04-01

    There is a large amount of sensor data generated today by various sensors, from in-situ buoys to mobile underwater gliders. Providing sensor data to users through standardized services, languages and data models is the promise of OGC's Sensor Web Enablement (SWE) initiative. As the amount of data grows, it is becoming difficult for data providers, planners and managers to ensure the reliability of data and services and to monitor critical data changes. Intelligent Automation Inc. (IAI) is developing a net-centric alerting capability to address these issues. The capability is built on Sensor Observation Services (SOSs), which are used to collect and monitor sensor data. The alerts can be configured at the service level and at the sensor data level. For example, an alert can fire on irregular data-delivery events or when a geo-temporal statistic of the sensor data crosses a preset threshold. The capability provides multiple delivery mechanisms and protocols, including traditional techniques such as email and RSS. With this capability, decision makers can monitor their assets and data streams, correct failures, or be alerted to an approaching phenomenon.
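
    A minimal sketch of the threshold-alert rule described above; fetch_observations and notify are placeholders standing in for the SOS query and the delivery mechanism, not the IAI implementation:

        # Minimal threshold-alert sketch: watch a stream of sensor readings
        # and fire a notification when a running statistic crosses a preset
        # threshold. `fetch_observations` and `notify` are placeholders.
        from statistics import mean

        def check_alert(fetch_observations, notify, threshold, window=10):
            values = fetch_observations()      # e.g., recent SOS readings
            recent = values[-window:]
            stat = mean(recent)
            if stat > threshold:
                notify(f"alert: mean of last {len(recent)} readings "
                       f"{stat:.2f} exceeds threshold {threshold}")

        check_alert(lambda: [2.1, 2.4, 9.7, 9.9], print, threshold=5.0)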

  9. Combining Radar and Optical Data for Forest Disturbance Studies

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon; Smith, David E. (Technical Monitor)

    2002-01-01

    Disturbance is an important factor in determining the carbon balance and succession of forests. Until the early 1990s, researchers focused on using optical or thermal sensors to detect and map forest disturbances from wild fires, logging or insect outbreaks. As part of a NASA Siberian mapping project, a study evaluated the capability of three different radar sensors (ERS, JERS and Radarsat) and an optical sensor (Landsat 7) to detect fire scars, logging and insect damage in the boreal forest. This paper describes the data sets and techniques used to evaluate the use of remote sensing to detect disturbance in central Siberian forests. Using images from each sensor individually and in combination, an assessment of the utility of these sensors was developed. Transformed Divergence analysis and maximum likelihood classification revealed that Landsat data was the single best data type for this purpose. However, the combined use of the three radar sensors and the optical sensor did improve the results of discriminating these disturbances.
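
    Maximum likelihood classification of stacked radar and optical bands assigns each pixel to the class whose Gaussian model scores it highest; a compact generic sketch, assuming class statistics already estimated from training areas:

        # Compact maximum-likelihood classifier sketch for stacked
        # radar + optical pixel vectors. Class means/covariances are
        # assumed to be estimated from training polygons beforehand.
        import numpy as np
        from scipy.stats import multivariate_normal

        def ml_classify(pixels, class_stats):
            """pixels: (N, bands) array; class_stats: {name: (mean, cov)}."""
            names = list(class_stats)
            scores = np.column_stack([
                multivariate_normal(mean, cov).logpdf(pixels)
                for mean, cov in (class_stats[n] for n in names)])
            return [names[i] for i in scores.argmax(axis=1)]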

  10. Parallel-multiplexed excitation light-sheet microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Dongli; Zhou, Weibin; Peng, Leilei

    2017-02-01

    Laser scanning light-sheet imaging allows fast 3D imaging of live samples with minimal bleaching and photo-toxicity. Existing light-sheet techniques have very limited capability for multi-label imaging. Hyper-spectral imaging is needed to unmix commonly used fluorescent proteins with large spectral overlaps. However, the challenge is how to perform hyper-spectral imaging without sacrificing imaging speed, so that dynamic and complex events can be captured live. We report wavelength-encoded structured illumination light-sheet imaging (λ-SIM light-sheet), a novel light-sheet technique that is capable of parallel multiplexing in multiple excitation-emission spectral channels. λ-SIM light-sheet captures images of all possible excitation-emission channels in true parallel. It does not compromise imaging speed and is capable of distinguishing labels by both excitation and emission spectral properties, which facilitates unmixing fluorescent labels with overlapping spectral peaks and will allow more labels to be used together. We built a hyper-spectral light-sheet microscope that combines λ-SIM with an extended field of view through Bessel beam illumination. The system has a 250-micron-wide field of view and confocal-level resolution. The microscope, equipped with multiple laser lines and an unlimited number of spectral channels, can potentially image up to 6 commonly used fluorescent proteins from blue to red. Results from in vivo imaging of live zebrafish embryos expressing various genetic markers and sensors will be shown. Hyper-spectral images from λ-SIM light-sheet will allow multiplexed and dynamic functional imaging in live tissue and animals.

  11. Terahertz standoff imaging testbed design and performance for concealed weapon and device identification model development

    NASA Astrophysics Data System (ADS)

    Franck, Charmaine C.; Lee, Dave; Espinola, Richard L.; Murrill, Steven R.; Jacobs, Eddie L.; Griffin, Steve T.; Petkie, Douglas T.; Reynolds, Joe

    2007-04-01

    This paper describes the design and performance of the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate's (NVESD), active 0.640-THz imaging testbed, developed in support of the Defense Advanced Research Project Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. The laboratory measurements and standoff images were acquired during the development of a NVESD and Army Research Laboratory terahertz imaging performance model. The imaging testbed is based on a 12-inch-diameter Off-Axis Elliptical (OAE) mirror designed with one focal length at 1 m and the other at 10 m. This paper will describe the design considerations of the OAE-mirror, dual-capability, active imaging testbed, as well as measurement/imaging results used to further develop the model.

  12. Multispectral imaging with vertical silicon nanowires

    PubMed Central

    Park, Hyunsung; Crozier, Kenneth B.

    2013-01-01

    Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156

  13. Tests of monolithic active pixel sensors at national synchrotron light source

    NASA Astrophysics Data System (ADS)

    Deptuch, G.; Besson, A.; Carini, G. A.; Siddons, D. P.; Szelezniak, M.; Winter, M.

    2007-01-01

    The paper discusses the basic characterization of Monolithic Active Pixel Sensors (MAPS) carried out at the X12A beam-line at the National Synchrotron Light Source (NSLS), Upton, NY, USA. The tested device was a MIMOSA V (MV) chip, back-thinned down to the epitaxial layer. This 1M-pixel device features a pixel size of 17 × 17 μm² and was designed in a 0.6 μm CMOS process. The X-ray beam energies used range from 5 to 12 keV. Examples of direct X-ray imaging capabilities are presented.

  14. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
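
    A toy sketch of the flavor of fuzzy control described above: triangular membership functions on the target's normalized horizontal offset, and a weighted-average defuzzification to a pan rate. All breakpoints and output levels are invented for illustration:

        # Toy fuzzy pan-rate controller sketch: fuzzify the target's
        # normalized horizontal offset (-1..1), apply three rules, and
        # defuzzify by a weighted average. All breakpoints are invented.
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def pan_rate(offset: float) -> float:
            mu_left   = tri(offset, -1.5, -1.0, 0.0)  # target left of center
            mu_center = tri(offset, -0.5,  0.0, 0.5)  # target near center
            mu_right  = tri(offset,  0.0,  1.0, 1.5)  # target right of center
            # Rule outputs: pan hard left / hold / hard right (deg/s).
            num = mu_left * (-10.0) + mu_center * 0.0 + mu_right * 10.0
            den = mu_left + mu_center + mu_right
            return num / den if den else 0.0

        print(pan_rate(0.3))  # small positive pan toward the target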

  15. Redundancy Analysis of Capacitance Data of a Coplanar Electrode Array for Fast and Stable Imaging Processing

    PubMed Central

    Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao

    2017-01-01

    A coplanar electrode array sensor is established for the imaging of composite-material adhesive-layer defect detection. The sensor is based on the capacitive edge effect, which leads to capacitance data that are considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for capacitance data is proposed. The proposed method is based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from valid data, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Compared with images reconstructed from the raw capacitance data, the stability of the image reconstruction process is improved while the quality of the reconstructed images is not degraded. As a result, much of the invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537
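
    The Tikhonov step used for the reconstructions has a standard closed form; a minimal sketch, where J stands for the (possibly reduced) sensitivity matrix and the regularization weight is an assumed value:

        # Standard Tikhonov reconstruction sketch for (C-)ECT:
        #   g = (J^T J + lam * I)^(-1) J^T c
        # where J is the sensitivity matrix built from the valid
        # capacitance data and c the measured capacitance vector. The
        # regularization weight lam is problem-dependent and assumed here.
        import numpy as np

        def tikhonov_reconstruct(J: np.ndarray, c: np.ndarray, lam: float = 1e-3):
            n = J.shape[1]
            return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ c)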

  16. Room-temperature bonding of epitaxial layer to carbon-cluster ion-implanted silicon wafers for CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Koga, Yoshihiro; Kadono, Takeshi; Shigematsu, Satoshi; Hirose, Ryo; Onaka-Masada, Ayumi; Okuyama, Ryousuke; Okuda, Hidehiko; Kurita, Kazunari

    2018-06-01

    We propose a fabrication process for silicon wafers by combining carbon-cluster ion implantation and room-temperature bonding for advanced CMOS image sensors. These carbon-cluster ions are made of carbon and hydrogen, which can passivate process-induced defects. We demonstrated that this combination process can be used to form an epitaxial layer on a carbon-cluster ion-implanted Czochralski (CZ)-grown silicon substrate with a high dose of 1 × 1016 atoms/cm2. This implantation condition transforms the top-surface region of the CZ-grown silicon substrate into a thin amorphous layer. Thus, an epitaxial layer cannot be grown on this implanted CZ-grown silicon substrate. However, this combination process can be used to form an epitaxial layer on the amorphous layer of this implanted CZ-grown silicon substrate surface. This bonding wafer has strong gettering capability in both the wafer-bonding region and the carbon-cluster ion-implanted projection range. Furthermore, this wafer inhibits oxygen out-diffusion to the epitaxial layer from the CZ-grown silicon substrate after device fabrication. Therefore, we believe that this bonding wafer is effective in decreasing the dark current and white-spot defect density for advanced CMOS image sensors.

  17. EO-1 analysis applicable to coastal characterization

    NASA Astrophysics Data System (ADS)

    Burke, Hsiao-hua K.; Misra, Bijoy; Hsu, Su May; Griffin, Michael K.; Upham, Carolyn; Farrar, Kris

    2003-09-01

    The EO-1 satellite is part of NASA's New Millennium Program (NMP). It consists of three imaging sensors: the multi-spectral Advanced Land Imager (ALI), Hyperion and the Atmospheric Corrector. Hyperion is a high-resolution hyperspectral imager capable of resolving 220 spectral bands (from 0.4 to 2.5 micron) with a 30 m resolution. The instrument images a 7.5 km by 100 km land area per image. Hyperion has been the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capabilities of hyperspectral sensing for coastal characterization: (1) most ocean feature algorithms are semi-empirical retrievals, and HSI has all the spectral bands needed to provide legacy with previous sensors and to explore new information; (2) coastal features are more complex than those of the deep ocean, with coupled effects that are best resolved with HSI; and (3) with contiguous spectral coverage, atmospheric compensation can be done with more accuracy and confidence, especially since atmospheric aerosol effects are most pronounced in the visible region where coastal features lie. EO-1 data of Chesapeake Bay from 19 February 2002 are analyzed. In this presentation, it is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data, even though Hyperion has a lower SNR than ALI. Chlorophyll retrievals are also shown. The results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) data for coastal characterization. Future measurement requirements (airborne and spaceborne) are also discussed.

  18. The applicability of frame imaging from a spinning spacecraft. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Johnson, R. O.; Wallmark, G. N.

    1973-01-01

    A detailed study was made of frame-type imaging systems for use on board a spin stabilized spacecraft for outer planets applications. All types of frame imagers capable of performing this mission were considered, regardless of the current state of the art. Detailed sensor models of these systems were developed at the component level and used in the subsequent analyses. An overall assessment was then made of the various systems based upon results of a worst-case performance analysis, foreseeable technology problems, and the relative reliability and radiation tolerance of the systems. Special attention was directed at restraints imposed by image motion and the limited data transmission and storage capability of the spacecraft. Based upon this overall assessment, the most promising systems were selected and then examined in detail for a specified Jupiter orbiter mission. The relative merits of each selected system were then analyzed, and the system design characteristics were demonstrated using preliminary configurations, block diagrams, and tables of estimated weights, volumes and power consumption.

  19. Coral Reef Remote Sensing Using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.; Blonski, Slawomir; Moore, Roxzana

    2008-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems - the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM) - might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data was used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  20. Coral Reef Remote Sensing using Simulated VIIRS and LDCM Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    The Rapid Prototyping Capability (RPC) node at NASA Stennis Space Center, MS, was used to simulate NASA next-generation sensor imagery over well-known coral reef areas: Looe Key, FL, and Kaneohe Bay, HI. The objective was to assess the degree to which next-generation sensor systems the Visible/Infrared Imager/Radiometer Suite (VIIRS) and the Landsat Data Continuity Mission (LDCM) might provide key input to the National Oceanographic and Atmospheric Administration (NOAA) Integrated Coral Observing Network (ICON)/Coral Reef Early Warning System (CREWS) Decision Support Tool (DST). The DST data layers produced from the simulated imagery concerned water quality and benthic classification map layers. The water optical parameters of interest were chlorophyll (Chl) and the absorption coefficient (a). The input imagery used by the RPC for simulation included spaceborne (Hyperion) and airborne (AVIRIS) hyperspectral data. Specific field data to complement and aid in validation of the overflight data was used when available. The results of the experiment show that the next-generation sensor systems are capable of providing valuable data layer resources to NOAA's ICON/CREWS DST.

  1. Design and implementation of modular home security system with short messaging system

    NASA Astrophysics Data System (ADS)

    Budijono, Santoso; Andrianto, Jeffri; Axis Novradin Noor, Muhammad

    2014-03-01

    Today we are living in the 21st century, where crime is increasing and everyone wants to secure the assets in their home. In this situation, users need a system with advanced technology so that they need not worry when away from home. It is therefore the purpose of this design to provide a home security device that sends fast information to a user's GSM (Global System for Mobile) mobile device using SMS (Short Messaging System), and that can also be activated and deactivated by SMS. The modular design of this home security system makes its capability expandable by adding more sensors to the system. The hardware of this system has been designed around the ATmega328 microcontroller, with a PIR (Passive Infra Red) motion sensor as the primary sensor for motion detection, a camera for capturing images, a GSM module for sending and receiving SMS, and a buzzer for the alarm. The software was developed using the Arduino IDE, with PuTTY used to test the GSM module's connection and programming. This home security system can monitor the home area surrounding the PIR sensor, send SMS alerts, save images captured by the camera, and deter intruders by turning on the buzzer when the PIR sensor detects trespassing in the surrounding area. The modular home security system has been tested and successfully detects human movement.

  2. SSUSI-lite: next generation far-ultraviolet sensor for characterizing geospace

    NASA Astrophysics Data System (ADS)

    Paxton, Larry J.; Hicks, John E.; Grey, Matthew P.; Parker, Charles W.; Hourani, Ramsay S.; Marcotte, Kathryn M.; Carlsson, Uno P.; Kerem, Samuel; Osterman, Steven N.; Maas, Bryan J.; Ogorzalek, Bernard S.

    2016-10-01

    SSUSI-Lite is an update of an existing sensor, SSUSI. The current generation of Defense Meteorological Satellite Program (DMSP) satellites (Block 5D3) includes a hyperspectral, cross-track imaging spectrograph known as the Special Sensor Ultraviolet Spectrographic Imager (SSUSI). SSUSI has been part of the DMSP program since 1990. SSUSI is designed to provide space weather information such as auroral imagery, ionospheric electron density profiles, and neutral density composition changes. The sensors that are flying today (see http://ssusi.jhuapl.edu) were designed in 1990 - 1992. There have been significant improvements in flight hardware since then. The SSUSI-Lite instrument is more capable than SSUSI yet consumes half the power and has half the mass. The total package count (and, as a consequence, integration cost and difficulty) was reduced from 7 to 2. The scan mechanism was redesigned and tested, and its performance is a factor of 10 better. SSUSI-Lite can be flown as a hosted payload or a rideshare - it needs only about 10 watts and weighs under 10 kg. We will show results from tests of an interesting intensified, position-sensitive-anode, pulse-counting detector system. We use this approach because the SSUSI sensor operates in the far ultraviolet - from about 110 to 180 nm, or 0.11 to 0.18 microns.

  3. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  4. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  5. Forty-Year Calibrated Record of Earth-Surface Reflected Radiance from Landsat: A Review

    NASA Technical Reports Server (NTRS)

    Markham, Brian; Helder, Dennis

    2011-01-01

    Sensors on Landsat satellites have been collecting images of the Earth's surface for nearly 40 years. These images have been invaluable for characterizing and detecting changes in the land cover and land use of the world. Although initially conceived as primarily picture generating sensors, even the early sensors were radiometrically calibrated and spectrally characterized prior to launch and incorporated some capabilities to monitor their radiometric calibration once on orbit. Recently, as the focus of studies has shifted to monitoring Earth surface parameters over significant periods of time, serious attention has been focused toward bringing the data from all these sensors onto a common radiometric scale over this 40-year period. This effort started with the most recent systems and then was extended back in time. Landsat-7 ETM+, the best-characterized sensor of the series prior to launch and once on orbit, and the most stable system to date, was chosen to serve as the reference. The Landsat-7 project was the first of the series to build an image assessment system into its ground system, allowing systematic characterization of its sensors and data. Second, the Landsat-5 TM (still operating at the time of the Landsat-7 launch and continues to operate) calibration history was reconstructed based on its internal calibrator, vicarious calibrations, pseudo-invariant sites and a tie to Landsat-7 ETM+ at the time of the commissioning of Landsat-7. This process was performed in two iterations: the earlier one relied primarily on the TM internal calibrator. When this was found to have some deficiencies, a revised calibration was based more on pseudo-invariant sites, though the internal calibrator was still used to establish the short-term variations in response due to icing build up on the cold focal plane. As time progressed, a capability to monitor the Landsat-5 TM was added to the image assessment system. The Landsat-4 TM, which operated from 1982-1992, was the third system to which the radiometric scale was extended. The limited and broken use of the Landsat-4 TM made this analysis more difficult. Eight-day separated image pairs from Landsat-5 combined with analysis of pseudo invariant sites established this history. The fourth and most challenging effort was making the Landsat-1 to -5 MSS sensors' data internally radiometrically consistent. This effort was particularly complicated by the age of the MSS data, varying formats and processing levels in the archive, limited datasets, and limited documentation available. Ultimately, pseudo-invariant sites were identified in North America and used for this effort. Note that most of the Landsat-MSS archived data had already been calibrated using the MSS internal calibrators, so this processing was imbedded in the result. The final effort was developing an absolute scale for Landsat MSS similar to what was already established for the "TM" sensors. This was accomplished by using simultaneous data from Landsat-5 MSS and Landsat-5 TM, accounting for spectral differences between the sensors using EO-1 Hyperion data. The recalibrated history of the Landsat data and implications to users are discussed. The key result from this work is a consistently calibrated Landsat data archive that spans nearly 40 years with total uncertainties on the order of 10% or less for most sensors and bands.

  6. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages compared to other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method for spaceborne infrared images based on modeling radiation anomalies. The proposed method can be decomposed into two stages. In the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM); target candidates are thereby obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches among a complex background. Experimental results on the short-wavelength infrared band (1.560 - 2.300 μm) and long-wavelength infrared band (10.30 - 12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy, with higher recall than classical ship detection methods.
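
    The first-stage anomaly scoring lends itself to a direct sketch with scikit-learn: fit a GMM to patch features and flag low-likelihood patches as candidates. The component count and percentile threshold are assumptions, not the paper's settings:

        # First-stage sketch: score infrared image patches by their
        # likelihood under a background GMM and keep low-likelihood patches
        # as ship candidates. n_components and the percentile threshold are
        # assumptions, not the paper's settings.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def candidate_patches(features: np.ndarray, n_components=3, pct=1.0):
            """features: (num_patches, dims) patch descriptors."""
            gmm = GaussianMixture(n_components=n_components, random_state=0)
            gmm.fit(features)
            log_lik = gmm.score_samples(features)     # per-patch log-likelihood
            thresh = np.percentile(log_lik, pct)      # lowest pct% are anomalous
            return np.flatnonzero(log_lik <= thresh)  # indices of candidates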

  7. Evaluation of fingerprint deformation using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Gutierrez da Costa, Henrique S.; Maxey, Jessica R.; Silva, Luciano; Ellerbee, Audrey K.

    2014-02-01

    Biometric identification systems have important applications to privacy and security. The most widely used of these, print identification, is based on imaging patterns present in the fingers, hands and feet that are formed by the ridges, valleys and pores of the skin. Most modern print sensors acquire images of the finger when it is pressed against a sensor surface. Unfortunately, this pressure may result in deformations, characterized by changes in the sizes and relative distances of the print patterns, and such changes have been shown to negatively affect the performance of fingerprint identification algorithms. Optical coherence tomography (OCT) is a novel imaging technique that is capable of imaging the subsurface of biological tissue. Hence, OCT may be used to obtain images of subdermal skin structures from which one can extract an internal fingerprint. The internal fingerprint is very similar in structure to the commonly used external fingerprint and is of increasing interest in investigations of identity fraud. We proposed and tested metrics, based on measurements calculated from external and internal fingerprints, to evaluate the amount of deformation of the skin. These metrics were used to test hypotheses about differences in deformation between the internal and external images and their variation with finger type and location within the fingerprint.

  8. High-speed Imaging of Global Surface Temperature Distributions on Hypersonic Ballistic-Range Projectiles

    NASA Technical Reports Server (NTRS)

    Wilder, Michael C.; Reda, Daniel C.

    2004-01-01

    The NASA-Ames ballistic range provides a unique capability for aerothermodynamic testing of configurations in hypersonic, real-gas, free-flight environments. The facility can closely simulate conditions at any point along practically any trajectory of interest experienced by a spacecraft entering an atmosphere. Sub-scale models of blunt atmospheric entry vehicles are accelerated by a two-stage light-gas gun to speeds as high as 20 times the speed of sound to fly ballistic trajectories through a 24 m long vacuum-rated test section. The test-section pressure (effective altitude), the launch velocity of the model (flight Mach number), and the test-section working gas (planetary atmosphere) are independently variable. The model travels at hypersonic speeds through a quiescent test gas, creating a strong bow-shock wave and real-gas effects that closely match conditions achieved during actual atmospheric entry. The challenge with ballistic range experiments is to obtain quantitative surface measurements from a model traveling at hypersonic speeds. The models are relatively small (less than 3.8 cm in diameter), which limits the spatial resolution possible with surface-mounted sensors. Furthermore, since the model is in flight, surface-mounted sensors require some form of on-board telemetry, which must survive the massive acceleration loads experienced during launch (up to 500,000 gravities). Finally, the model and any on-board instrumentation will be destroyed at the terminal wall of the range. For these reasons, optical measurement techniques are the most practical means of acquiring data. High-speed thermal imaging has been employed in the Ames ballistic range to measure global surface temperature distributions and to visualize the onset of transition to turbulent flow on the forward regions of hypersonic blunt bodies. Both visible wavelength and infrared high-speed cameras are in use. The visible wavelength cameras are intensified CCD imagers capable of integration times as short as 2 ns. The infrared camera uses an Indium Antimonide (InSb) sensor in the 3 to 5 micron band and is capable of integration times as short as 500 ns. The projectiles are imaged nearly head-on using expendable mirrors offset slightly from the flight path. The proposed paper will discuss the application of high-speed digital imaging systems in the NASA-Ames hypersonic ballistic range, and the challenges encountered when applying these systems. Example images of the thermal radiation from the blunt nose of projectiles flying at nearly 14 times the speed of sound will be given.

  9. Infrared sensors and systems for enhanced vision/autonomous landing applications

    NASA Technical Reports Server (NTRS)

    Kerr, J. Richard

    1993-01-01

    There exists a large body of data spanning more than two decades, regarding the ability of infrared imagers to 'see' through fog, i.e., in Category III weather conditions. Much of this data is anecdotal, highly specialized, and/or proprietary. In order to determine the efficacy and cost effectiveness of these sensors under a variety of climatic/weather conditions, there is a need for systematic data spanning a significant range of slant-path scenarios. These data should include simultaneous video recordings at visible, midwave (3-5 microns), and longwave (8-12 microns) wavelengths, with airborne weather pods that include the capability of determining the fog droplet size distributions. Existing data tend to show that infrared is more effective than would be expected from analysis and modeling. It is particularly more effective for inland (radiation) fog as compared to coastal (advection) fog, although both of these archetypes are oversimplifications. In addition, as would be expected from droplet size vs wavelength considerations, longwave outperforms midwave, in many cases by very substantial margins. Longwave also benefits from the higher level of available thermal energy at ambient temperatures. The principal attraction of midwave sensors is that staring focal plane technology is available at attractive cost-performance levels. However, longwave technology such as that developed at FLIR Systems, Inc. (FSI), has achieved high performance in small, economical, reliable imagers utilizing serial-parallel scanning techniques. In addition, FSI has developed dual-waveband systems particularly suited for enhanced vision flight testing. These systems include a substantial, embedded processing capability which can perform video-rate image enhancement and multisensor fusion. This is achieved with proprietary algorithms and includes such operations as real-time histograms, convolutions, and fast Fourier transforms.

  10. University of Virginia suborbital infrared sensing experiment

    NASA Astrophysics Data System (ADS)

    Holland, Stephen; Nunnally, Clayton; Armstrong, Sarah; Laufer, Gabriel

    2002-03-01

    An Orion sounding rocket launched from Wallops Flight Facility carried a University of Virginia payload to an altitude of 47 km and returned infrared measurements of the Earth's upper atmosphere and video images of the ocean. The payload launch was the result of a three-year undergraduate design project by a multi-disciplinary student group from the University of Virginia and James Madison University. As part of a new multi-year design course, undergraduate students designed, built, tested, and participated in the launch of a suborbital platform from which atmospheric remote sensors and other scientific experiments could operate. The first launch included a simplified atmospheric measurement system intended to demonstrate full system operation and remote sensing capabilities during suborbital flight. A thermoelectrically cooled HgCdTe infrared detector, with peak sensitivity at 10 micrometers, measured upwelling radiation, and a small camera and VCR system, aligned with the infrared sensor, provided a ground reference. Additionally, a simple orientation sensor, consisting of three photodiodes fitted with red, green, and blue dichroic filters, was tested. Temperature measurements of the upper atmosphere were successfully obtained during the flight. Video images were successfully recorded on board the payload and proved a valuable tool in the data analysis process. The photodiode system, intended as a replacement for the camera and VCR system, functioned well, despite low signal amplification. This fully integrated and flight-tested payload will serve as a platform for future atmospheric sensing experiments. It is currently being modified for a second suborbital flight that will incorporate a gas filter correlation radiometry (GFCR) instrument to measure the distribution of stratospheric methane and imaging capabilities to record the chlorophyll distribution in the Metompkin Bay as an indicator of pollution runoff.

  11. Improved Space Object Observation Techniques Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.

    2013-08-01

    CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have begun to compete with CCDs for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages over CCDs, owing to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. Currently applied and proposed optical observation strategies for space debris surveys and space surveillance applications were analyzed. The major design drivers were identified and the potential benefits of using available and future CMOS sensors were assessed. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed by performing laboratory test measurements.

  12. Shuttle Imaging Radar-A (SIR-A) experiment

    NASA Technical Reports Server (NTRS)

    Elachi, C. (Editor); Cimino, J. B. (Editor)

    1982-01-01

    The SIR-A experiment was conducted in order to acquire radar data over a variety of regions to further understanding of the radar signatures of various geologic features. The capability of the Shuttle as a scientific platform for observation of the Earth's resources was assessed. The SIR-A sensor operated nominally and the full data acquisition capacity of the optical recorder was used.

  13. End-to-end remote sensing at the Science and Technology Laboratory of John C. Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Kelly, Patrick; Rickman, Douglas; Smith, Eric

    1991-01-01

    The Science and Technology Laboratory (STL) of Stennis Space Center (SSC) has been developing expertise in remote sensing for more than a decade. Capabilities at SSC/STL cover all major areas of the field. STL includes the Sensor Development Laboratory (SDL), an Image Processing Center, a Learjet 23 flight platform, and on-staff scientific investigators.

  14. An adaptive optics approach for laser beam correction in turbulence utilizing a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2015-09-01

    Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn^2 < 10^-13 m^-2/3). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free-space optical communication systems, in directed energy applications, as well as for image correction purposes.
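
    A standard way to turn measured local slopes (whether from a Shack-Hartmann or a plenoptic-style sensor) into a phase map for the deformable mirror is zonal least-squares reconstruction. The sketch below illustrates only that generic step, not the authors' correction algorithm:

```python
import numpy as np

def reconstruct_wavefront(sx, sy):
    """Least-squares phase map W whose forward differences match slopes sx, sy."""
    n = sx.shape[0]
    idx = lambda i, j: i * n + j
    A, b = [], []
    for i in range(n):
        for j in range(n - 1):              # x-slope equations
            row = np.zeros(n * n); row[idx(i, j + 1)], row[idx(i, j)] = 1, -1
            A.append(row); b.append(sx[i, j])
    for i in range(n - 1):
        for j in range(n):                  # y-slope equations
            row = np.zeros(n * n); row[idx(i + 1, j)], row[idx(i, j)] = 1, -1
            A.append(row); b.append(sy[i, j])
    W, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return W.reshape(n, n)                  # piston (mean) is unobservable

# Self-check: recover a synthetic wavefront from its own finite differences.
true = np.add.outer(np.linspace(0, 1, 8) ** 2, np.linspace(0, 1, 8))
W = reconstruct_wavefront(np.diff(true, axis=1), np.diff(true, axis=0))
assert np.allclose(W, true - true.mean(), atol=1e-8)
```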

  15. Atomic force microscopy capable of vibration isolation with low-stiffness Z-axis actuation.

    PubMed

    Ito, Shingo; Schitter, Georg

    2018-03-01

    For high-resolution imaging without bulky external vibration isolation, this paper presents an atomic force microscope (AFM) capable of vibration isolation with its internal Z-axis (vertical) actuators moving the AFM probe. Lorentz actuators (voice coil actuators) are used for the Z-axis actuation, and the flexures guiding the motion are designed to have a low stiffness between the mover and the base. The low stiffness enables a large Z-axis actuation range of more than 700 µm and mechanically isolates the probe from floor vibrations at high frequencies. To reject the residual vibrations, the probe tracks the sample by using a displacement sensor for feedback control. Unlike conventional AFMs, the Z-axis actuation attains a closed-loop control bandwidth that is 35 times higher than the first mechanical resonant frequency. The closed-loop AFM system is robust against the flexures' nonlinearity and uses the first resonance for better sample tracking. For further improvement, feedforward control with a vibration sensor is added; with the controllers turned on, the resulting system rejects 98.4% of vibrations. The AFM system is demonstrated by successful AFM imaging in a vibrational environment.

  16. Advances in detection of diffuse seafloor venting using structured light imaging.

    NASA Astrophysics Data System (ADS)

    Smart, C.; Roman, C.; Carey, S.

    2016-12-01

    Systematic, remote detection and high-resolution mapping of low-temperature diffuse hydrothermal venting is inefficient and not currently tractable using traditional remotely operated vehicle (ROV) mounted sensors. Preliminary results for hydrothermal vent detection using a structured light laser sensor were presented in 2011 and published in 2013 (Smart), with continual advancements occurring in the interim. As the structured light laser passes over active venting, the projected laser line effectively blurs due to the associated turbulence and density anomalies in the vent fluid. The degree of laser disturbance is captured by a camera collecting images of the laser line at 20 Hz. Advancements in the detection of the laser and fluid interaction have included extensive normalization of the collected laser data and the implementation of a support vector machine algorithm to develop a classification routine. The image data collected over a hydrothermal vent field is then labeled as seafloor, bacteria, or a location of venting. The results can then be correlated with stereo images, bathymetry, and backscatter data. This sensor is a component of an ROV-mounted imaging suite which also includes stereo cameras and a multibeam sonar system. Originally developed for bathymetric mapping, the structured light laser sensor and other imaging suite components are capable of creating visual and bathymetric maps with centimeter-level resolution. Surveys follow a standard mowing-the-lawn pattern, completing a 30 m x 30 m area at centimeter-level resolution in under an hour. Resulting co-registered data include multibeam and structured light laser bathymetry and backscatter, stereo images, and vent detections. This system allows for efficient exploration of areas with diffuse and small point-source hydrothermal venting, increasing the effectiveness of scientific sampling and observation. Recent vent detection results collected during the 2013-2015 E/V Nautilus seasons will be presented. Smart, C. J., Roman, C., and Carey, S. N. (2013) Detection of diffuse seafloor venting using structured light imaging, Geochemistry, Geophysics, Geosystems, 14, 4743-4757
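
    The abstract names a support vector machine trained on features of the disturbed laser line. A toy scikit-learn sketch of that classification step follows; the feature set (line width, intensity variance, centroid jitter) and the numbers are our illustrative assumptions, and the published routine distinguishes a third "bacteria" class as well:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-patch features: [line width px, intensity variance, jitter]
X = np.array([[2.1, 0.02, 0.1], [2.3, 0.03, 0.2],    # quiet seafloor
              [5.8, 0.40, 1.5], [6.1, 0.55, 1.8]])   # active venting
y = np.array([0, 0, 1, 1])                           # 0 = seafloor, 1 = vent

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[5.5, 0.35, 1.4]]))               # -> [1], flagged as vent
```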

  17. First results on DEPFET Active Pixel Sensors fabricated in a CMOS foundry—a promising approach for new detector development and scientific instrumentation

    NASA Astrophysics Data System (ADS)

    Aschauer, S.; Majewski, P.; Lutz, G.; Soltau, H.; Holl, P.; Hartmann, R.; Schlosser, D.; Paschen, U.; Weyers, S.; Dreiner, S.; Klusmann, M.; Hauser, J.; Kalok, D.; Bechteler, A.; Heinzinger, K.; Porro, M.; Titze, B.; Strüder, L.

    2017-11-01

    DEPFET Active Pixel Sensors (APS) were introduced as focal plane detectors for X-ray astronomy as early as 1996. Fabricated on high-resistivity, fully depleted silicon and back-illuminated, they can provide high quantum efficiency and low-noise operation even at very high read rates. In 2009 a new type of DEPFET APS, the DSSC (DEPFET Sensor with Signal Compression), was developed, dedicated to high-speed X-ray imaging at the European X-ray free electron laser facility (EuXFEL) in Hamburg. In order to resolve the enormous contrasts occurring in Free Electron Laser (FEL) experiments, this new DSSC-DEPFET sensor has the capability of nonlinear amplification: high gain at low intensities, to obtain single-photon detection capability, and reduced gain at high intensities, to achieve high dynamic range for several thousand photons per pixel and frame. We call this property "signal compression". Starting in 2015, we have been fabricating DEPFET sensors in an industrial-scale CMOS foundry, maintaining the outstanding proven DEPFET properties and adding new capabilities afforded by the industrial-scale CMOS process. We will highlight these additional features and describe the progress achieved so far. In a first run on double-side polished, 725 μm thick, 200 mm high-resistivity float-zone silicon wafers, all relevant device-related properties have been measured, such as leakage current, depletion voltage, transistor characteristics, noise, energy resolution for X-rays, and the nonlinear response. The smaller feature size provided by the new technology allows for an advanced design and significant improvements in device performance. A brief summary of the present status will be given, as well as an outlook on next steps and future perspectives.

  18. Wideband optical sensing using pulse interferometry.

    PubMed

    Rosenthal, Amir; Razansky, Daniel; Ntziachristos, Vasilis

    2012-08-13

    Advances in the fabrication of high-finesse optical resonators hold promise for the development of miniaturized, ultra-sensitive, wideband optical sensors based on resonance-shift detection. Many potential applications are foreseen for such sensors, among them highly sensitive detection in ultrasound and optoacoustic imaging. Traditionally, sensor interrogation is performed by tuning a narrow-linewidth laser to the resonance wavelength. Despite the ubiquity of this method, its use has been mostly limited to lab conditions due to its vulnerability to environmental factors and the difficulty of multiplexing, a key factor in imaging applications. In this paper, we develop a new optical-resonator interrogation scheme based on wideband pulse interferometry, potentially capable of achieving high stability against environmental conditions without compromising sensitivity. Additionally, the method can enable multiplexing of several sensors. The unique properties of the pulse-interferometry interrogation approach are studied theoretically and experimentally. Methods for noise reduction in the proposed scheme are presented and experimentally demonstrated, while the overall performance is validated for broadband optical detection of ultrasonic fields. The achieved sensitivity is equivalent to the theoretical limit of a 6 MHz narrow-linewidth laser, which is 40 times higher than what is usually achieved by incoherent interferometry for the same optical resonator.

  19. Real-time scene and signature generation for ladar and imaging sensors

    NASA Astrophysics Data System (ADS)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes the development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing the accuracy of its thermal signatures. First, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger-mode LADAR, in addition to the already existing functionality for linear-mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulations of missiles with multi-mode seekers.
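
    For orientation, a heat diffusion solver of the kind described reduces, in its simplest explicit (FTCS) form on a uniform cube, to the update below; VIRSuite's solver handles complex geometries and is certainly more capable than this illustration:

```python
import numpy as np

def heat_step(T, alpha, dt, dx):
    """One explicit FTCS step of the 3-D heat equation; edge-padded
    (insulated) boundaries. Stable for dt <= dx**2 / (6 * alpha)."""
    Tp = np.pad(T, 1, mode="edge")
    lap = (Tp[2:, 1:-1, 1:-1] + Tp[:-2, 1:-1, 1:-1] +
           Tp[1:-1, 2:, 1:-1] + Tp[1:-1, :-2, 1:-1] +
           Tp[1:-1, 1:-1, 2:] + Tp[1:-1, 1:-1, :-2] - 6 * T) / dx**2
    return T + alpha * dt * lap

T = np.full((32, 32, 32), 300.0)     # body at 300 K
T[0] += 50.0                         # hypothetical heated face
for _ in range(100):                 # march the temperature field in time
    T = heat_step(T, alpha=1e-4, dt=0.01, dx=0.05)
```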

  20. Improved detection and false alarm rejection for chemical vapors using passive hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Marinelli, William J.; Miyashiro, Rex; Gittins, Christopher M.; Konno, Daisei; Chang, Shing; Farr, Matt; Perkins, Brad

    2013-05-01

    Two AIRIS sensors were tested at Dugway Proving Ground against chemical agent vapor simulants. The primary objectives of the test were to: 1) assess the performance of algorithm improvements designed to reduce false alarm rates, with a special emphasis on solar effects, and 2) evaluate target detection performance at 5 km. The tests included 66 total releases comprising alternating 120 kg glacial acetic acid (GAA) and 60 kg triethyl phosphate (TEP) events. The AIRIS sensors had common algorithms, detection thresholds, and sensor parameters. The sensors used the target set defined for the Joint Service Lightweight Chemical Agent Detector (JSLSCAD), with TEP substituted for GA and GAA substituted for VX. They were exercised at two sites located at either 3 km or 5 km from the release point. Data from the tests will be presented showing that: 1) excellent detection capability was obtained at both ranges, with significantly shorter alarm times at 5 km; 2) inter-sensor comparison revealed very comparable performance; 3) false alarm rates < 1 incident per 10 hours running time were achieved over 143 hours of sensor operations; 4) algorithm improvements eliminated both solar and cloud false alarms. The algorithms enabling the improved false alarm rejection will be discussed. The sensor technology has recently been extended to address the problem of detecting liquid and solid chemical agents and toxic industrial chemicals on surfaces. The phenomenology and applicability of passive infrared hyperspectral imaging to this problem will be discussed and demonstrated.
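
    The AIRIS algorithms themselves are not disclosed here, but the usual baseline for passive hyperspectral vapor detection, against which such improvements are measured, is a matched filter computed from the background statistics. A generic sketch, for orientation only:

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Matched-filter detection image for a (rows x cols x bands) cube:
    score = s^T C^-1 (x - mu) / sqrt(s^T C^-1 s), with s = target - mean."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)
    Cinv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(B))
    s = target - mu
    return ((X - mu) @ Cinv @ s / np.sqrt(s @ Cinv @ s)).reshape(H, W)
```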

  1. Rapid immuno-analytical system physically integrated with lens-free CMOS image sensor for food-borne pathogens.

    PubMed

    Jeon, Jin-Woo; Kim, Jee-Hyun; Lee, Jong-Mook; Lee, Won-Ho; Lee, Do-Young; Paek, Se-Hwan

    2014-02-15

    To realize an inexpensive, pocket-sized immunosensor system, a rapid test device based on cross-flow immuno-chromatography was physically combined with a lens-free CMOS image sensor (CIS), which was then applied to the detection of the food-borne pathogen Salmonella typhimurium (S. typhimurium). Two CISs, each containing a 1.3-megapixel array, were mounted on a printed circuit board to fabricate a disposable sensing module connectable to a signal detection system. For the bacterial analysis, a cellulose membrane-based immunosensing platform, ELISA-on-a-chip (EOC), was employed and integrated with the CIS module, and the antigen-antibody reaction sites were aligned with the respective sensors. In this construction, the chemiluminescent signals produced from the EOC are transferred directly into the sensors and converted to electric signals on the detector. The EOC-CIS integrated sensor was capable of detecting a trace amount of the bacterium (4.22 × 10(3) CFU/mL), nearly comparable to a sophisticated detector such as a cooled charge-coupled device, while having greatly reduced dimensions and cost. Upon coupling with immuno-magnetic separation, the sensor showed an additional 67-fold enhancement in the detection limit. Furthermore, a real-sample test was carried out on fish muscle inoculated with 3.3 CFU of S. typhimurium per 10 g, which could be detected earlier than 6 h after the onset of pre-enrichment by culture.

  2. Photonic crystal resonances for sensing and imaging

    NASA Astrophysics Data System (ADS)

    Pitruzzello, Giampaolo; Krauss, Thomas F.

    2018-07-01

    This review provides an insight into the recent developments of photonic crystal (PhC)-based devices for sensing and imaging, with a particular emphasis on biosensors. We focus on two main classes of devices, namely sensors based on PhC cavities and those based on guided mode resonances (GMRs). This distinction captures the richness of possibilities that PhCs offer in this space. We present recent examples highlighting applications where PhCs can offer new capabilities, open up new applications, or enable improved performance, with a clear emphasis on the different types of structures and photonic functions. We provide a critical comparison between cavity-based devices and GMR devices by highlighting strengths and weaknesses. We also compare PhC technologies and their sensing mechanisms to surface plasmon resonance, microring resonators, and integrated interferometric sensors.

  3. Census Cities Project and atlas of urban and regional change

    NASA Technical Reports Server (NTRS)

    Wray, J. R.

    1970-01-01

    The research design and imagery utilization for urban applications of remote sensing are reviewed, including the combined use of sensor and census data and aircraft and spacecraft sensor platforms. The related purposes of the Census Cities Project are elucidated: (1) to assess the role of remote sensors on high altitude platforms for comparative study of urban areas; (2) to detect changes in selected U.S. urban areas between the 1970 census and the time of launching of an earth-orbiting sensor platform prior to next census; (3) to test the satellite sensor platform utility to monitor urban change and serve as a control for sensor image interpretation; (4) to design an information system for incorporating graphic sensor data with census-type data gathered by traditional techniques; (5) to identify and to design user-oriented end-products or information services; and (6) to ascertain what organizational capability would be needed to provide such services on a continuing basis. A need to develop not only a spatial data information system, but also a methodology for detecting and interpreting change is implied.

  4. Remote sensor digital image data analysis using the General Electric Image 100 analysis system (a study of analysis speed, cost, and performance)

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It was found that the high-speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general-purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgments and readily interact with the machine.

  5. Low-cost Volumetric Ultrasound by Augmentation of 2D Systems: Design and Prototype.

    PubMed

    Herickhoff, Carl D; Morgan, Matthew R; Broder, Joshua S; Dahl, Jeremy J

    2018-01-01

    Conventional two-dimensional (2D) ultrasound imaging is a powerful diagnostic tool in the hands of an experienced user, yet 2D ultrasound remains clinically underutilized and inherently incomplete, with output that is very operator-dependent. Volumetric ultrasound systems can more fully capture a three-dimensional (3D) region of interest, but current 3D systems require specialized transducers, are prohibitively expensive for many clinical departments, and do not register image orientation with respect to the patient; these systems are designed to provide improved workflow rather than operator independence. This work investigates whether it is possible to add volumetric 3D imaging capability to existing 2D ultrasound systems at minimal cost, providing a practical means of reducing operator dependence in ultrasound. In this paper, we present a low-cost method to make 2D ultrasound systems capable of quality volumetric image acquisition: we present the general system design and image acquisition method, including the use of a probe-mounted orientation sensor, a simple probe fixture prototype, and an offline volume reconstruction technique. We demonstrate initial results of the method, implemented using a Verasonics Vantage research scanner.
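
    To make the reconstruction idea concrete: with the fixture constraining probe position, each 2D frame can be scattered into a voxel grid using the orientation sensor's quaternion. The sketch below is our simplified reading (fixed probe origin, nearest-voxel assignment, averaging via a parallel count volume), not the authors' offline technique:

```python
import numpy as np

def quat_to_mat(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def insert_frame(volume, counts, frame, q, origin, scale):
    """Scatter one B-mode frame into the voxel grid at orientation q;
    divide volume by counts afterwards to average overlapping samples."""
    R = quat_to_mat(q)
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel() * scale, ys.ravel() * scale,
                    np.zeros(h * w)])                 # frame plane at z = 0
    i, j, k = np.round(R @ pts).astype(int) + origin[:, None]
    ok = ((0 <= i) & (i < volume.shape[0]) & (0 <= j) &
          (j < volume.shape[1]) & (0 <= k) & (k < volume.shape[2]))
    np.add.at(volume, (i[ok], j[ok], k[ok]), frame.ravel()[ok])
    np.add.at(counts, (i[ok], j[ok], k[ok]), 1)
```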

  6. Radiometric cross-calibration of EO-1 ALI with L7 ETM+ and Terra MODIS sensors using near-simultaneous desert observations

    USGS Publications Warehouse

    Chander, Gyanesh; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2013-01-01

    The Earth Observing-1 (EO-1) satellite was launched on November 21, 2000, as part of a one-year technology demonstration mission. The mission was extended because of the value it continued to add to the scientific community. EO-1 has now been operational for more than a decade, providing both multispectral and hyperspectral measurements. As part of the EO-1 mission, the Advanced Land Imager (ALI) sensor demonstrates a potential technological direction for the next generation of Landsat sensors. To evaluate the ALI sensor capabilities as a precursor to the Operational Land Imager (OLI) onboard the Landsat Data Continuity Mission (LDCM, or Landsat 8 after launch), its measured top-of-atmosphere (TOA) reflectances were compared to the well-calibrated Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors in the reflective solar bands (RSB). These three satellites operate in a near-polar, sun-synchronous orbit 705 km above the Earth's surface. EO-1 was designed to fly one minute behind L7 and approximately 30 minutes in front of Terra. In this configuration, all the three sensors can view near-identical ground targets with similar atmospheric, solar, and viewing conditions. However, because of the differences in the relative spectral response (RSR), the measured physical quantities can be significantly different while observing the same target. The cross-calibration of ALI with ETM+ and MODIS was performed using near-simultaneous surface observations based on image statistics from areas observed by these sensors over four desert sites (Libya 4, Mauritania 2, Arabia 1, and Sudan 1). The differences in the measured TOA reflectances due to RSR mismatches were compensated by using a spectral band adjustment factor (SBAF), which takes into account the spectral profile of the target and the RSR of each sensor. For this study, the spectral profile of the target comes from the near-simultaneous EO-1 Hyperion data over these sites. The results indicate that the TOA reflectance measurements for ALI agree with those of ETM+ and MODIS to within 5% after the application of SBAF.
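
    The SBAF step is simple to compute once the Hyperion spectral profile and each band's RSR are in hand: simulate each sensor's band reflectance by RSR-weighted integration, then take the ratio. A sketch (conventions for which sensor goes in the numerator vary between papers, so treat the orientation as an assumption):

```python
import numpy as np

def band_reflectance(wl, rho, rsr_wl, rsr):
    """RSR-weighted TOA reflectance: integrate the hyperspectral profile
    rho(wl) against one band's relative spectral response."""
    r = np.interp(wl, rsr_wl, rsr, left=0.0, right=0.0)
    return np.trapz(rho * r, wl) / np.trapz(r, wl)

def sbaf(wl, rho, ref_wl, ref_rsr, cal_wl, cal_rsr):
    """Spectral band adjustment factor between a reference band (e.g. ETM+)
    and the band under calibration (e.g. ALI) for one target spectrum."""
    return (band_reflectance(wl, rho, ref_wl, ref_rsr) /
            band_reflectance(wl, rho, cal_wl, cal_rsr))
```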

  7. Early On-Orbit Performance of the Visible Infrared Imaging Radiometer Suite Onboard the Suomi National Polar-Orbiting Partnership (S-NPP) Satellite

    NASA Technical Reports Server (NTRS)

    Cao, Changyong; DeLuccia, Frank J.; Xiong, Xiaoxiong; Wolfe, Robert; Weng, Fuzhong

    2014-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is one of the key environmental remote-sensing instruments onboard the Suomi National Polar-Orbiting Partnership spacecraft, which was successfully launched on October 28, 2011 from Vandenberg Air Force Base, California. Following a series of spacecraft and sensor activation operations, the VIIRS nadir door was opened on November 21, 2011. The first VIIRS image acquired signifies a new generation of operational moderate-resolution imaging capabilities, following the legacy of the advanced very high-resolution radiometer series on NOAA satellites and the Terra and Aqua Moderate-Resolution Imaging Spectroradiometer for NASA's Earth Observing System. VIIRS provides significant enhancements to operational environmental monitoring and numerical weather forecasting, with 22 imaging and radiometric bands covering wavelengths from 0.41 to 12.5 microns, providing the sensor data records for 23 environmental data records including aerosol, cloud properties, fire, albedo, snow and ice, vegetation, sea surface temperature, ocean color, and night-time visible-light-related applications. Preliminary results from the on-orbit verification in the postlaunch check-out and intensive calibration and validation have shown that VIIRS is performing well and producing high-quality images. This paper provides an overview of the on-orbit performance of VIIRS and the calibration/validation (cal/val) activities and methodologies used. It presents an assessment of the sensor's initial on-orbit calibration and performance based on the efforts of the VIIRS-SDR team. Known anomalies, issues, and future calibration efforts, including long-term monitoring and intercalibration, are also discussed.

  8. Scintillator high-gain avalanche rushing photoconductor active-matrix flat panel imager: Zero-spatial frequency x-ray imaging properties of the solid-state SHARP sensor structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wronski, M.; Zhao, W.; Tanioka, K.

    Purpose: The authors are investigating the feasibility of a new type of solid-state x-ray imaging sensor with programmable avalanche gain: scintillator high-gain avalanche rushing photoconductor active matrix flat panel imager (SHARP-AMFPI). The purpose of the present work is to investigate the inherent x-ray detection properties of SHARP and demonstrate its wide dynamic range through programmable gain. Methods: A distributed resistive layer (DRL) was developed to maintain stable avalanche gain operation in a solid-state HARP. The signal and noise properties of the HARP-DRL for optical photon detection were investigated as a function of avalanche gain both theoretically and experimentally, and the results were compared with the HARP tube (with electron beam readout) used in previous investigations of the zero spatial frequency performance of SHARP. For this new investigation, a solid-state SHARP x-ray image sensor was formed by direct optical coupling of the HARP-DRL with a structured cesium iodide (CsI) scintillator. The x-ray sensitivity of this sensor was measured as a function of avalanche gain and the results were compared with the sensitivity of HARP-DRL measured optically. The dynamic range of HARP-DRL with variable avalanche gain was investigated for the entire exposure range encountered in radiography/fluoroscopy (R/F) applications. Results: The signal from HARP-DRL as a function of electric field showed stable avalanche gain, and the noise associated with the avalanche process agrees well with theory and previous measurements from a HARP tube. This result indicates that when coupled with CsI for x-ray detection, the additional noise associated with avalanche gain in HARP-DRL is negligible. The x-ray sensitivity measurements using the SHARP sensor produced identical avalanche gain dependence on electric field as the optical measurements with HARP-DRL. Adjusting the avalanche multiplication gain in HARP-DRL enabled a very wide dynamic range which encompassed all clinically relevant medical x-ray exposures. Conclusions: This work demonstrates that the HARP-DRL sensor enables the practical implementation of a SHARP solid-state x-ray sensor capable of quantum-noise-limited operation throughout the entire range of clinically relevant x-ray exposures. This is an important step toward the realization of a SHARP-AMFPI x-ray flat-panel imager.

  9. Smart wireless sensor for physiological monitoring.

    PubMed

    Tomasic, Ivan; Avbelj, Viktor; Trobec, Roman

    2015-01-01

    Presented is a wireless body sensor capable of measuring local potential differences on the body surface. By using on-sensor signal processing capabilities, together with algorithms developed for off-line signal processing on a personal computing device, it is possible to record single-channel ECG, heart rate, breathing rate, and EMG, and, when three sensors are applied, even the 12-lead ECG. The sensor is portable, unobtrusive, and suitable for both inpatient and outpatient monitoring. The paper presents the sensor's hardware and the results of a power consumption analysis. The sensor's capabilities for recording various physiological parameters are also presented and illustrated. The paper concludes with the sensor's envisioned future developments and prospects.

  10. A COTS-MQS shipborne EO/IR imaging system

    NASA Astrophysics Data System (ADS)

    Hutchinson, Mark A.; Miller, John L.; Weaver, James

    2005-05-01

    The Sea Star SAFIRE is a commercially developed, off-the-shelf, military-qualified system (COTS-MQS) consisting of a 640 by 480 InSb infrared imager, laser rangefinder, and visible imager in a gyro-stabilized platform designed for shipborne applications. These applications include search and rescue, surveillance, fire control, fisheries patrol, harbor security, and own-vessel perimeter security and self-protection. Particularly challenging considerations unique to shipborne systems include the demanding environmental conditions, man-machine interfaces, and effects of atmospheric conditions on sensor performance. Shipborne environmental conditions requiring special attention include electromagnetic fields, as well as resistance to rain, ice and snow, shock, vibration, and salt. Features have been implemented to withstand exposure to water and high humidity; anti-ice/de-ice capability for exposure to snow and ice; wash/wipe of external windows; and corrosion resistance for exposure to water and salt spray. A variety of system controller configurations provide man-machine interfaces suitable for operation on ships. EO sensor developments that address haze penetration, glint, and scintillation will be presented.

  11. Charge shielding in the In-situ Storage Image Sensor for a vertex detector at the ILC

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Stefanov, K. D.; Bailey, D.; Banda, Y.; Buttar, C.; Cheplakov, A.; Cussans, D.; Damerell, C.; Devetak, E.; Fopma, J.; Foster, B.; Gao, R.; Gillman, A.; Goldstein, J.; Greenshaw, T.; Grimes, M.; Halsall, R.; Harder, K.; Hawes, B.; Hayrapetyan, K.; Heath, H.; Hillert, S.; Jackson, D.; Pinto Jayawardena, T.; Jeffery, B.; John, J.; Johnson, E.; Kundu, N.; Laing, A.; Lastovicka, T.; Lau, W.; Li, Y.; Lintern, A.; Lynch, C.; Mandry, S.; Martin, V.; Murray, P.; Nichols, A.; Nomerotski, A.; Page, R.; Parkes, C.; Perry, C.; O'Shea, V.; Sopczak, A.; Tabassam, H.; Thomas, S.; Tikkanen, T.; Velthuis, J.; Walsh, R.; Woolliscroft, T.; Worm, S.

    2009-08-01

    The Linear Collider Flavour Identification (LCFI) collaboration has successfully developed the first prototype of a novel particle detector, the In-situ Storage Image Sensor (ISIS). This device ideally suits the challenging requirements for the vertex detector at the future International Linear Collider (ILC), combining the charge-storing capabilities of Charge-Coupled Devices (CCDs) with the readout commonly used in CMOS imagers. The ISIS avoids the need for high-speed readout and offers low-power operation combined with low noise, high immunity to electromagnetic interference, and increased radiation hardness compared to typical CCDs. The ISIS is one of the most promising detector technologies for vertexing at the ILC. In this paper we describe measurements of the charge-shielding properties of the p-well, which is used to protect the storage register from parasitic charge collection and is at the core of the device's operation. We show that the p-well can suppress the parasitic charge collection by almost two orders of magnitude, satisfying the requirements for the application.

  12. Handheld real-time volumetric 3-D gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Haefner, Andrew; Barnowski, Ross; Luke, Paul; Amman, Mark; Vetter, Kai

    2017-06-01

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
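
    The essential data-fusion step is expressing gamma-ray results in the scene frame via the pose tracked by the RGB-D sensor. Reduced to a toy case (a single point estimate and a hypothetical 4x4 pose; the real system fuses full Compton imaging data), it is one homogeneous transform:

```python
import numpy as np

def to_world(T_pose, p_local):
    """Map a source position estimated in the imager frame into the 3-D
    scene model using the tracked 4x4 pose matrix T_pose."""
    return (T_pose @ np.append(p_local, 1.0))[:3]

T = np.eye(4); T[0, 3] = 1.0                   # pose: 1 m translation in x
print(to_world(T, np.array([0.2, 0.0, 3.0])))  # -> [1.2 0.  3. ]
```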

  13. Visual Image Sensor Organ Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.

    2014-01-01

    This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates data from visual and other sensors (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible-light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
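
    A minimal version of such an image-to-sound mapping scans columns over time, with row position mapped to pitch and brightness to loudness. The frequency range, column duration, and logarithmic pitch spacing below are our assumptions, not the VISOR device's actual encoding:

```python
import numpy as np

def image_to_audio(img, fs=44100, col_dur=0.02, f_lo=200.0, f_hi=8000.0):
    """Render a 2-D grayscale image as audio: one column per time slice,
    top rows as high tones, pixel brightness as tone amplitude."""
    h, w = img.shape
    freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), h)   # top = high
    t = np.arange(int(fs * col_dur)) / fs
    tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])  # (h, samples)
    cols = img.astype(float) / max(float(img.max()), 1.0)    # normalize
    audio = (cols.T[:, :, None] * tones[None, :, :]).sum(axis=1).ravel()
    return audio / np.abs(audio).max()
```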

  14. Development of a distributed read-out imaging TES X-ray microcalorimeter

    NASA Astrophysics Data System (ADS)

    Trowell, S.; Holland, A. D.; Fraser, G. W.; Goldie, D.; Gu, E.

    2002-02-01

    We report on the development of a linear absorber detector for one-dimensional imaging spectroscopy, read out by two Transition Edge Sensors (TESs). The TESs, based on a single layer of iridium, demonstrate stable and controllable superconducting-to-normal transitions in the region of 130 mK. Results from Monte Carlo simulations are presented indicating that the device configuration is capable of detecting photon positions to better than 200 μm, thereby meeting the ~250 μm resolution specification for missions such as XEUS.

  15. Waterway wide area tactical coverage and homing (WaterWATCH) program overview

    NASA Astrophysics Data System (ADS)

    Driggers, Gerald; Cleveland, Tammy; Araujo, Lisa; Spohr, Robert; Umansky, Mark

    2008-04-01

    The Congressionally and Army-sponsored WaterWATCH™ Program has developed and demonstrated a fully integrated shallow-water port and facility monitoring system. It provides fully automated monitoring of domains above and below the surface of the water using primarily off-the-shelf sensors and software. The system is modular, open-architecture, and IP-based, and elements can be mixed and matched to adapt to specific applications. The sensors integrated into the WaterWATCH™ system include cameras, radar, passive and active sonar, and various motion detectors. The sensors were chosen based on extensive requirements analyses and tradeoffs. Descriptions of the system and individual sensors are provided, along with data from modular and system-level testing. Camera test results address capabilities and limitations associated with using "smart" image analysis software under stressing environmental conditions such as bugs, darkness, rain, and snow. Radar issues addressed include achieving range and resolution requirements. The passive sonar capability to provide near 100% true positives with zero false positives is demonstrated. Testing results are also presented to show that inexpensive active sonar can be effective against divers with or without SCUBA gear and that false alarms due to fish can be minimized. A simple operator interface has also been demonstrated.

  16. Adaptive pattern for autonomous UAV guidance

    NASA Astrophysics Data System (ADS)

    Sung, Chen-Ko; Segor, Florian

    2013-09-01

    The research done at the Fraunhofer IOSB in Karlsruhe within the AMFIS project focuses on a mobile system to support rescue forces in accidents or disasters. The system consists of a ground control station which can communicate with a large number of heterogeneous sensors and sensor carriers and provides several open interfaces to allow easy integration of additional sensors into the system. Within this research we focus mainly on UAVs such as VTOL (vertical takeoff and landing) systems because of their ease of use and high maneuverability. To improve the positioning capability of the UAV, we examined different onboard processing chains for image exploitation, aimed at real-time detection of patterns on the ground, and the interfacing technology for controlling the UAV from the payload during flight. The earlier proposed static ground pattern was extended by an adaptive component which provides an additional visual communication channel to the aircraft. For this purpose, different components were conceived to transfer additional information using changeable patterns on the ground. The adaptive ground patterns and their suitability for this application were tested under external influences. Besides the adaptive ground pattern, this paper introduces the onboard processing chains and their adaptations to the demands of changing patterns. The tracking of the guiding points, the UAV navigation, and the conversion of the guiding-point positions from image to real-world coordinates in video sequences, as well as the limits of use and the possibilities of an adaptable pattern, are examined.
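
    For the image-to-world conversion of the guiding points, the standard tool when the pattern lies on a ground plane is a planar homography (typically estimated from four or more known pattern points, e.g. with OpenCV's findHomography). A sketch with an invented matrix for illustration:

```python
import numpy as np

def pixel_to_ground(Hmat, u, v):
    """Project an image point (u, v) onto ground-plane coordinates using
    a 3x3 homography estimated beforehand from known pattern points."""
    p = Hmat @ np.array([u, v, 1.0])
    return p[:2] / p[2]

Hmat = np.array([[0.01, 0.0, -3.2],     # hypothetical calibrated homography
                 [0.0, 0.01, -2.4],
                 [0.0, 0.0, 1.0]])
print(pixel_to_ground(Hmat, 640, 480))  # ground coordinates in metres
```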

  17. An evaluation of three-dimensional sensors for the extravehicular activity helper/retriever

    NASA Technical Reports Server (NTRS)

    Magee, Michael

    1993-01-01

    The Extravehicular Activity Retriever/Helper (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects, such as whether the surface topologies observed are planar or curved, and the spatial relationships between the component surfaces. In order to achieve these tasks, accurate sensing of the operational environment and the objects in it will therefore be critical. This report documents the qualitative and quantitative results of empirical studies of three sensors that are capable of providing three-dimensional information to the EVAHR but use completely different hardware approaches. The first of these devices is a phase-shift laser with an effective operating range (ambiguity interval) of approximately 15 meters. The second sensor is a laser triangulation system designed to operate at much closer range and to provide higher-resolution images. The third sensor is a dual-camera stereo imaging system from which range images can also be obtained. The remainder of the report characterizes the strengths and weaknesses of each of these systems relative to the quality of the data extracted and how different object characteristics affect sensor operation.

  18. A New Thiosemicarbazone-Based Fluorescence "Turn-on" Sensor for Zn(2+) Recognition with a Large Stokes Shift and its Application in Live Cell Imaging.

    PubMed

    Tang, Lijun; Huang, Zhenlong; Zheng, Zhuxuan; Zhong, Keli; Bian, Yanjiang

    2016-09-01

    A selective fluorescence turn-on Zn(2+) sensor with long-wavelength emission and a large Stokes shift is highly desirable in the Zn(2+) sensing area. We report herein the synthesis and Zn(2+) recognition properties of a new thiosemicarbazone-based fluorescent sensor, L. L displays high selectivity and sensitivity toward Zn(2+) over other metal ions in DMSO-H2O (1:1, v/v, HEPES 10 mM, pH = 7.4) solution, with a long-wavelength emission at 572 nm and a large Stokes shift of 222 nm. Confocal fluorescence microscopy experiments demonstrate that L is cell-permeable and capable of monitoring intracellular Zn(2+). Graphical Abstract: We report a new thiosemicarbazone-based fluorescent sensor (L) for selective recognition of Zn(2+) with a long-wavelength emission and a large Stokes shift.

  19. Wireless Multimedia Sensor Networks: Current Trends and Future Directions

    PubMed Central

    Almalkawi, Islam T.; Zapata, Manel Guerrero; Al-Karaki, Jamal N.; Morillo-Pozo, Julian

    2010-01-01

    Wireless Multimedia Sensor Networks (WMSNs) have emerged and shifted the focus from typical scalar wireless sensor networks to networks with multimedia devices that are capable of retrieving video, audio, images, as well as scalar sensor data. WMSNs are able to deliver multimedia content due to the availability of inexpensive CMOS cameras and microphones, coupled with significant progress in distributed signal processing and multimedia source coding techniques. In this paper, we outline the design challenges of WMSNs, give a comprehensive discussion of the proposed architectures, algorithms, and protocols for the different layers of the communication protocol stack for WMSNs, and evaluate the existing WMSN hardware and testbeds. The paper will give the reader a clear view of the state of the art in all aspects of this research area, and shed light on its main current challenges and future trends. We also hope it will foster discussions and new research ideas among researchers. PMID:22163571

  20. Performance evaluation and modeling of a conformal filter (CF) based real-time standoff hazardous material detection sensor

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota

    2017-05-01

    Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current-generation sensors are typically large, costly to field, do not usually operate in real time, and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost-effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state, and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include: operating under low signal-to-background conditions; at higher and higher frame rates; and under less than ideal motion control scenarios. An adaptable, low-cost, low-footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured to discriminate targets from complex backgrounds in real time. With DARPA support, ChemImage Sensor Systems (CISS™), in collaboration with Research Triangle Institute (RTI) International, is developing a novel, real-time, adaptable, compressive-sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS), based on DP-CF technology. RCIS will address many shortcomings of current-generation systems and offer improvements in operational agility and detection performance, while addressing sensor weight, form factor, and cost needs. This paper discusses recent test and performance modeling results of an RCIS breadboard apparatus.

  1. Scalable sensor management for automated fusion and tactical reconnaissance

    NASA Astrophysics Data System (ADS)

    Walls, Thomas J.; Wilson, Michael L.; Partridge, Darin C.; Haws, Jonathan R.; Jensen, Mark D.; Johnson, Troy R.; Petersen, Brad D.; Sullivan, Stephanie W.

    2013-05-01

    The capabilities of tactical intelligence, surveillance, and reconnaissance (ISR) payloads are expanding from single sensor imagers to integrated systems-of-systems architectures. Increasingly, these systems-of-systems include multiple sensing modalities that can act as force multipliers for the intelligence analyst. Currently, the separate sensing modalities operate largely independent of one another, providing a selection of operating modes but not an integrated intelligence product. We describe here a Sensor Management System (SMS) designed to provide a small, compact processing unit capable of managing multiple collaborative sensor systems on-board an aircraft. Its purpose is to increase sensor cooperation and collaboration to achieve intelligent data collection and exploitation. The SMS architecture is designed to be largely sensor and data agnostic and provide flexible networked access for both data providers and data consumers. It supports pre-planned and ad-hoc missions, with provisions for on-demand tasking and updates from users connected via data links. Management of sensors and user agents takes place over standard network protocols such that any number and combination of sensors and user agents, either on the local network or connected via data link, can register with the SMS at any time during the mission. The SMS provides control over sensor data collection to handle logging and routing of data products to subscribing user agents. It also supports the addition of algorithmic data processing agents for feature/target extraction and provides for subsequent cueing from one sensor to another. The SMS architecture was designed to scale from a small UAV carrying a limited number of payloads to an aircraft carrying a large number of payloads. The SMS system is STANAG 4575 compliant as a removable memory module (RMM) and can act as a vehicle specific module (VSM) to provide STANAG 4586 compliance (level-3 interoperability) to a non-compliant sensor system. The SMS architecture will be described and results from several flight tests and simulations will be shown.
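
    The registration-and-routing idea reduces to a publish/subscribe registry; the toy sketch below is our abstraction of that pattern, not the SMS implementation, which runs over standard network protocols with STANAG-compliant interfaces:

```python
from collections import defaultdict

class SensorManager:
    """Minimal data-agnostic registry: sensors and user agents may register
    at any time; products are routed to every subscriber of that type."""
    def __init__(self):
        self.subscribers = defaultdict(list)      # product type -> callbacks
    def register_agent(self, product_type, callback):
        self.subscribers[product_type].append(callback)
    def publish(self, product_type, payload):
        for cb in self.subscribers[product_type]: # route, log, or cue here
            cb(payload)

mgr = SensorManager()
mgr.register_agent("eo_frame", lambda d: print("analyst agent got", d))
mgr.publish("eo_frame", {"t": 0.0, "data": "..."})
```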

  2. Development of Meandering Winding Magnetometer (MWM (Register Trademark)) Eddy Current Sensors for the Health Monitoring, Modeling and Damage Detection of High Temperature Composite Materials

    NASA Technical Reports Server (NTRS)

    Russell, Richard; Washabaugh, Andy; Sheiretov, Yanko; Martin, Christopher; Goldfine, Neil

    2011-01-01

    The increased use of high-temperature composite materials in modern and next-generation aircraft and spacecraft has led to the need for improved nondestructive evaluation and health monitoring techniques. Such technologies are desirable to improve quality control, damage detection, stress evaluation, and temperature measurement capabilities. Novel eddy current sensors and sensor arrays, such as Meandering Winding Magnetometers (MWMs), have provided alternative or complementary techniques to ultrasound and thermography for both nondestructive evaluation (NDE) and structural health monitoring (SHM). This includes imaging of composite material quality, damage detection, and the monitoring of fiber temperatures and multidirectional stresses. Historically, implementation of MWM technology for the inspection of the Space Shuttle Orbiter Reinforced Carbon-Carbon Composite (RCC) leading edge panels was developed by JENTEK Sensors and was subsequently transitioned by NASA to an operational pre- and post-flight in-situ inspection at the Kennedy Space Center. A manual scanner, which conformed automatically to the curvature of the RCC panels, was developed and used as a secondary technique if a defect was found during an infrared thermography screening. During a recent proof-of-concept study on composite overwrapped pressure vessels (COPVs), three different MWM sensors were tested at three orientations to demonstrate the ability of the technology to measure stresses at various fiber orientations and depths. These results showed excellent correlation with actual surface strain gage measurements. Recent advancements have applied MWM sensor technology to scanning COPVs for mechanical damage. This presentation will outline recent advances in MWM technology and the development of MWM techniques for NDE and SHM of carbon-wrapped composite overwrapped pressure vessels (COPVs), including the measurement of internal stresses via a surface-mounted sensor array. In addition, this paper will outline recent efforts to produce sensors capable of making real-time measurements at temperatures up to 850 C, and discuss previous results demonstrating the capability to monitor carbon fiber temperature changes within a composite material.

  3. The Goes-R Geostationary Lightning Mapper (GLM): Algorithm and Instrument Status

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2010-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The Geostationary Lightning Mapper (GLM) will map total lightning activity (in-cloud and cloud-to-ground lightning flashes) continuously day and night with near-uniform spatial resolution of 8 km and a product refresh rate of less than 20 s over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and an Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also to improve our knowledge of thunderstorm initiation and evolution. A joint field campaign with Brazilian researchers in 2010-2011 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, TRMM Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.

  4. Fiber-optic fringe projection with crosstalk reduction by adaptive pattern masking

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2017-02-01

    To enable in-process inspection of industrial manufacturing processes, measuring devices need to fulfill time and space constraints, while also being robust to environmental conditions, such as high temperatures and electromagnetic fields. A new fringe projection profilometry system is being developed, which is capable of performing the inspection of filigree tool geometries, e.g. gearing elements with tip radii of 0.2 mm, inside forming machines of the sheet-bulk metal forming process. Compact gradient-index rod lenses with a diameter of 2 mm allow for a compact design of the sensor head, which is connected to a base unit via flexible high-resolution image fibers with a diameter of 1.7 mm. The base unit houses a flexible DMD based LED projector optimized for fiber coupling and a CMOS camera sensor. The system is capable of capturing up to 150 gray-scale patterns per second as well as high dynamic range images from multiple exposures. Owing to fiber crosstalk and light leakage in the image fiber, signal quality suffers especially when capturing 3-D data of technical surfaces with highly varying reflectance or surface angles. An algorithm is presented, which adaptively masks parts of the pattern to reduce these effects via multiple exposures. The masks for valid surface areas are automatically defined according to different parameters from an initial capture, such as intensity and surface gradient. In a second step, the masks are re-projected to projector coordinates using the mathematical model of the system. This approach is capable of reducing both inter-pixel crosstalk and inter-object reflections on concave objects while maintaining measurement durations of less than 5 s.
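
    One plausible reading of the adaptive masking step, sketched under our own assumptions (the thresholds and quantile split are guesses, not the authors' parameters): use the initial capture to find valid pixels, split them into brightness groups, and project patterns restricted to each group in separate exposures:

```python
import numpy as np

def build_masks(initial, low=30, high=220, n_exposures=2):
    """Partition valid pixels of an initial capture into brightness groups
    so dim and bright surface regions get separate projector exposures."""
    valid = (initial > low) & (initial < high)
    edges = np.quantile(initial[valid], np.linspace(0, 1, n_exposures + 1))
    return [valid & (initial >= lo) & (initial <= hi)
            for lo, hi in zip(edges[:-1], edges[1:])]

initial = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
masked_patterns = [np.where(m, 255, 0).astype(np.uint8)
                   for m in build_masks(initial)]
```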

  5. Carbohydrate Recognition by Boronolectins, Small Molecules, and Lectins

    PubMed Central

    Jin, Shan; Cheng, Yunfeng; Reid, Suazette; Li, Minyong; Wang, Binghe

    2009-01-01

    Carbohydrates are known to mediate a large number of biological and pathological events. Small molecules and macromolecules capable of carbohydrate recognition have great potential as research tools, diagnostics, vectors for targeted delivery of therapeutic and imaging agents, and therapeutic agents. However, this potential is far from being realized. One key issue is the difficulty of developing “binders” capable of specific recognition of carbohydrates of biological relevance. This review systematically discusses the general approaches available for developing carbohydrate sensors and “binders/receptors,” and their applications. The focus is on discoveries during the last five years. PMID:19291708

  6. Time-lapse contact microscopy of cell cultures based on non-coherent illumination

    NASA Astrophysics Data System (ADS)

    Gabriel, Marion; Balle, Dorothée; Bigault, Stéphanie; Pornin, Cyrille; Gétin, Stéphane; Perraut, François; Block, Marc R.; Chatelain, François; Picollet-D'Hahan, Nathalie; Gidrol, Xavier; Haguet, Vincent

    2015-10-01

    Video microscopy offers outstanding capabilities to investigate the dynamics of biological and pathological mechanisms under optimal culture conditions. Contact imaging is one of the simplest imaging architectures for digitally recording images of cells, due to the absence of any objective between the sample and the image sensor. However, in the framework of in-line holography, other optical components, e.g., an optical filter or a pinhole, are placed underneath the light source in order to illuminate the cells with coherent or quasi-coherent incident light. In this study, we demonstrate that contact imaging with incident light of both limited temporal and limited spatial coherence can be achieved with sufficiently high quality for most applications in cell biology, including monitoring of cell sedimentation, rolling, adhesion, spreading, proliferation, motility, death and detachment. Patterns of cells were recorded at various distances between 0 and 1000 μm from the pixel array of the image sensors. Cells in suspension, just deposited or at mitosis focalise light into photonic nanojets, which can be visualised by contact imaging. Light refraction by cells varies significantly during the adhesion process and the cell cycle, and among the cell population, in connection with every modification of the three-dimensional morphology of a cell.

  7. Advanced Multipurpose Rendezvous Tracking System Study

    NASA Technical Reports Server (NTRS)

    Laurie, R. J.; Sterzer, F.

    1982-01-01

    Rendezvous and docking (R&D) sensors needed to support Earth orbital operations of vehicles were investigated to determine the form they should take. An R&D sensor must enable an interceptor vehicle to determine both the relative position and the relative attitude of a target vehicle. Relative position determination is fairly straightforward and places few constraints on the sensor. Relative attitude determination, however, is more difficult. The attitude is calculated based on relative position measurements of several reflectors placed in a known arrangement on the target vehicle. The constraints imposed on the sensor by the attitude determination method are severe. Narrow beamwidth, wide field of view (fov), high range accuracy, and fast random scan capability are all required to determine attitude by this method. A consideration of these constraints as well as others imposed by expected operating conditions and the available technology led to the conclusion that the sensor should be a cw optical radar employing a semiconductor laser transmitter and an image dissector receiver.
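
    The attitude-from-reflectors step described above is, at its core, a rigid point-set alignment between the known reflector layout on the target and the measured reflector positions. A standard modern solution is the SVD-based Kabsch algorithm, sketched below; this is an illustrative method, not necessarily the estimator used in the 1982 study.

        import numpy as np

        def relative_attitude(body_pts, measured_pts):
            """Rotation taking reflector coordinates in the target body
            frame (N x 3 array) to their measured positions in the sensor
            frame (N x 3 array), via the SVD-based Kabsch algorithm."""
            P = body_pts - body_pts.mean(axis=0)
            Q = measured_pts - measured_pts.mean(axis=0)
            U, _, Vt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T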

  8. A Multi-Frequency Polarimetric SAR Sensors Analysis over the UNESCO Archaeological Site of Djebel Barkal (Sudan)

    NASA Astrophysics Data System (ADS)

    Patruno, Jolanda; Dore, Nicole; Pottier, Eric; Crespi, Mattia

    2013-08-01

    Differences in vegetation growth and in soil moisture content generate ground anomalies which can be linked to subsurface anthropic structures. Such evidence has been studied first by means of aerial photographs and historical World War II acquisitions, and later by means of very high spatial resolution optical satellites. This work aims to exploit the technique of SAR polarimetry for the detection of surface and subsurface archaeological structures, comparing the ALOS PALSAR L-band sensor (central frequency 1.27 GHz) with the RADARSAT-2 C-band sensor (central frequency 5.405 GHz). The great potential of the two polarimetric sensors with different frequencies for the detection of archaeological remains has been demonstrated thanks to the sand penetration capability of both C-band and L-band sensors. The choice to analyze radar sensors is based on their 24-hour observation capability, independent of Sun illumination and meteorological conditions, and on the electromagnetic properties of the target that they can provide, information not derivable from optical images.

  9. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  10. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
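
    The calibration flow the two records above describe maps directly onto standard OpenCV calls. The sketch below shows a minimal chessboard-based self-calibration followed by undistortion; the board geometry and image list are assumptions, and for a wide-angle lens like the GoPro's, OpenCV's fisheye model may fit better than the default pinhole-plus-distortion model used here.

        import cv2
        import numpy as np

        def calibrate_and_undistort(image_paths, board=(9, 6)):
            """Chessboard self-calibration, then undistortion of the
            first image. `board` counts inner corners per row/column."""
            objp = np.zeros((board[0] * board[1], 3), np.float32)
            objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)
            obj_pts, img_pts = [], []
            for path in image_paths:
                gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                found, corners = cv2.findChessboardCorners(gray, board)
                if found:
                    obj_pts.append(objp)
                    img_pts.append(corners)
            _, K, dist, _, _ = cv2.calibrateCamera(
                obj_pts, img_pts, gray.shape[::-1], None, None)
            return cv2.undistort(cv2.imread(image_paths[0]), K, dist)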

  11. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    NASA Astrophysics Data System (ADS)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well-established technology. The application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Through a wide range of wavelengths, hyperspectral data can precisely capture narrow bands of spectra. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts and allow the experimental design to be closely controlled during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman of the Faculty of Forestry, Universiti Putra Malaysia, in 2002, using the AISA sensor manufactured by Specim Ltd., Finland. The main objective has been to develop methods directly suited to practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the most important interests in current practice. Experience from these studies shows that retrieval of timber volume and tree discrimination with this system performs well, and in some respects better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  12. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
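
    The flood-formation determination above is described only at a high level; one plausible building block is change detection against a dry-weather reference frame, sketched below. This is an illustrative stand-in, not the paper's algorithm, and converting changed pixel rows into a water elevation would further require the camera-to-site geometric calibration the authors mention.

        import cv2
        import numpy as np

        def flooded_fraction(reference_bgr, current_bgr, thresh=30):
            """Fraction of the monitored scene whose gray level departs
            from a dry-weather reference frame by more than `thresh`."""
            ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
            cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
            return (np.abs(cur - ref) > thresh).mean()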

  13. Representing Geospatial Environment Observation Capability Information: A Case Study of Managing Flood Monitoring Sensors in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Guan, Qingfeng; Li, Jie; Wang, Ke; Chen, Nengcheng

    2016-01-01

    Sensor inquirers cannot obtain comprehensive or accurate observation capability information, because current observation capability modeling considers neither the union of multiple sensors nor the effect of geospatial environmental features on the observation capability of sensors. These limitations result in a failure to discover credible sensors or to plan for their collaboration in environmental monitoring. The Geospatial Environmental Observation Capability (GEOC) is proposed in this study and can be used as an information basis for the reliable discovery and collaborative planning of multiple environmental sensors. A field-based GEOC (GEOCF) information representation model is built. Quintuple GEOCF feature components and two GEOCF operations are formulated based on the geospatial field conceptual framework. The proposed GEOCF markup language is used to formalize the proposed GEOCF. A prototype system called GEOCapabilityManager is developed, and a case study is conducted for flood observation in the lower reaches of the Jinsha River Basin. The applicability of the GEOCF is verified through the reliable discovery of flood monitoring sensors and planning for the collaboration of these sensors. PMID:27999247

  14. Representing Geospatial Environment Observation Capability Information: A Case Study of Managing Flood Monitoring Sensors in the Jinsha River Basin.

    PubMed

    Hu, Chuli; Guan, Qingfeng; Li, Jie; Wang, Ke; Chen, Nengcheng

    2016-12-16

    Sensor inquirers cannot obtain comprehensive or accurate observation capability information, because current observation capability modeling considers neither the union of multiple sensors nor the effect of geospatial environmental features on the observation capability of sensors. These limitations result in a failure to discover credible sensors or to plan for their collaboration in environmental monitoring. The Geospatial Environmental Observation Capability (GEOC) is proposed in this study and can be used as an information basis for the reliable discovery and collaborative planning of multiple environmental sensors. A field-based GEOC (GEOCF) information representation model is built. Quintuple GEOCF feature components and two GEOCF operations are formulated based on the geospatial field conceptual framework. The proposed GEOCF markup language is used to formalize the proposed GEOCF. A prototype system called GEOCapabilityManager is developed, and a case study is conducted for flood observation in the lower reaches of the Jinsha River Basin. The applicability of the GEOCF is verified through the reliable discovery of flood monitoring sensors and planning for the collaboration of these sensors.

  15. Onboard TDI stage estimation and calibration using SNR analysis

    NASA Astrophysics Data System (ADS)

    Haghshenas, Javad

    2017-09-01

    The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is performed based on a noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array without sacrificing spatial resolution. However, for satellite imaging, in order to exploit the advantages that the TDI mode of operation offers, attention must be given to the parameters that affect the image quality of TDI sensors, such as jitter, vibration, and noise. A predefined number of TDI stages may not properly satisfy the image quality requirement of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be capable of setting the TDI stages for every shot based on the affecting parameters. This paper deals with optimally estimating and setting the stages based on trade-offs among MTF, noise, and SNR. On-board SNR estimation is simulated using atmosphere analysis based on the MODTRAN algorithm in PcModWin software. According to the noise models, we propose a formulation to estimate the TDI stages in such a way as to satisfy the system SNR requirement. The MTF requirement must be satisfied in the same manner. A proper combination of both parameters guarantees use of the full dynamic range along with high SNR and image quality.
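
    The SNR side of this trade-off lends itself to a compact sketch. In TDI, signal grows linearly with the stage count N while shot noise grows as its square root, so SNR improves roughly as the square root of N. The function below picks the smallest N meeting an SNR requirement under a simplified shot/dark/read noise model; the per-stage charge values and the noise model are assumptions, and the MTF and saturation limits discussed in the abstract would cap N from above.

        import math

        def min_tdi_stages(signal_e, dark_e, read_noise_e, snr_req, n_max=128):
            """Smallest TDI stage count N satisfying
            SNR = N*S / sqrt(N*S + N*D + R^2) >= snr_req,
            with S and D the per-stage signal and dark charge in
            electrons and R the read noise in electrons."""
            for n in range(1, n_max + 1):
                snr = n * signal_e / math.sqrt(n * signal_e + n * dark_e + read_noise_e**2)
                if snr >= snr_req:
                    return n
            return None  # requirement not reachable within n_max stages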

  16. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor

    PubMed Central

    Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi

    2016-01-01

    Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. The center of the brightest region is then taken to be the solar center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and partial shading by buildings. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real time. PMID:27898002

  17. Design of a Solar Tracking System Using the Brightest Region in the Sky Image Sensor.

    PubMed

    Wei, Ching-Chuan; Song, Yu-Chang; Chang, Chia-Chi; Lin, Chuan-Bi

    2016-11-25

    Solar energy is certainly an energy source worth exploring and utilizing because of the environmental protection it offers. However, the conversion efficiency of solar energy is still low. If the photovoltaic panel perpendicularly tracks the sun, the solar energy conversion efficiency will be improved. In this article, we propose an innovative method to track the sun using an image sensor. In our method, it is logical to assume that the points of the brightest region in the sky image represent the location of the sun. The center of the brightest region is then taken to be the solar center and is mathematically calculated using an embedded processor (Raspberry Pi). Finally, the location information on the sun center is sent to the embedded processor to control two servo motors that are capable of moving both horizontally and vertically to track the sun. In comparison with existing sun tracking methods using image sensors, such as the Hough transform method, our method based on the brightest region in the sky image remains accurate under conditions such as a sunny day and partial shading by buildings. The practical sun tracking system using our method was implemented and tested. The results reveal that the system successfully captured the real sun center in most weather conditions, and the servo motor system was able to direct the photovoltaic panel perpendicularly to the sun center. In addition, our system can be easily and practically integrated, and can operate in real time.
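
    The solar-center computation the two records above describe reduces to a weighted centroid over the brightest pixels. A minimal sketch follows; the brightness quantile used to define the region is an assumption, and the servo control loop is omitted.

        import numpy as np

        def sun_center(gray_image, quantile=0.999):
            """Intensity-weighted centroid (x, y) of the brightest region
            of a sky image, serving as the solar-center estimate."""
            img = np.asarray(gray_image, dtype=np.float32)
            mask = img >= np.quantile(img, quantile)
            ys, xs = np.nonzero(mask)
            w = img[ys, xs]
            return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()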

  18. Differentiation of benign and malignant breast lesions by mechanical imaging

    PubMed Central

    Kearney, Thomas; Pollak, Stanley B.; Rohatgi, Chand; Sarvazyan, Noune; Airapetian, Suren; Browning, Stephanie; Sarvazyan, Armen

    2009-01-01

    Mechanical imaging yields a tissue elasticity map and provides quantitative characterization of a detected pathology. The changes in the surface stress patterns as a function of applied load provide information about the elastic composition and geometry of the underlying tissue structures. The objective of this study is the clinical evaluation of the breast mechanical imager for breast lesion characterization and differentiation between benign and malignant lesions. The breast mechanical imager includes a probe with a pressure sensor array and an electronic unit providing data acquisition from the pressure sensors and communication with a touch-screen laptop computer. We have developed an examination procedure and algorithms to provide assessment of breast lesion features such as hardness-related parameters, mobility, and shape. A statistical Bayesian classifier was constructed to distinguish between benign and malignant lesions by utilizing all the listed features as input. Clinical results for 179 cases, collected at four different clinical sites, demonstrated that the breast mechanical imager provides reliable image formation of breast tissue abnormalities and calculation of lesion features. Malignant breast lesions (histologically confirmed) demonstrated increased hardness and strain hardening as well as decreased mobility and longer boundary length in comparison with benign lesions. Statistical analysis of differentiation capability for 147 benign and 32 malignant lesions revealed an average sensitivity of 91.4% and specificity of 86.8% with a standard deviation of ±6.1%. The area under the receiver operating characteristic curve characterizing benign and malignant lesion discrimination is 86.1%, with a confidence interval ranging from 80.3 to 90.9% and a significance level of P = 0.0001 (area = 50%). The multisite clinical study demonstrated the capability of mechanical imaging for characterization and differentiation of benign and malignant breast lesions. We hypothesize that the breast mechanical imager has the potential to be used as a cost-effective device for cancer diagnostics that could reduce the benign biopsy rate, serve as an adjunct to mammography, and be utilized as a screening device for breast cancer detection. PMID:19306059
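
    The classification step above can be illustrated with a Gaussian naive Bayes model over the listed lesion features. The sketch below uses scikit-learn as a stand-in for the authors' statistical Bayesian classifier, and the feature rows are synthetic placeholders, not study data.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Feature rows: [hardness, strain_hardening, mobility, boundary_length]
        X = np.array([[1.1, 0.1, 0.9, 30.0],   # benign-like (synthetic)
                      [1.4, 0.2, 0.8, 34.0],
                      [3.5, 0.8, 0.2, 65.0],   # malignant-like (synthetic)
                      [3.1, 0.7, 0.3, 60.0]])
        y = np.array([0, 0, 1, 1])             # 0 = benign, 1 = malignant

        clf = GaussianNB().fit(X, y)
        print(clf.predict_proba([[2.8, 0.6, 0.3, 55.0]]))  # [P(benign), P(malignant)]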

  19. The Adaptive Optics Lucky Imager: Diffraction limited imaging at visible wavelengths with large ground-based telescopes

    NASA Astrophysics Data System (ADS)

    Crass, Jonathan; Mackay, Craig; King, David; Rebolo-López, Rafael; Labadie, Lucas; Puga, Marta; Oscoz, Alejandro; González Escalera, Victor; Pérez Garrido, Antonio; López, Roberto; Pérez-Prieto, Jorge; Rodríguez-Ramos, Luis; Velasco, Sergio; Villó, Isidro

    2015-01-01

    One of the continuing challenges facing astronomers today is the need to obtain ever higher resolution images of the sky. Whether studying nearby crowded fields or distant objects, with increased resolution comes the ability to probe systems in more detail and advance our understanding of the Universe. Obtaining these high-resolution images at visible wavelengths, however, has previously been limited to the Hubble Space Telescope (HST), because atmospheric effects limit the spatial resolution of ground-based telescopes to a fraction of their potential. With HST now having a finite lifespan, it is prudent to investigate other techniques capable of providing these kinds of observations from the ground. Maintaining this capability is one of the goals of the Adaptive Optics Lucky Imager (AOLI). Achieving the highest resolutions requires the largest telescope apertures; however, this comes at the cost of increased atmospheric distortion. To overcome these atmospheric effects, there are two main techniques employed today: adaptive optics (AO) and lucky imaging. These techniques individually are unable to provide diffraction-limited imaging in the visible on large ground-based telescopes; AO currently only works at infrared wavelengths, while lucky imaging reduces in effectiveness on telescopes greater than 2.5 metres in diameter. The limitations of both techniques can be overcome by combining them to provide diffraction-limited imaging at visible wavelengths from the ground. The Adaptive Optics Lucky Imager is being developed as a European collaboration and combines AO and lucky imaging in a dedicated instrument for the first time. Initially intended for use on the 4.2 metre William Herschel Telescope, AOLI uses a low-order adaptive optics system to reduce the effects of atmospheric turbulence before imaging with a lucky imaging based science detector. The AO system employs a novel type of wavefront sensor, the non-linear Curvature Wavefront Sensor (nlCWFS), which provides significant sky coverage using natural guide stars alone. Here we present an overview of the instrument design, results from the first on-sky and laboratory testing, and ongoing development work on the instrument and its adaptive optics system.
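
    The lucky-imaging half of AOLI rests on a simple principle: record many short exposures, keep only the sharpest few percent, then align and co-add them. The sketch below implements that selection-and-stacking idea with a crude sharpness proxy; it is a conceptual stand-in, not AOLI's actual pipeline.

        import numpy as np

        def lucky_stack(frames, keep_frac=0.05):
            """Keep the sharpest fraction of short exposures (peak pixel
            after background subtraction as the quality proxy), align on
            the brightest speckle, and co-add."""
            frames = [f.astype(np.float32) - np.median(f) for f in frames]
            ranked = sorted(frames, key=lambda f: f.max(), reverse=True)
            best = ranked[:max(1, int(len(ranked) * keep_frac))]
            ref = np.unravel_index(np.argmax(best[0]), best[0].shape)
            stack = np.zeros_like(best[0])
            for f in best:
                peak = np.unravel_index(np.argmax(f), f.shape)
                stack += np.roll(f, (ref[0] - peak[0], ref[1] - peak[1]), axis=(0, 1))
            return stack / len(best)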

  20. Engineering Novel Detectors and Sensors for MRI

    PubMed Central

    Qian, Chunqi; Zabow, Gary; Koretsky, Alan

    2013-01-01

    Increasing detection sensitivity and image contrast have always been major topics of research in MRI. In this perspective, we summarize two engineering approaches to make detectors and sensors that have potential to extend the capability of MRI. The first approach is to integrate miniaturized detectors with a wireless powered parametric amplifier to enhance the detection sensitivity of remotely coupled detectors. The second approach is to microfabricate contrast agents with encoded multispectral frequency shifts, whose properties can be specified and fine-tuned by geometry. These two complementary approaches will benefit from the rapid development in nanotechnology and microfabrication which should enable new opportunities for MRI. PMID:23245489

  1. Micro-Hall devices for magnetic, electric and photo-detection

    NASA Astrophysics Data System (ADS)

    Gilbertson, A.; Sadeghi, H.; Panchal, V.; Kazakova, O.; Lambert, C. J.; Solin, S. A.; Cohen, L. F.

    Multifunctional mesoscopic sensors capable of detecting local magnetic (B), electric (E), and optical fields can greatly facilitate image capture in nano-arrays that address a multitude of disciplines. The use of micro-Hall devices as B-field sensors and, more recently, as E-field sensors is well established. Here we report the real-space voltage response of InSb/AlInSb micro-Hall devices not only to local E- and B-fields but also to photo-excitation, using scanning probe microscopy. We show that the ultrafast generation of localised photocarriers results in conductance perturbations analogous to those produced by local E-fields. Our experimental results are in good agreement with tight-binding transport calculations in the diffusive regime. At room temperature, samples exhibit a magnetic sensitivity of >500 nT/√Hz, an optical noise equivalent power of >20 pW/√Hz (λ = 635 nm) comparable to commercial photoconductive detectors, and a charge sensitivity of >0.04 e/√Hz comparable to that of single electron transistors. Work done while on sabbatical from Washington University. Co-founder of PixelEXX, a start-up whose focus is imaging nano-arrays.

  2. Crosscutting Airborne Remote Sensing Technologies for Oil and Gas and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Aubrey, A. D.; Frankenberg, C.; Green, R. O.; Eastwood, M. L.; Thompson, D. R.; Thorpe, A. K.

    2015-01-01

    Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote sensing technique used to generate 2-dimensional maps of surface properties over large spatial areas. Traditional applications for passive airborne imaging spectroscopy include interrogation of surface composition, such as mapping of vegetation diversity and surface geological composition. Two recent applications are particularly relevant to the needs of both the oil and gas and government sectors: quantification of surficial hydrocarbon thickness in aquatic environments and mapping of atmospheric greenhouse gas components. These techniques provide valuable capabilities for characterizing petroleum seepage in addition to detecting and quantifying fugitive emissions. New empirical data that provide insight into the source strength of anthropogenic methane will be reviewed, with particular emphasis on the evolving constraints enabled by new methane remote sensing techniques. Contemporary studies attribute high-strength point sources as contributing significantly to the national methane inventory and underscore the need for high-performance remote sensing technologies that provide quantitative leak detection. Imaging sensors that map spatial distributions of methane anomalies provide effective means to detect, localize, and quantify fugitive leaks. Airborne remote sensing instruments provide the unique combination of high spatial resolution (<1 m) and large coverage required to directly attribute methane emissions to individual emission sources. This capability cannot currently be achieved using spaceborne sensors. In this study, results from recent NASA remote sensing field experiments focused on point-source leak detection will be highlighted. This includes existing quantitative capabilities for oil and methane using state-of-the-art airborne remote sensing instruments. While these capabilities are of interest to NASA for assessment of environmental impact and global climate change, industry similarly seeks to detect and localize leaks of both oil and methane across operating fields. In some cases, the higher sensitivities desired for upstream and downstream applications can only be provided by new airborne remote sensing instruments tailored specifically to a given application. There exists a unique opportunity for alignment of efforts between the commercial and government sectors to advance the next generation of instruments and provide more sensitive leak detection capabilities, including those for quantitative source strength determination.

  3. The Goodrich 3rd generation DB-110 system: operational on tactical and unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Iyengar, Mrinal; Lange, Davis

    2006-05-01

    Goodrich's DB-110 Reconnaissance Airborne Pod for TORnado (RAPTOR) and Data Link Ground Station (DLGS) have been used operationally for several years by the Royal Air Force (RAF). A variant of the RAPTOR DB-110 Sensor System is currently being used by the Japan Maritime Self Defense Force (JMSDF). Recently, the DB-110 system was flown on the Predator B Unmanned Aerial Vehicle (UAV), demonstrating the DB-110 system's utility on unmanned reconnaissance aircraft. The DB-110 provides dual-band EO and IR imaging capability for long, medium, and short standoff ranges, including oblique and over-flight imaging, in a single sensor package. The DB-110 system also has proven performance for real-time, high-bandwidth data link transmission of imagery. Goodrich has leveraged this operational experience in building a 3rd Generation DB-110 system, including a new Reconnaissance Airborne Pod and Ground System, to be first used by the Polish Air Force. This 3rd Generation system maintains all the capability of the current 2nd Generation DB-110 system and adds several new features. The 3rd Generation upgrades include an increase in resolution via new focal planes, the addition of a third ("super-wide") field of view, and new avionics. This paper summarizes the Goodrich DB-110 3rd Generation System in terms of its basic design and capabilities. A recent demonstration of the DB-110 on the Predator B UAV is reviewed, including sample imagery.

  4. MSTI-3 sensor package optical design

    NASA Astrophysics Data System (ADS)

    Horton, Richard F.; Baker, William G.; Griggs, Michael; Nguyen, Van; Baker, H. Vernon

    1995-06-01

    The MSTI-3 sensor package is a three-band imaging telescope for military and dual-use sensing missions. The MSTI-3 mission is one of the Air Force Phillips Laboratory's Pegasus-launched space missions, the third in a series of state-of-the-art lightweight sensors on low-cost satellites. The satellite is planned for launch into a 425 km orbit in late 1995. The MSTI-3 satellite is configured with a down-looking two-axis gimbal and gimbal mirror. The gimbal mirror is an approximately 13 cm by 29 cm mirror which allows a field of regard of approximately 100 degrees by 180 degrees. The optical train uses several novel optical features to allow for compactness and light weight. A 105 mm Ritchey-Chretien Cassegrain imaging system with a CaF2 dome astigmatism corrector is followed by a CaF2 beamsplitter cube assembly at the system's first focus. The dichroic beamsplitter cube assembly separates the light into a visible channel and two IR channels of approximately 2.5 to 3.3 micron (SWIR) and 3.5 to 4.5 micron (MWIR) wavelength bands. The two IR imaging channels each consist of a unity-power re-imaging lens cluster, a cooled seven-position filter wheel, a cooled Lyot stop, and an Amber 256 x 256 InSb array camera. The visible channel uses a unity-power re-imaging system prior to a linear variable filter with a Sony CCD array, which allows for a multispectral imaging capability in the 0.5 to 0.8 micron region. The telescope field of view is 1.4 degrees square.

  5. Extreme-Environment Silicon-Carbide (SiC) Wireless Sensor Suite

    NASA Technical Reports Server (NTRS)

    Yang, Jie

    2015-01-01

    Phase II objectives: Develop an integrated silicon-carbide wireless sensor suite capable of in situ measurements of critical characteristics of an NTP engine. The silicon-carbide wireless sensor suite is composed of: extreme-environment sensors; dedicated high-temperature (450 deg C) silicon-carbide electronics that provide power and signal conditioning capabilities as well as radio frequency modulation and wireless data transmission capabilities; and an onboard energy harvesting system as a power source.

  6. iCATSI: multi-pixel imaging differential spectroradiometer for standoff detection and quantification of chemical threats

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lavoie, Hugo; Bouffard, François; Thériault, Jean-Marc; Vallieres, Christian; Roy, Claude; Dubé, Denis

    2011-11-01

    Homeland security personnel and first responders are often faced with safety situations involving the identification of unknown volatile chemicals. Examples include industrial fires, chemical warfare, and industrial leaks. The Improved Compact ATmospheric Sounding Interferometer (iCATSI) sensor has been developed to investigate the standoff detection and identification of toxic industrial chemicals (TICs), chemical warfare agents (CWAs) and other chemicals. iCATSI is a combination of CATSI, a standoff differential FTIR instrument optimised for the characterization of chemicals, and the MR-i, the hyperspectral imaging spectroradiometer of ABB Bomem based on the proven MR spectroradiometers. The instrument is equipped with a dual-input telescope to perform optical background subtraction. The resulting signal is the difference between the spectral radiances entering the two input ports. With that method, the signal from the background is automatically removed from the signal of the target of interest. The iCATSI sensor is able to detect, spectrally resolve and identify 5-meter plumes at up to 5 km range. The instrument is capable of sensing in the VLWIR (cut-off near 14 μm) to support research related to standoff chemical detection. In one of its configurations, iCATSI produces three 24 × 16 spectral images per second from 5.5 to 14 μm at a spectral resolution of 16 cm-1. In another configuration, iCATSI produces two to four spectral images per second of 256 × 256 pixels from 8 to 13 μm at the same spectral resolution. An overview of the capabilities of the instrument and results from tests and field trials will be presented.

  7. Image analysis algorithms for the advanced radiographic capability (ARC) grating tilt sensor at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Roberts, Randy S.; Bliss, Erlan S.; Rushford, Michael C.; Halpin, John M.; Awwal, Abdul A. S.; Leach, Richard R.

    2014-09-01

    The Advanced Radiographic Capability (ARC) at the National Ignition Facility (NIF) is a laser system designed to produce a sequence of short pulses used to backlight imploding fuel capsules. Laser pulses from a short-pulse oscillator are dispersed in wavelength into long, low-power pulses, injected into the NIF main laser for amplification, and then compressed into high-power pulses before being directed into the NIF target chamber. In the target chamber, the laser pulses hit targets which produce x-rays used to backlight imploding fuel capsules. Compression of the ARC laser pulses is accomplished with a set of precision-surveyed optical gratings mounted inside vacuum vessels. The tilt of each grating is monitored by a measurement system consisting of a laser diode, camera and crosshair, all mounted in a pedestal outside the vacuum vessel, and a mirror mounted on the back of the grating inside the vacuum vessel. The crosshair is mounted in front of the camera, and a diffraction pattern is formed when it is illuminated with the laser diode beam reflected from the mirror. This diffraction pattern contains information related to relative movements between the grating and the pedestal. Image analysis algorithms have been developed to determine these relative movements. In this paper we elaborate on features in the diffraction pattern and describe the image analysis algorithms used to monitor grating tilt changes. Experimental results are provided which indicate the high degree of sensitivity provided by the tilt sensor and image analysis algorithms.
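
    Tilt changes manifest as shifts of the crosshair diffraction pattern on the camera, and a common way to measure such shifts is FFT phase correlation, sketched below. This is an illustrative technique, not the NIF algorithms described in the paper, which are more elaborate and reach sub-pixel sensitivity.

        import numpy as np

        def pattern_shift(reference, current):
            """Integer-pixel (dy, dx) shift of `current` relative to
            `reference`, via normalized FFT phase correlation."""
            cross = np.fft.fft2(current) * np.conj(np.fft.fft2(reference))
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            ny, nx = corr.shape
            return ((dy + ny // 2) % ny - ny // 2,
                    (dx + nx // 2) % nx - nx // 2)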

  8. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging, targeted specifically at microsurgical intervention applications, was developed and studied. As part of this work several ultra-high-speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform was developed using the graphics processing unit (GPU) to accelerate OCT signal processing, image reconstruction, visualization, and volume rendering. Several GPU-based algorithms such as the non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the image reconstruction and visualization bottlenecks that widely exist in current ultra-high-speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor-based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feedback control, the tool was shown to be capable of tracking a target surface and compensating for motion. A micro-incision test on a phantom was performed using the CP-OCT-sensor-integrated hand-held tool, which showed an incision error of less than ±5 microns, compared to errors greater than 100 microns for free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT-guided micro-manipulation using a phantom. Multiple volume renderings of one 3D data set were performed from different view angles to allow accurate monitoring of the micro-manipulation and to let the user clearly monitor the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a real-time comprehensive spatial view of the microsurgical region and accurate depth perception.
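
    The core FD-OCT reconstruction chain that the dissertation moves onto the GPU can be summarized in a few lines: background subtraction, resampling of the spectrum onto a uniform wavenumber grid (the step the NUFFT accelerates), dispersion-compensation phase correction, and an inverse FFT. The CPU sketch below is a conceptual outline with an assumed wavelength axis and dispersion coefficient, not the dissertation's GPU code.

        import numpy as np

        def reconstruct_ascan(spectrum, lam, a2=0.0):
            """One A-scan from one spectrometer readout. `lam` is the
            per-pixel wavelength axis (increasing); `a2` is a quadratic
            dispersion-compensation coefficient (assumed calibration)."""
            k = 2 * np.pi / lam                      # wavenumber, decreasing
            k_uni = np.linspace(k.min(), k.max(), k.size)
            s = spectrum - spectrum.mean()           # remove DC background
            s_uni = np.interp(k_uni, k[::-1], s[::-1])
            s_uni = s_uni * np.exp(-1j * a2 * (k_uni - k_uni.mean()) ** 2)
            return np.abs(np.fft.ifft(s_uni))[: k.size // 2]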

  9. Flexible mobile robot system for smart optical pipe inspection

    NASA Astrophysics Data System (ADS)

    Kampfer, Wolfram; Bartzke, Ralf; Ziehl, Wolfgang

    1998-03-01

    Pipe damage can be inspected and graded with TV technology available on the market. Remotely controlled vehicles carry a TV camera through pipes. Thus, depending on the experience and capability of the operator, diagnosis failures cannot be avoided. The classification of damage requires knowledge of the exact geometrical dimensions of the damage, such as the width and depth of cracks, fractures and defective connections. Within the framework of a joint R&D project, a sensor-based pipe inspection system named RODIAS has been developed by two partners from industry and a research institute. It consists of a remotely controlled mobile robot which carries intelligent sensors for on-line sewerage inspection. The sensor head combines a 3D optical sensor with a laser distance sensor. The laser distance sensor is integrated in the optical system of the camera and can measure the distance between camera and object. The angle of view can be determined from the position of the pan-and-tilt unit. With coordinate transformations it is possible to calculate the spatial coordinates for every point of the video image, so the geometry of an object can be described exactly. The company Optimess has developed TriScan32, a special software package for pipe condition classification. The user can start complex measurements of profiles, pipe displacements or crack widths simply by pressing a push-button. The measuring results are stored together with other data, such as verbal damage descriptions and digitized images, in a database.
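
    The coordinate transformation mentioned above, from the pan/tilt angles plus the laser-measured range to spatial coordinates, is a spherical-to-Cartesian conversion. A minimal sketch follows; the axis conventions are assumptions, not those documented for RODIAS.

        import math

        def camera_point_to_xyz(pan_deg, tilt_deg, range_mm):
            """Laser-spot coordinates in the vehicle frame, assuming x
            along the pipe axis, pan about z and tilt about y."""
            pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
            x = range_mm * math.cos(tilt) * math.cos(pan)
            y = range_mm * math.cos(tilt) * math.sin(pan)
            z = range_mm * math.sin(tilt)
            return x, y, z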

  10. Electric fish as natural models for technical sensor systems

    NASA Astrophysics Data System (ADS)

    von der Emde, Gerhard; Bousack, Herbert; Huck, Christina; Mayekar, Kavita; Pabst, Michael; Zhang, Yi

    2009-05-01

    Instead of vision, many animals use alternative senses for object detection. Weakly electric fish employ "active electrolocation", during which they discharge an electric organ emitting electrical current pulses (electric organ discharges, EODs). Local EODs are sensed by electroreceptors in the fish's skin, which respond to changes of the signal caused by nearby objects. Fish can gain information about attributes of an object, such as size, shape, distance, and complex impedance. When close to the fish, each object projects an 'electric image' onto the fish's skin. In order to get information about an object, the fish has to analyze the object's electric image by sampling its voltage distribution with the electroreceptors. We now know a great deal about the mechanisms the fish use to gain information about objects in their environment. Inspired by the remarkable capabilities of weakly electric fish in detecting and recognizing objects with their electric sense, we are designing technical sensor systems that can solve similar sensing problems. We applied the principles of active electrolocation to devices that produce electrical current pulses in water and simultaneously sense local current densities. Depending on the specific task, sensors can be designed which detect an object, localize it in space, determine its distance, and measure certain object properties such as material properties, thickness, or material faults. We present first experiments and FEM simulations on the optimal sensor arrangement with regard to the sensor requirements, e.g., localization of objects or distance measurements. Different methods of sensor read-out and signal processing are compared.

  11. Simulating optoelectronic systems for remote sensing with SENSOR

    NASA Astrophysics Data System (ADS)

    Boerner, Anko

    2003-04-01

    The consistent end-to-end simulation of airborne and spaceborne remote sensing systems is an important task and sometimes the only way for the adaptation and optimization of a sensor and its observation conditions, the choice and test of algorithms for data processing, error estimation and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software ENvironment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. It allows the simulation of a wide range of optoelectronic systems for remote sensing. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray tracing algorithm. The second part of the simulation environment considers the radiometry. It calculates the at-sensor radiance using a pre-calculated multidimensional lookup-table taking the atmospheric influence on the radiation into account. Part three consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimization requires the additional application of task-specific data processing algorithms. The principle of the end-to-end-simulation approach is explained, all relevant concepts of SENSOR are discussed, and examples of its use are given. The verification of SENSOR is demonstrated.
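
    The radiometric part of such an end-to-end simulator, computing at-sensor radiance from a pre-calculated multidimensional lookup table, can be mimicked with a regular-grid interpolator. The axes, values, and units below are placeholders standing in for the output of a radiative-transfer code, not SENSOR's actual table.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        refl = np.linspace(0.0, 1.0, 11)   # surface reflectance
        wv = np.linspace(0.5, 4.0, 8)      # water vapour column, g/cm^2
        vza = np.linspace(0.0, 60.0, 7)    # view zenith angle, degrees
        lut = np.random.rand(11, 8, 7)     # placeholder radiance values

        at_sensor_radiance = RegularGridInterpolator((refl, wv, vza), lut)
        print(at_sensor_radiance([[0.3, 2.1, 15.0]]))  # interpolated radiance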

  12. New radiological material detection technologies for nuclear forensics: Remote optical imaging and graphene-based sensors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Richard Karl; Martin, Jeffrey B.; Wiemann, Dora K.

    We developed new detector technologies to identify the presence of radioactive materials for nuclear forensics applications. First, we investigated an optical radiation detection technique based on imaging nitrogen fluorescence excited by ionizing radiation. We demonstrated optical detection in air under indoor and outdoor conditions for alpha particles and gamma radiation at distances up to 75 meters. We also contributed to the development of next generation systems and concepts that could enable remote detection at distances greater than 1 km, and originated a concept that could enable daytime operation of the technique. A second area of research was the development of room-temperature graphene-based sensors for radiation detection and measurement. In this project, we observed tunable optical and charged particle detection, and developed improved devices. With further development, the advancements described in this report could enable new capabilities for nuclear forensics applications.

  13. Effect of dose and size on defect engineering in carbon cluster implanted silicon wafers

    NASA Astrophysics Data System (ADS)

    Okuyama, Ryosuke; Masada, Ayumi; Shigematsu, Satoshi; Kadono, Takeshi; Hirose, Ryo; Koga, Yoshihiro; Okuda, Hidehiko; Kurita, Kazunari

    2018-01-01

    Carbon-cluster-ion-implanted defects were investigated by high-resolution cross-sectional transmission electron microscopy toward achieving high-performance CMOS image sensors. We revealed that implantation damage formation in the silicon wafer bulk differs significantly between carbon-cluster and monomer ions after implantation. After epitaxial growth, small and large defects were observed in the carbon-cluster-implanted region. The electron diffraction patterns of both small and large defects match that of bulk crystalline silicon in the implanted region. On the one hand, we assume that a silicon carbide structure was not formed in the implanted region and that the small defects formed as complexes of carbon and interstitial silicon. On the other hand, the large defects are hypothesized to originate from the recrystallization of the amorphous layer formed by high-dose carbon-cluster implantation. These defects are considered to contribute to the powerful gettering capability required for high-performance CMOS image sensors.

  14. Interferometric Reflectance Imaging Sensor (IRIS)—A Platform Technology for Multiplexed Diagnostics and Digital Detection

    PubMed Central

    Avci, Oguzhan; Lortlar Ünlü, Nese; Yalçın Özkumur, Ayça; Ünlü, M. Selim

    2015-01-01

    Over the last decade, the growing need in disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostics tools, and the necessity for further improvements. Optical biosensors can lay the foundations for future generation diagnostics by providing means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble biomolecular mass measurements) and (ii) high-magnification (digital detection of individual nanoparticles) along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphism, quantification of transcription factor DNA binding, and high sensitivity digital sensing and characterization of nanoparticles and viruses. PMID:26205273

  15. Design of an ultrasonic micro-array for near field sensing during retinal microsurgery.

    PubMed

    Clarke, Clyde; Etienne-Cummings, Ralph

    2006-01-01

    A method for obtaining the optimal and specific sensor parameters for a tool-tip mountable ultrasonic transducer micro-array is presented. The ultrasonic transducer array sensor parameters, such as frequency of operation, element size, inter-element spacing, number of elements and transducer geometry, are determined using a quadratic programming method that maximizes directivity while constraining the total array size to 4 mm2 and meeting the resolution required for retinal imaging. The technique is used to design a uniformly spaced N×N transducer array that is capable of resolving structures in the retina as small as 2 microm from a distance of 100 microm. The resultant 37×37 array of 16 microm transducers with 26 microm spacing will be realized as a Capacitive Micromachined Ultrasonic Transducer (CMUT) array and used for imaging and robotic guidance during retinal microsurgery.

  16. REPORT ON AN ORBITAL MAPPING SYSTEM.

    USGS Publications Warehouse

    Colvocoresses, Alden P.; ,

    1984-01-01

    During June 1984, the International Society for Photogrammetry and Remote Sensing accepted a committee report that defines an Orbital Mapping System (OMS) to follow Landsat and other Earth-sensing systems. The OMS involves the same orbital parameters as Landsats 1, 2, and 3; three wave bands (two in the visible and one in the near infrared); and continuous stereoscopic capability. The sensors involve solid-state linear arrays with data acquisition (including stereo) designed for one-dimensional data processing. The system has a resolution capability of 10-m pixels and is capable of producing 1:50,000-scale image maps with 20-m contours. In addition to mapping, the system is designed to monitor the works of man as well as nature in a cost-effective manner.

  17. New amorphous-silicon image sensor for x-ray diagnostic medical imaging applications

    NASA Astrophysics Data System (ADS)

    Weisfield, Richard L.; Hartney, Mark A.; Street, Robert A.; Apte, Raj B.

    1998-07-01

    This paper introduces new high-resolution amorphous silicon (a-Si) image sensors specifically configured for demonstrating film-quality medical x-ray imaging capabilities. The devices utilize an x-ray phosphor screen coupled to an array of a-Si photodiodes for detecting visible light, and a-Si thin-film transistors (TFTs) for connecting the photodiodes to external readout electronics. We have developed imagers based on a pixel size of 127 μm × 127 μm with an approximately page-size imaging area of 244 mm × 195 mm and an array size of 1,536 data lines by 1,920 gate lines, for a total of 2.95 million pixels. More recently, we have developed a much larger imager based on the same pixel pattern, which covers an area of approximately 406 mm × 293 mm, with 2,304 data lines by 3,200 gate lines, for a total of nearly 7.4 million pixels. This is very likely the largest image sensor array and the highest pixel count detector fabricated on a single substrate. Both imagers connect to a standard PC and are capable of taking an image in a few seconds. Through design rule optimization we have achieved a light-sensitive area of 57% and optimized quantum efficiency for x-ray phosphor output in the green part of the spectrum, yielding an average quantum efficiency between 500 and 600 nm of approximately 70%. At the same time, we have reduced extraneous leakage currents on these devices to a few fA per pixel, which allows very high dynamic range to be achieved. We have characterized leakage currents as a function of photodiode bias, time and temperature to demonstrate high stability over these large arrays. At the electronics level, we have adopted a new generation of low-noise, charge-sensitive amplifiers coupled to 12-bit A/D converters. Considerable attention was given to reducing electronic noise in order to achieve a large dynamic range (over 4,000:1) for medical imaging applications. Through a combination of low data-line capacitance, readout amplifier design, optimized timing, and noise cancellation techniques, we achieve 1,000e to 2,000e of noise for the page-size and large-size arrays, respectively. This allows true 12-bit performance and quantum-limited images over a wide range of x-ray exposures. Various approaches to reducing line-correlated noise have been implemented and will be discussed. Images documenting the improved performance will be presented. Avenues for improvement are under development, including higher resolution 97 μm pixel imagers, further improvements in detective quantum efficiency, and characterization of dynamic behavior.

  18. Cartographic potential of SPOT image data

    NASA Technical Reports Server (NTRS)

    Welch, R.

    1985-01-01

    In late 1985, the SPOT (Systeme Probatoire d'Observation de la Terre) satellite is to be launched by the Ariane rocket from French Guiana. This satellite will have two High Resolution Visible (HRV) line array sensor systems which are capable of providing monoscopic and stereoscopic coverage of the earth. Cartographic applications are related to the recording of stereo image data and the acquisition of 20-m data in a multispectral mode. One of the objectives of this study involves a comparison of the suitability of SPOT and TM image data for mapping urban land use/cover. Another objective is concerned with a preliminary assessment of the potential of SPOT image data for map revision when merged with conventional map sheets converted to raster formats.

  19. Autofocus method for automated microscopy using embedded GPUs.

    PubMed

    Castillo-Secilla, J M; Saval-Calvo, M; Medina-Valdès, L; Cuenca-Asensi, S; Martínez-Álvarez, A; Sánchez, C; Cristóbal, G

    2017-03-01

    In this paper we present a method for autofocusing images of sputum smears taken from a microscope, which combines the finding of the optimal focus distance with an algorithm for extending the depth of field (EDoF). Our multifocus fusion method produces a unique image where all the relevant objects of the analyzed scene are well focused, independently of their distance to the sensor. This process is computationally expensive, which makes its automation unfeasible using traditional embedded processors. For this purpose, a low-cost optimized implementation is proposed using the limited resources of an embedded GPU integrated on a cutting-edge NVIDIA system on chip. The extensive tests performed on different sputum smear image sets show the real-time capabilities of our implementation while maintaining the quality of the output image.
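
    Both halves of the method above, the focus search and the extended depth of field fusion, hinge on a local sharpness measure. The sketch below uses variance of the Laplacian for the autofocus score and a per-pixel sharpest-slice rule for the multifocus fusion; both are common choices but assumptions here, not necessarily the paper's operators.

        import cv2
        import numpy as np

        def focus_score(gray):
            """Autofocus metric: variance of the Laplacian."""
            return cv2.Laplacian(gray, cv2.CV_64F).var()

        def fuse_stack(stack):
            """EDoF fusion: at each pixel keep the z-stack slice with the
            highest smoothed local sharpness."""
            sharp = [cv2.GaussianBlur(np.abs(cv2.Laplacian(s, cv2.CV_64F)),
                                      (9, 9), 0) for s in stack]
            best = np.argmax(np.stack(sharp), axis=0)
            rows, cols = np.indices(best.shape)
            return np.stack(stack)[best, rows, cols]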

  20. A Micro-UAV with the Capability of Direct Georeferencing

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Mabillard, R.; Skaloud, J.

    2013-08-01

    This paper presents the development of a low-cost UAV (Unmanned Aerial Vehicle) with the capability of direct georeferencing. The advantage of such a system lies in its high maneuverability and operational flexibility, as well as its capability to acquire image data without the need to establish ground control points (GCPs). Moreover, precise georeferencing offers an improvement in the final mapping accuracy when employing integrated sensor orientation. Such a mode of operation limits the number and distribution of GCPs, which in turn saves time in their signalization and surveying. Although UAV systems feature high flexibility and the capability of flying into areas that are inhospitable or inaccessible to humans, the lack of precision in on-board position and attitude estimation decreases the value of the captured imagery and limits their mode of operation to specific configurations and the need for ground reference. Within the scope of this study we show the potential of present technologies in the field of position and orientation determination on a small UAV. The hardware implementation, and especially the non-trivial synchronization of all components, is clarified. Thanks to the implementation of a multi-frequency, low-power GNSS receiver and its coupling with a redundant MEMS-IMU, we can attain the characteristics of much larger systems flown on large carriers while keeping the sensor size and weight suitable for MAV operations.
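
    Direct georeferencing, as described above, replaces GCPs with the onboard GNSS/IMU solution: each image ray is rotated into the mapping frame and intersected with the terrain. The sketch below does this for a flat ground plane and omits the boresight and lever-arm corrections that a real system calibrates; the sign conventions and parameter names are assumptions.

        import numpy as np

        def georeference(pixel_xy, principal_pt, focal_px, cam_pos, R_nav, z_ground):
            """Ground coordinates of one pixel. R_nav rotates camera axes
            into the local mapping frame (from the GNSS/IMU solution);
            the camera looks along -z in its own frame."""
            ray_cam = np.array([pixel_xy[0] - principal_pt[0],
                                pixel_xy[1] - principal_pt[1],
                                -focal_px])
            ray = R_nav @ ray_cam
            s = (z_ground - cam_pos[2]) / ray[2]   # scale to the ground plane
            return np.asarray(cam_pos) + s * ray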

  1. The EO-1 hyperion and advanced land imager sensors for use in tundra classification studies within the Upper Kuparuk River Basin, Alaska

    NASA Astrophysics Data System (ADS)

    Hall-Brown, Mary

    The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium to small resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using high radiometric and spatial resolution imagery, such as that from the SPOT 5 and IKONOS satellites, has helped arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price: high resolution imagery is very expensive and can often add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolutions and can be an acceptable compromise in the resolution versus cost trade-off. The Hyperion is a hyperspectral sensor with the capability of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of arctic land cover classifications produced by the Hyperion and ALI sensors to the classification accuracies produced by the Système Pour l'Observation de la Terre (SPOT), Landsat Thematic Mapper (TM), and Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included the stepwise discriminant analysis of pixels that were positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments on the satellite-derived land cover classifications. Accuracy was estimated from an error matrix (confusion matrix) that provided the overall, producer's, and user's accuracies. This research found that while the Hyperion sensor produced classification accuracies that were equivalent to the TM and ETM+ sensors (approximately 78%), the Hyperion could not match the accuracy of the SPOT 5 HRV sensor. However, the land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, the uninterrupted monitoring of remote locations throughout the world, such as the Arctic, is in jeopardy. Utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished, and keeping the Hyperion as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage.
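    For concreteness, the accuracy measures reported from an error (confusion) matrix can be computed as in the sketch below; the row/column convention is an assumption stated in the docstring.

    ```python
    # Overall, producer's, and user's accuracy from an error (confusion) matrix.
    import numpy as np

    def accuracies(cm):
        """cm[i, j] = pixels of reference class i assigned to map class j."""
        cm = np.asarray(cm, dtype=float)
        overall = np.trace(cm) / cm.sum()
        producers = np.diag(cm) / cm.sum(axis=1)   # complements omission error
        users = np.diag(cm) / cm.sum(axis=0)       # complements commission error
        return overall, producers, users
    ```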

  2. Detecting fiducials affected by trombone delay in ARC and the main laser alignment at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul A. S.; Bliss, Erlan S.; Miller Kamm, Victoria; Leach, Richard R.; Roberts, Randy; Rushford, Michael C.; Lowe-Webb, Roger; Wilhelmsen, Karl

    2015-09-01

    Four of the 192 beams of the National Ignition Facility (NIF) are currently being diverted into the Advanced Radiographic Capability (ARC) system to generate a sequence of short (1-50 picosecond) 1053 nm laser pulses. When focused onto high-Z wires in vacuum, these pulses create high-energy x-ray pulses capable of penetrating the dense, imploding fusion fuel plasma during ignition-scale experiments. The transmitted x-rays, imaged with x-ray diagnostics, can create movie radiographs that are expected to provide unprecedented insight into the implosion dynamics. The resulting images will serve as a diagnostic for tuning the experimental parameters towards successful fusion reactions. Beam delays introduced into the ARC pulses via independent, free-space optical trombones create the desired x-ray image sequence, or movie. However, these beam delays cause optical distortion of various alignment fiducials viewed by alignment sensors in the NIF and ARC beamlines. This work describes how the position of circular alignment fiducials is estimated in the presence of distortion.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheuermann, J; Howansky, A; Goldan, A

    Purpose: We present the first active matrix flat panel imager (AMFPI) capable of producing x-ray quantum noise limited images at low doses by overcoming the electronic noise through signal amplification by photoconductive avalanche gain (gav). The indirect detector fabricated uses an optical sensing layer of amorphous selenium (a-Se) known as High-Gain Avalanche Rushing Photoconductor (HARP). The detector design is called Scintillator HARP (SHARP)-AMFPI. This is the first image sensor to utilize solid-state HARP technology. Methods: The detector's electronic readout is a 24 × 30 cm² array of thin film transistors (TFTs) with a pixel pitch of 85 µm. The HARP structure consists of a 15 µm layer of a-Se isolated from the high voltage (HV) and signal electrodes by a 2 µm thick hole blocking layer and electron blocking layer, respectively, to reduce dark current. A 150 µm thick structured CsI scintillator with reflective backing and a fiber optic faceplate (FOP) was coupled to the semi-transparent HV bias electrode of the HARP structure. Images were acquired using a 30 kVp Mo/Mo spectrum typically used in mammography. Results: Optical sensitivity measurements demonstrate that gav = 76 ± 5 can be achieved over the entire active area of the detector. At a constant detector dose of 6.67 µGy, image quality increases with gav until the effective electronic noise is negligible. Quantum noise limited images can be obtained with doses as low as 0.18 µGy. Conclusion: We demonstrate the feasibility of utilizing avalanche gain to overcome electronic noise. The indirect detector fabricated is the first solid-state imaging sensor to use HARP, and the largest active area HARP sensor to date. Our future work is to improve charge transport within the HARP structure and utilize a transparent HV electrode.
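    As a back-of-the-envelope illustration of why avalanche gain overcomes electronic noise (not the paper's cascaded-systems analysis), the sketch below compares SNR with and without gain; the photon counts, conversion factor, and noise figures are invented for illustration.

    ```python
    # Toy SNR model: avalanche gain g_av multiplies the signal and its quantum
    # noise, but not the additive electronic readout noise sigma_e, so SNR
    # approaches the quantum-limited sqrt(n_xrays) as g_av grows.
    # All numbers below are illustrative, not taken from the paper.
    import numpy as np

    def snr(n_xrays, k_electrons, g_av, sigma_e):
        signal = n_xrays * k_electrons * g_av
        quantum_noise = np.sqrt(n_xrays) * k_electrons * g_av
        return signal / np.sqrt(quantum_noise**2 + sigma_e**2)

    for g in (1, 10, 76):
        print(g, snr(n_xrays=50, k_electrons=30, g_av=g, sigma_e=2000.0))
    # SNR climbs from ~0.7 toward the quantum limit sqrt(50) ~ 7.1
    ```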

  4. Flash LIDAR Systems for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.

    2009-01-01

    Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi-mission applications include orbital, standoff, and surface terrain mapping; long-distance and rapid close-in ranging; descent and surface navigation; and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, real-time processing in the readout integrated circuit, and compact, efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space-qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.

  5. Promise and Capability of NASA's Earth Observing System to Monitor Human-Induced Climate Variations

    NASA Technical Reports Server (NTRS)

    King, M. D.

    2003-01-01

    The Earth Observing System (EOS) is a space-based observing system comprising a series of satellite sensors by which scientists can monitor the Earth, a Data and Information System (EOSDIS) enabling researchers worldwide to access the satellite data, and an interdisciplinary science research program to interpret the satellite data. The Moderate Resolution Imaging Spectroradiometer (MODIS), developed as part of EOS and launched on Terra in December 1999 and Aqua in May 2002, is designed to meet the scientific needs for satellite remote sensing of clouds, aerosols, water vapor, and land and ocean surface properties. This sensor and multi-platform observing system is especially well suited to observing detailed interdisciplinary components of the Earth's surface and atmosphere in and around urban environments, including aerosol optical properties, cloud optical and microphysical properties of both liquid water and ice clouds, land surface reflectance, fire occurrence, and many other properties that influence the urban environment and are influenced by it. In this presentation I will summarize the current capabilities of MODIS and other EOS sensors currently in orbit to study human-induced climate variations.

  6. A Review of Significant Advances in Neutron Imaging from Conception to the Present

    NASA Astrophysics Data System (ADS)

    Brenizer, J. S.

    This review summarizes the history of neutron imaging with a focus on the significant events and technical advancements in neutron imaging methods, from the first radiograph to more recent imaging methods. A timeline is presented to illustrate the key accomplishments that advanced the neutron imaging technique. Only three years after the discovery of the neutron by English physicist James Chadwick in 1932, neutron imaging began with the work of Hartmut Kallmann and Ernst Kuhn in Berlin, Germany, from 1935-1944. Kallmann and Kuhn were awarded a joint US Patent issued in January 1940. Little progress was made until the mid-1950s, when Thewlis utilized a neutron beam from the BEPO reactor at Harwell, marking the beginning of the application of neutron imaging to practical problems. As the film method was improved, imaging moved from a qualitative to a quantitative technique, with applications in industry and in nuclear fuels. Standards were developed to aid in the quantification of the neutron images and the facility's capabilities. The introduction of dynamic neutron imaging (initially called real-time neutron radiography and neutron television) in the late 1970s opened the door to new opportunities and new challenges. As electronic imaging matured, the introduction of CCD imaging devices and solid-state light intensifiers helped address some of these challenges. Development of improved imaging devices for the medical community has had a major impact on neutron imaging. Additionally, amorphous silicon sensors provided improvements in temporal resolution while providing a reasonably large imaging area. The development of new neutron imaging sensors and new neutron imaging techniques in the past decade has advanced the technique's ability to provide insight and understanding of problems that other non-destructive techniques could not provide. This rapid increase in capability and application would not have been possible without the advances in computer processing speed and increased memory storage. For example, images with enhanced contrast are created by using the reflection, refraction, diffraction, and ultra-small-angle scattering interactions. It is somewhat ironic that, like the first development of neutron images, the technique remains limited by the availability of high-intensity neutron sources, both in facility cost and portability.

  7. Sub-Nanoliter Spectroscopic Gas Sensor

    PubMed Central

    Alfeeli, Bassam; Pickrell, Gary; Wang, Anbo

    2006-01-01

    In this work, a new type of optical fiber based chemical sensor, the sub-nanoliter sample cell (SNSC) based gas sensor, is described and compared to existing sensor designs in the literature. This novel SNSC gas sensor is shown to be capable of gas detection with a cell volume in the sub-nanoliter range. Experimental results for various configurations of the sensor design are presented which demonstrate the capabilities of the miniature gas sensor.

  8. Proton-counting radiography for proton therapy: a proof of principle using CMOS APS technology

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Allinson, N. M.; Anaxagoras, T.; Esposito, M.; Green, S.; Manolopoulos, S.; Nieto-Camero, J.; Parker, D. J.; Price, T.; Evans, P. M.

    2014-06-01

    Despite the early recognition of the potential of proton imaging to assist proton therapy (Cormack 1963 J. Appl. Phys. 34 2722), the modality is still removed from clinical practice, with various approaches in development. For proton-counting radiography applications such as computed tomography (CT), the water-equivalent-path-length that each proton has travelled through an imaged object must be inferred. Typically, scintillator-based technology has been used in various energy/range telescope designs. Here we propose a very different alternative of using radiation-hard CMOS active pixel sensor technology. The ability of such a sensor to resolve the passage of individual protons in a therapy beam has not been previously shown. Here, such capability is demonstrated using a 36 MeV cyclotron beam (University of Birmingham Cyclotron, Birmingham, UK) and a 200 MeV clinical radiotherapy beam (iThemba LABS, Cape Town, SA). The feasibility of tracking individual protons through multiple CMOS layers is also demonstrated using a two-layer stack of sensors. The chief advantages of this solution are the spatial discrimination of events intrinsic to pixelated sensors, combined with the potential provision of information on both the range and residual energy of a proton. The challenges in developing a practical system are discussed.

  9. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed from sets of ground control points in 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
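    A minimal sketch of the lookup idea follows: a ToF pixel is warped into RGB coordinates with the homography whose working distance is nearest the pixel's measured depth. The identity-matrix table entries are placeholders; in the paper each entry is estimated from ground control points at a known distance.

    ```python
    # Depth-dependent homography lookup: pick the table entry nearest the
    # measured depth, then apply a standard projective warp.
    import numpy as np

    # Hlut: list of (working_distance_in_m, 3x3 homography) pairs (placeholders)
    Hlut = [(0.5, np.eye(3)), (1.0, np.eye(3)), (2.0, np.eye(3))]

    def register(u, v, depth, hlut):
        """Warp ToF pixel (u, v) at the given depth into RGB coordinates."""
        dists = np.array([d for d, _ in hlut])
        H = hlut[int(np.argmin(np.abs(dists - depth)))][1]  # nearest-depth entry
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]                                 # dehomogenize
    ```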

  10. Proton-counting radiography for proton therapy: a proof of principle using CMOS APS technology

    PubMed Central

    Poludniowski, G; Allinson, N M; Anaxagoras, T; Esposito, M; Green, S; Manolopoulos, S; Nieto-Camero, J; Parker, D J; Price, T; Evans, P M

    2014-01-01

    Despite the early recognition of the potential of proton imaging to assist proton therapy the modality is still removed from clinical practice, with various approaches in development. For proton-counting radiography applications such as Computed Tomography (CT), the Water-Equivalent-Path-Length (WEPL) that each proton has travelled through an imaged object must be inferred. Typically, scintillator-based technology has been used in various energy/range telescope designs. Here we propose a very different alternative of using radiation-hard CMOS Active Pixel Sensor (APS) technology. The ability of such a sensor to resolve the passage of individual protons in a therapy beam has not been previously shown. Here, such capability is demonstrated using a 36 MeV cyclotron beam (University of Birmingham Cyclotron, Birmingham, UK) and a 200 MeV clinical radiotherapy beam (iThemba LABS, Cape Town, SA). The feasibility of tracking individual protons through multiple CMOS layers is also demonstrated using a two-layer stack of sensors. The chief advantages of this solution are the spatial discrimination of events intrinsic to pixelated sensors, combined with the potential provision of information on both the range and residual energy of a proton. The challenges in developing a practical system are discussed. PMID:24785680

  11. Thermal Image Sensing Model for Robotic Planning and Search.

    PubMed

    Castro Jiménez, Lídice E; Martínez-García, Edgar A

    2016-08-08

    This work presents a search planning system for a rolling robot to find a source of infra-red (IR) radiation at an unknown location. Heat emissions are observed by a low-cost, home-made IR passive visual sensor. The sensor's capability for detection of radiation spectra was experimentally characterized. The sensor data were modeled by an exponential model to estimate distance as a function of the IR image's intensity, and by a polynomial model to estimate temperature as a function of IR intensities. Both theoretical models are combined to deduce an exact, subtly nonlinear distance-temperature solution. A planning system obtains feedback from the IR camera (position, intensity, and temperature) to lead the robot to the heat source. The planner is a system of nonlinear equations recursively solved by a Newton-based approach to estimate the IR source in global coordinates. The planning system assists an autonomous navigation controller in order to reach the goal and avoid collisions. Trigonometric partial differential equations were established to control the robot's course towards the heat emission. A sine function produces attractive accelerations toward the IR source. A cosine function produces repulsive accelerations away from the obstacles observed by an RGB-D sensor. Simulations and real experiments in complex indoor environments are presented to illustrate the convenience and efficacy of the proposed approach.
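    The two sensor models and the attract/repel guidance terms can be sketched as below; every coefficient and gain is a placeholder, since the paper fits them experimentally.

    ```python
    # Sketch of the exponential distance model, polynomial temperature model,
    # and the sine (attractive) / cosine (repulsive) guidance terms.
    import numpy as np

    def distance_from_intensity(I, a=5.0, b=0.01):
        return a * np.exp(-b * I)                   # brighter IR blob -> closer source

    def temperature_from_intensity(I, c=(20.0, 0.3, 1e-3)):
        return c[0] + c[1] * I + c[2] * I**2        # polynomial fit of temperature

    def acceleration(theta_goal, theta_obst, k_a=1.0, k_r=1.5):
        # sine term attracts toward the IR source bearing; cosine term repels
        # from the obstacle bearing observed by the RGB-D sensor
        return k_a * np.sin(theta_goal) - k_r * np.cos(theta_obst)
    ```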

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Jesus

    Intelligent Optical Systems, Inc. has developed distributed intrinsic fiber optic sensors to directly quantify the concentration of dissolved or gas-phase CO2 for leak detection or plume migration in carbon capture and sequestration (CCS). The capability of the sensor for highly sensitive detection of CO2 in the pressure and temperature range of 15 to 2,000 psi and 25°C to 175°C was demonstrated, as was the capability of operating in highly corrosive and contaminated environments such as those often found in CO2 injection sites. The novel sensor system was for the first time demonstrated deployed in a deep well, detecting multiple CO2 releases, in real time, at varying depths. Early CO2 release detection, by means of a sensor cable integrating multiple sensor segments, was demonstrated, as was the capability of quantifying the leak. The novel fiber optic sensor system exhibits capabilities not achieved by any other monitoring technology. This project represents a breakthrough in monitoring capabilities for CCS applications.

  13. Bioinspired Engineering of Exploration Systems (BEES) - its Impact on Future Missions

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Hine, Butler; Zornetzer, Steve

    2004-01-01

    This paper presents an overview of our "Bioinspired Engineering of Exploration Systems for Mars" ("BEES for Mars") project. The BEES approach distills selected biologically inspired strategies utilizing motion cues/optic flow, bioinspired pattern recognition, biological visual and neural control systems, bioinspired sensing and communication techniques, and birds-of-prey-inspired search and track algorithmic systems. The unique capabilities so enabled provide potential solutions to future autonomous robotic space and planetary mission applications. With the first series of tests performed in September 2003, August 2004, and September 2004, we have demonstrated the BEES technologies at the El Mirage Dry Lakebed site in the Mojave Desert using delta-wing experimental prototypes. We call these test flyers "BEES flyers", since we are developing them as a dedicated test platform for the newly developed bioinspired sensors, processors, and algorithmic strategies. The delta wing offers a robust airframe that can sustain high-G launches and offers ease of compact stowability and packaging, along with scaling to small size and low-Reynolds-number performance for a potential Mars deployment. Our approach to developing lightweight, low-power autonomous flight systems using concepts distilled from biology promises to enable new applications of dual use to NASA and DoD needs. Small in size (0.5-5 kg), BEES flyers are demonstrating capabilities for autonomous flight and sensor operability in Mars analog conditions. The BEES project team spans JPL, NASA Ames, Australian National University (ANU), Brigham Young University (BYU), UC Berkeley, Analogic Computers Inc., and other institutions. Highlights from our recent flight demonstrations exhibiting new mission-enabling capabilities are described. Further, this paper describes two classes of potential new missions for Mars exploration that can be enabled by BEES flyers: (1) long-range exploration missions, and (2) observation missions for real-time imaging of critical ephemeral phenomena. For example, such flyers can serve as a powerful black box for critical descent and landing data, and as enablers for improved science missions, complementing and supplementing existing assets like landers and rovers by providing valuable exploration and quick, extended low-altitude aerial coverage of sites of interest, imaging them and distributing instruments to them. Imaging done by orbiters allows broad surface coverage at limited spatial resolution. Low-altitude airborne exploration of Mars offers a means for imaging large areas, perhaps up to several hundred kilometers, quickly and efficiently, providing a close-up bird's-eye view of the planetary terrain and close-up access to constrained, difficult areas like canyons and craters. A novel approach to low-mass yet highly capable flyers is enabled by small aircraft equipped with sensors, processors, and algorithms developed using BEES technology. This project is focused on showing the direct impact of blending the best attributes of artificial intelligence and bioinspiration to create a leap beyond existing capability for our future missions.

  14. AGSM Intelligent Devices/Smart Sensors Project

    NASA Technical Reports Server (NTRS)

    Harp, Janicce Leshay

    2014-01-01

    This project provides development and qualification of Smart Sensors capable of self-diagnosis and assessment of their capability/readiness to support operations. These sensors will provide pressure and temperature measurements to use in ground systems.

  15. Combat Vehicle Command and Control System Architecture Overview

    DTIC Science & Technology

    1994-10-01

    inserted in the software. • Interactive interface displays and controls were prepared using rapidly prototyped software and were retained at the MWTB for... being simulated • controls, sensor displays, and out-the-window displays for the crew • computer image generators (CIGs) for out-the-window and... black hot viewing modes. The commander may access a number of capabilities of the CITV simulation, described below, from controls located around the

  16. Simulated NASA Satellite Data Products for the NOAA Integrated Coral Reef Observation Network/Coral Reef Early Warning System

    NASA Technical Reports Server (NTRS)

    Estep, Leland; Spruce, Joseph P.

    2007-01-01

    This RPC (Rapid Prototyping Capability) experiment will demonstrate the use of VIIRS (Visible/Infrared Imager/Radiometer Suite) and LDCM (Landsat Data Continuity Mission) sensor data as significant input to the NOAA (National Oceanic and Atmospheric Administration) ICON/ CREWS (Integrated Coral Reef Observation System/Coral Reef Early Warning System). The project affects the Coastal Management Program Element of the Applied Sciences Program.

  17. Improving Air Force Imagery Reconnaissance Support to Ground Commanders.

    DTIC Science & Technology

    1983-06-03

    reconnaissance support in Southeast Asia due to the long response times of film recovery and processing capabilities and inadequate command and control... reconnaissance is an integral part of the C3I information explosion. Traditional silver halide film products, chemically processed and manually distributed, are... being replaced with electronic near-real-time (NRT) imaging sensors. The term "imagery" now includes not only conventional film-based products (black

  18. Advanced electro-mechanical micro-shutters for thermal infrared night vision imaging and targeting systems

    NASA Astrophysics Data System (ADS)

    Durfee, David; Johnson, Walter; McLeod, Scott

    2007-04-01

    Uncooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapon sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter capable of fitting into a rifle scope with these capabilities requires innovations in mechanical design, materials science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.

  19. Advanced illumination control algorithm for medical endoscopy applications

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.

    2015-05-01

    CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small-size endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic, surgical-robotic, or micro-invasive surgery.
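    The control idea can be sketched in a few lines (the actual core is VHDL; this Python rendering, its thresholds, and its gain are illustrative assumptions): the LED drive is nudged until the fraction of near-saturated ROI pixels falls inside a target band.

    ```python
    # ROI-based illumination control sketch, assuming an 8-bit sensor.
    import numpy as np

    def update_led_power(roi, power, target=0.02, tol=0.01, gain=0.5, sat=250):
        frac_sat = np.mean(roi >= sat)                  # share of near-saturated pixels
        if frac_sat > target + tol:
            power *= 1.0 - gain * (frac_sat - target)   # too bright: dim the LED
        elif frac_sat < target - tol:
            power *= 1.0 + gain * (target - frac_sat)   # too dark: brighten
        return float(np.clip(power, 0.0, 1.0))
    ```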

  20. The Quickest, Lowest-cost Lunar Resource Assessment Program: Integrated High-tech Earth-based Astronomy

    NASA Technical Reports Server (NTRS)

    Pieters, Carle M.

    1992-01-01

    Science and technology applications for the Moon have not fully kept pace with technical advancements in sensor development and analytical information extraction capabilities. Appropriate unanswered questions for the Moon abound, but until recently there has been little motivation to link sophisticated technical capabilities with specific measurement and analysis projects. Over the last decade enormous technical progress has been made in the development of (1) CCD photometric array detectors; (2) visible to near-infrared imaging spectrometers; (3) infrared spectroscopy; (4) high-resolution dual-polarization radar imaging at 3.5, 12, and 70 cm; and, equally important, (5) data analysis and information extraction techniques using compact powerful computers. Parts of each of these have been tested separately, but there has been no programmatic effort to develop and optimize instruments to meet lunar science and resource assessment needs (e.g., specific wavelength range, resolution, etc.), nor to coordinate activities so that the symbiotic relation between different kinds of data can be fully realized. No single type of remotely acquired data completely characterizes the lunar environment, but there has been little opportunity for integration of diverse advanced sensor data for the Moon. Two examples of technology concepts for lunar measurements are given. Using VIS/near-IR spectroscopy, the mineral composition of surface material can be derived from visible and near-infrared radiation reflected from the surface. The surface and subsurface scattering properties of the Moon can be analyzed using radar backscatter imaging.

  1. NASA COAST and OCEANIA Airborne Missions in Support of Ecosystem and Water Quality Research in the Coastal Zone

    NASA Technical Reports Server (NTRS)

    Guild, Liane S.; Hooker, Stanford B.; Kudela, Raphael; Morrow, John; Russell, Philip; Myers, Jeffrey; Dunagan, Stephen; Palacios, Sherry; Livingston, John; Negrey, Kendra

    2015-01-01

    Worldwide, coastal marine ecosystems are exposed to land-based sources of pollution and sedimentation from anthropogenic activities including agriculture and coastal development. Ocean color products from satellite sensors provide information on chlorophyll (phytoplankton pigment), sediments, and colored dissolved organic material. Further, ship-based in-water measurements and emerging airborne measurements provide in situ data for the vicarious calibration of current and next generation satellite ocean color sensors and to validate the algorithms that use the remotely sensed observations. Recent NASA airborne missions over Monterey Bay, CA, have demonstrated novel above- and in-water measurement capabilities supporting a combined airborne sensor approach (imaging spectrometer, microradiometers, and a sun photometer). The results characterize coastal atmospheric and aquatic properties through an end-to-end assessment of image acquisition, atmospheric correction, algorithm application, plus sea-truth observations from state-of-the-art instrument systems. The primary goal of the airborne missions was to demonstrate the following in support of calibration and validation exercises for satellite coastal ocean color products: 1) the utility of a multi-sensor airborne instrument suite to assess the bio-optical properties of coastal California, including water quality; and 2) the importance of contemporaneous atmospheric measurements to improve atmospheric correction in the coastal zone. Utilizing an imaging spectrometer optimized in the blue to green spectral domain enables higher signal for detection of the relatively dark radiance measurements from marine and freshwater ecosystem features. The novel airborne instrument, Coastal Airborne In-situ Radiometers (C-AIR) provides measurements of apparent optical properties with high dynamic range and fidelity for deriving exact water leaving radiances at the land-ocean boundary, including radiometrically shallow aquatic ecosystems. Simultaneous measurements supporting empirical atmospheric correction of image data were accomplished using the Ames Airborne Tracking Sunphotometer (AATS-14). Flight operations are presented for the instrument payloads using the CIRPAS Twin Otter flown over Monterey Bay during the seasonal fall algal bloom in 2011 (COAST) and 2013 (OCEANIA) to support bio-optical measurements of phytoplankton for coastal zone research. Further, this airborne capability can be responsive to first flush rain events that deliver higher concentrations of sediments and pollution to coastal waters via watersheds and overland flow.

  2. Performance Evaluation Modeling of Network Sensors

    NASA Technical Reports Server (NTRS)

    Clare, Loren P.; Jennings, Esther H.; Gao, Jay L.

    2003-01-01

    Substantial benefits are promised by operating many spatially separated sensors collectively. Such systems are envisioned to consist of sensor nodes connected by a communications network. A simulation tool is being developed to evaluate the performance of networked sensor systems, incorporating such metrics as target detection probabilities, false alarm rates, and classification confusion probabilities. The tool will be used to determine configuration impacts associated with such aspects as spatial laydown, mixture of different types of sensors (acoustic, seismic, imaging, magnetic, RF, etc.), and fusion architecture. The QualNet discrete-event simulation environment serves as the underlying basis for model development and execution. This platform is recognized for its capabilities in efficiently simulating networking among mobile entities that communicate via wireless media. We are extending QualNet's communications modeling constructs to capture the sensing aspects of multi-target sensing (analogous to multiple-access communications), unimodal multi-sensing (broadcast), and multi-modal sensing (multiple channels and correlated transmissions). Methods are also being developed for modeling the sensor signal sources (transmitters), signal propagation through the media, and sensors (receivers) in a manner consistent with the discrete-event paradigm needed for performance determination of sensor network systems. This work is supported under the Microsensors Technical Area of the Army Research Laboratory (ARL) Advanced Sensors Collaborative Technology Alliance.
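    A toy discrete-event sketch in the spirit of such a tool follows; the event queue, sampling period, and range-dependent detection probability are all illustrative inventions, and QualNet's actual APIs are not shown.

    ```python
    # Event-driven detection simulation: each node schedules periodic "sense"
    # events; a detection fires with probability falling off with range.
    import heapq, math, random

    def simulate(sensors, target, t_end=60.0, period=1.0, p0=0.95, r0=100.0):
        queue = [(0.0, i) for i in range(len(sensors))]   # (time, node_id) events
        heapq.heapify(queue)
        detections = []
        while queue:
            t, i = heapq.heappop(queue)
            if t >= t_end:
                continue                                  # drop events past horizon
            r = math.dist(sensors[i], target)
            p = p0 * min(1.0, (r0 / max(r, 1.0)) ** 2)    # range-dependent probability
            if random.random() < p:
                detections.append((t, i))
            heapq.heappush(queue, (t + period, i))        # schedule next sense event
        return detections
    ```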

  3. Enhanced modeling and simulation of EO/IR sensor systems

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Miller, Brian; May, Christopher

    2015-05-01

    The testing and evaluation process developed by the Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) provides end-to-end systems evaluation, testing, and training of EO/IR sensors. By combining NV-LabCap, the Night Vision Integrated Performance Model (NV-IPM), One Semi-Automated Forces (OneSAF) input sensor file generation, and the Night Vision Image Generator (NVIG) capabilities, NVESD provides confidence to the M&S community that EO/IR sensor developmental and operational testing and evaluation are accurately represented throughout the lifecycle of an EO/IR system. This new process allows for both theoretical and actual sensor testing. A sensor can be theoretically designed and modeled in NV-IPM, and then seamlessly input into wargames for operational analysis. After theoretical design, prototype sensors can be measured using NV-LabCap, then modeled in NV-IPM and input into wargames for further evaluation. This measurement-to-high-fidelity modeling and simulation process can then be repeated throughout the entire life cycle of an EO/IR sensor as needed, including LRIP, full-rate production, and even after depot-level maintenance. This is a prototypical example of how an engineering-level model and higher-level simulations can share models to mutual benefit.

  4. Chemically engineered persistent luminescence nanoprobes for bioimaging

    PubMed Central

    Lécuyer, Thomas; Teston, Eliott; Ramirez-Garcia, Gonzalo; Maldiney, Thomas; Viana, Bruno; Seguin, Johanne; Mignet, Nathalie; Scherman, Daniel; Richard, Cyrille

    2016-01-01

    Imaging nanoprobes are a group of nanosized agents developed to provide improved contrast for bioimaging. Among various imaging probes, optical sensors capable of following biological events or processes at the cellular and molecular levels are being actively developed for early detection, accurate diagnosis, and monitoring of the treatment of diseases. The optical activities of nanoprobes can be tuned on demand by chemists by engineering their composition, size, and surface nature. This review will focus on research devoted to the conception of nanoprobes with a particular optical property, called persistent luminescence, and their use as new powerful bioimaging agents in preclinical assays. PMID:27877248

  5. Parallel evolution of image processing tools for multispectral imagery

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-11-01

    We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process across multiple processors (a workstation cluster) and develop a model for predicting run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI covering the recent Cerro Grande fire at Los Alamos, NM, USA.
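    The parallel part of such a system can be sketched as below: fitness evaluation of a tool population is farmed out to worker processes. The fitness function, genome encoding, and mutation scheme are stand-ins, not the authors' operators.

    ```python
    # Parallel fitness evaluation in a toy evolutionary loop.
    from multiprocessing import Pool
    import random

    def fitness(genome):
        # stand-in: score a candidate pipeline encoded as a parameter vector
        return -sum((g - 0.5) ** 2 for g in genome)

    def evolve(pop_size=32, genes=8, generations=20, workers=4):
        pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
        with Pool(workers) as pool:
            for _ in range(generations):
                scores = pool.map(fitness, pop)          # evaluated in parallel
                ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
                elite = ranked[: pop_size // 2]          # keep the best half
                pop = elite + [[g + random.gauss(0, 0.05) for g in p]  # mutate
                               for p in elite]
        return ranked[0]

    if __name__ == "__main__":   # guard required for multiprocessing on some OSes
        best = evolve()
    ```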

  6. Polarizing aperture stereoscopic cinema camera

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    2012-03-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.

  7. Polarizing aperture stereoscopic cinema camera

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  8. Visual attitude propagation for small satellites

    NASA Astrophysics Data System (ADS)

    Rawashdeh, Samir A.

    As electronics become smaller and more capable, it has become possible to conduct meaningful and sophisticated satellite missions in a small form factor. However, the capability of small satellites and the range of possible applications are limited by the capabilities of several technologies, including attitude determination and control systems. This dissertation evaluates the use of image-based visual attitude propagation as a complement or alternative to other attitude determination technologies that are suitable for miniature satellites. The concept lies in using miniature cameras to track image features across frames and extracting the underlying rotation. The problem of visual attitude propagation as a small satellite attitude determination system is addressed from several aspects: related work, algorithm design, hardware and performance evaluation, possible applications, and on-orbit experimentation. These areas of consideration reflect the organization of this dissertation. A "stellar gyroscope" is developed, which is a visual star-based attitude propagator that uses the relative motion of stars in an imager's field of view to infer attitude changes. The device generates spacecraft relative attitude estimates in three degrees of freedom. Algorithms to perform the star detection, correspondence, and attitude propagation are presented. The Random Sample Consensus (RANSAC) approach is applied to the correspondence problem to successfully pair stars across frames while mitigating false-positive and false-negative star detections. This approach provides tolerance to the noise levels expected when using miniature optics without baffling, and to the noise caused by radiation dose on orbit. The hardware design and algorithms are validated using test images of the night sky. The application of the stellar gyroscope as part of a CubeSat attitude determination and control system is described. The stellar gyroscope is used to augment a MEMS gyroscope attitude propagation algorithm to minimize drift in the absence of an absolute attitude sensor. The stellar gyroscope is a technology demonstration experiment on KySat-2, a 1-Unit CubeSat being developed in Kentucky that is in line to launch with the NASA ELaNa CubeSat Launch Initiative. It has also been adopted by industry as a sensor for CubeSat Attitude Determination and Control Systems (ADCS). KEYWORDS: Small Satellites, Attitude Determination, Egomotion Estimation, RANSAC, Image Processing.
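    A minimal sketch of the correspondence/propagation core follows, under the assumption that detected stars are unit direction vectors in camera coordinates: RANSAC draws minimal star pairs, a Kabsch fit recovers the inter-frame rotation, and the consensus set rejects spurious detections. This is a generic rendering, not the dissertation's exact algorithm.

    ```python
    # RANSAC + Kabsch rotation estimation between two star frames.
    import numpy as np

    def kabsch(p, q):
        """Best-fit rotation R with R @ p ~ q; p, q are 3xN unit vectors."""
        u, _, vt = np.linalg.svd(p @ q.T)
        d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
        return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

    def ransac_rotation(p, q, iters=200, tol=1e-3, rng=np.random.default_rng()):
        best_r, best_in = np.eye(3), np.zeros(p.shape[1], dtype=bool)
        for _ in range(iters):
            idx = rng.choice(p.shape[1], size=2, replace=False)  # minimal sample
            r = kabsch(p[:, idx], q[:, idx])
            inliers = np.linalg.norm(r @ p - q, axis=0) < tol    # consensus test
            if inliers.sum() > best_in.sum():
                best_r, best_in = kabsch(p[:, inliers], q[:, inliers]), inliers
        return best_r, best_in
    ```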

  9. Use of the moon to support on-orbit sensor calibration for climate change measurements

    USGS Publications Warehouse

    Stone, T.C.; Kieffer, H.H.

    2006-01-01

    Production of reliable climate datasets from multiple observational measurements acquired by remote sensing satellite systems, available now and in the future, places stringent requirements on the stability of sensors and consistency among the instruments and platforms. Detecting trends in environmental parameters measured at solar reflectance wavelengths (0.3 to 2.5 microns) requires on-orbit instrument stability at a level of 1% over a decade. This benchmark can be attained using the Moon as a radiometric reference. The lunar calibration program at the U.S. Geological Survey has an operational model to predict the lunar spectral irradiance with a precision of about 1%, explicitly accounting for the effects of phase, lunar librations, and the lunar surface photometric function. A system for utilization of the Moon by on-orbit instruments has been established. With multiple lunar views taken by a spacecraft instrument, sensor response characterization with sub-percent precision over several years has been achieved. Meteorological satellites in geostationary orbit (GEO) capture the Moon in operational images; applying lunar calibration to GEO visible-channel image archives has the potential to develop a climate record extending decades into the past. The USGS model and system can provide reliable transfer of calibration among instruments that have viewed the Moon as a common source. This capability will be enhanced with improvements to the USGS model's absolute scale. Lunar calibration may prove essential to covering a potential gap in observational capabilities prior to deployment of NPP/NPOESS. A key requirement is that current and future instruments observe the Moon.
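    As a sketch of the trending step such lunar views enable (the USGS irradiance model itself is not reproduced here), the snippet below fits the measured-to-modeled irradiance ratio against time; the data arrays are placeholders supplied by the caller.

    ```python
    # Sensor drift from repeated lunar views: fit the response ratio vs. time.
    import numpy as np

    def drift_per_decade(t_years, measured, modeled):
        ratio = measured / modeled                 # response relative to the Moon
        slope, _ = np.polyfit(t_years, ratio, 1)   # fractional change per year
        return 100.0 * slope * 10.0                # percent per decade
    ```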

  10. Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Lenz, Andreas; Gross, Wolfgang; Perpeet, Dominik; Wuttke, Sebastian; Middelmann, Wolfgang

    2013-10-01

    Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks, such as classification and detection of objects of military relevance, for example camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixels across track, leading to a ground sampling distance below 1 m at typical altitudes. The push-broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high-resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed, and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.

  11. Miniaturized unified imaging system using bio-inspired fluidic lens

    NASA Astrophysics Data System (ADS)

    Tsai, Frank S.; Cho, Sung Hwan; Qiao, Wen; Kim, Nam-Hyong; Lo, Yu-Hwa

    2008-08-01

    Miniaturized imaging systems have become ubiquitous as they are found in an ever-increasing number of devices, such as cellular phones, personal digital assistants, and web cameras. Until now, the design and fabrication methodology of such systems has not been significantly different from that of conventional cameras. The only established method of focusing is varying the lens distance. On the other hand, the variable-shape crystalline lens found in animal eyes offers inspiration for a more natural way of achieving an optical system with high functionality. Learning from the working concepts of optics in the animal kingdom, we developed bio-inspired fluidic lenses for a miniature universal imager with auto-focusing, macro, and super-macro capabilities. Because of the enormous dynamic range of fluidic lenses, the miniature camera can even function as a microscope. To compensate for the image quality difference between central and peripheral vision and the shape difference between a solid-state image sensor and a curved retina, we adopted a hybrid design consisting of fluidic lenses for tunability and fixed lenses for aberration and color dispersion correction. A design for the world's smallest surgical camera with 3X optical zoom capability is also demonstrated using the hybrid lens approach.

  12. Smart CMOS sensor for wideband laser threat detection

    NASA Astrophysics Data System (ADS)

    Schwarze, Craig R.; Sonkusale, Sameer

    2015-09-01

    The proliferation of lasers has led to their widespread use in applications ranging from short-range standoff chemical detection to long-range lidar sensing and target designation, operating across the UV to LWIR spectrum. Recent advances in high energy lasers have renewed the development of laser weapon systems. The ability to measure and assess laser source information is important both to identify a potential threat and to determine safety and the nominal hazard zone (NHZ). Laser detection sensors are required that provide high dynamic range, wide spectral coverage, pulsed and continuous-wave detection, and a large field of view. OPTRA, Inc. and Tufts have developed a custom ROIC smart pixel imaging sensor architecture and wavelength encoding optics for measurement of source wavelength, pulse length, pulse repetition frequency (PRF), irradiance, and angle of arrival. The smart architecture provides dual linear and logarithmic operating modes, yielding more than eight orders of magnitude of signal dynamic range and nanosecond pulse measurement capability, and can be hybridized with an appropriate detector array to provide UV through LWIR laser sensing. Recent advances in sputtering techniques provide the capability for post-processing CMOS dies from the foundry and patterning PbS and PbSe photoconductors directly on the chip, creating a single monolithic sensor array architecture for measuring sources operating from 0.26 to 5.0 microns and 1 mW/cm² to 2 kW/cm².

  13. Simulation of the hyperspectral data from multispectral data using Python programming language

    NASA Astrophysics Data System (ADS)

    Tiwari, Varun; Kumar, Vinay; Pandey, Kamal; Ranade, Rigved; Agarwal, Shefali

    2016-04-01

    Multispectral remote sensing (MRS) sensors have proved their potential for acquiring and retrieving information on Land Use Land Cover (LULC) features over the past few decades. These MRS sensors generally acquire data within a limited number of broad spectral bands, typically 3 to 10. The limited number of bands and broad spectral bandwidth of MRS sensors become a limitation in detailed LULC studies, as they are not capable of distinguishing spectrally similar LULC features. In contrast, the detailed information available in hyperspectral (HRS) data is spectrally overdetermined and able to distinguish spectrally similar materials on the earth's surface. But presently the availability of HRS sensors is limited, because of the requirement for sensitive detectors and large storage capacity, which makes acquisition and processing cumbersome and exorbitant. So there arises a need to utilize available MRS data for detailed LULC studies. The spectral reconstruction approach is one technique used for simulating hyperspectral data from available multispectral data. In the present study, the spectral reconstruction approach is utilized for the simulation of hyperspectral data using EO-1 ALI multispectral data. The technique is implemented using the Python programming language, which is open source in nature and possesses support for advanced image processing libraries and utilities. Overall, 70 bands have been simulated and validated using visual interpretation, statistical, and classification approaches.
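    Since the study is Python-based, a minimal sketch of a spectral-reconstruction style simulation follows: the few broad ALI band values at each pixel are interpolated onto a dense set of narrow band centres. The linear interpolation, the approximate ALI centre wavelengths, and the 10 nm grid are illustrative assumptions, not the paper's exact method.

    ```python
    # Simulate a dense spectrum from sparse multispectral band values.
    import numpy as np

    ali_centres = np.array([443, 482, 565, 660, 790, 868, 1250, 1650, 2215])  # nm
    hyp_centres = np.arange(440, 2220, 10)                                    # nm

    def simulate_hyperspectral(ali_pixel):
        """ali_pixel: reflectances in the ALI bands -> simulated dense spectrum."""
        return np.interp(hyp_centres, ali_centres, ali_pixel)
    ```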

  14. Absolute Calibration of Optical Satellite Sensors Using Libya 4 Pseudo Invariant Calibration Site

    NASA Technical Reports Server (NTRS)

    Mishra, Nischal; Helder, Dennis; Angal, Amit; Choi, Jason; Xiong, Xiaoxiong

    2014-01-01

    The objective of this paper is to report the improvements in an empirical absolute calibration model developed at South Dakota State University using the Libya 4 (+28.55 deg, +23.39 deg) pseudo invariant calibration site (PICS). The approach was based on using Terra MODIS as the radiometer to develop an absolute calibration model for the spectral channels covered by this instrument from the visible to the shortwave infrared. Earth Observing One (EO-1) Hyperion, with a spectral resolution of 10 nm, was used to extend the model to cover the visible and near-infrared regions. A simple Bidirectional Reflectance Distribution Function (BRDF) model was generated using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations over Libya 4, and the resulting model was validated with nadir data acquired from satellite sensors such as Aqua MODIS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+). The improvements in the absolute calibration model to account for BRDF effects due to off-nadir measurements and annual variations in the atmosphere are summarized. BRDF models for off-nadir viewing angles have been derived using measurements from EO-1 Hyperion. In addition to L7 ETM+, measurements from other sensors such as Aqua MODIS, UK-2 Disaster Monitoring Constellation (DMC), ENVISAT Medium Resolution Imaging Spectrometer (MERIS), and the Operational Land Imager (OLI) onboard Landsat 8 (L8), launched in February 2013, were employed to validate the model. These satellite sensors differ in terms of the width of their spectral bandpasses, overpass time, off-nadir viewing capabilities, spatial resolution, temporal revisit time, etc. The results demonstrate that the proposed empirical calibration model has an accuracy of the order of 3% with an uncertainty of about 2% for the sensors used in the study.
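    A generic sketch of an empirical PICS calibration check follows: a simple linear model in the solar/view geometry is fit to TOA reflectance over the site, and a sensor observation is compared against the prediction. The model form is a stand-in for the published one, and angles are assumed to be in radians.

    ```python
    # Empirical PICS model fit and percent-difference validation sketch.
    import numpy as np

    def fit_brdf(sza, vza, rho):
        """Least-squares fit rho ~ a0 + a1*cos(sza) + a2*cos(vza)."""
        A = np.column_stack([np.ones_like(sza), np.cos(sza), np.cos(vza)])
        coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
        return coef

    def percent_difference(coef, sza, vza, rho_measured):
        rho_model = coef @ np.array([1.0, np.cos(sza), np.cos(vza)])
        return 100.0 * (rho_measured - rho_model) / rho_model
    ```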

  15. Assessment of COTS IR image simulation tools for ATR development

    NASA Astrophysics Data System (ADS)

    Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal

    2005-05-01

    Following the trend of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence, e.g. ATR, to aid them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase mission efficiency. The EADS ATR approach can be adapted for imagery from visual, infrared, and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information, i.e. scenario conditions like class type and position of targets, is necessary for the optimal adaptation of the ATR method. In summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) of Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation with simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed, after which the most promising tool was benchmarked according to several criteria, e.g. thermal emission model, sensor model, target models, non-radiometric image features, etc., resulting in a recommendation. The synthetic image data used for the investigation were generated using the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic, and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground truth, and recommendations are given for the use of COTS IR image simulation tools for ATR development.

  16. Advanced Land Imager Assessment System

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Choate, Mike; Christopherson, Jon; Hollaren, Doug; Morfitt, Ron; Nelson, Jim; Nelson, Shar; Storey, James; Helder, Dennis; Ruggles, Tim

    2008-01-01

    The Advanced Land Imager Assessment System (ALIAS) supports radiometric and geometric image processing for the Advanced Land Imager (ALI) instrument onboard NASA's Earth Observing-1 (EO-1) satellite. ALIAS consists of two processing subsystems for radiometric and geometric processing of the ALI's multispectral imagery. The radiometric processing subsystem characterizes and corrects, where possible, radiometric qualities including coherent, impulse, and random noise; signal-to-noise ratios (SNRs); detector operability; gain; bias; saturation levels; striping and banding; and the stability of detector performance. The geometric processing subsystem and its analysis capabilities support sensor alignment calibration, sensor chip assembly (SCA)-to-SCA alignment, and band-to-band alignment, and perform geodetic accuracy assessments, modulation transfer function (MTF) characterizations, and image-to-image characterizations. ALIAS also characterizes and corrects band-to-band registration and performs systematic, precision, and terrain correction of ALI images. The system can geometrically correct, and automatically mosaic, the SCA image strips into a seamless, map-projected image. It also provides a large database that enables bulk trending for all ALI image data and significant instrument telemetry. Bulk trending consists of two functions: Housekeeping Processing and Bulk Radiometric Processing. The Housekeeping function pulls telemetry and temperature information from the instrument housekeeping files and writes this information to a database for trending. The Bulk Radiometric Processing function writes statistical information from the dark data acquired before and after the Earth imagery, and from the lamp data, to the database for trending. This allows for multi-scene statistical analyses.
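
    As a hedged illustration of the bias/gain step implied by the dark-data trending described above (not the actual ALIAS code), the sketch below estimates per-detector bias from dark frames collected before and after the Earth imagery and converts raw counts to radiance using assumed per-detector gains; all names and values are hypothetical:

```python
import numpy as np

def radiometric_correct(image_dn, dark_pre, dark_post, gain):
    """image_dn: (rows, detectors) raw counts; dark_*: dark frames."""
    # Average the pre- and post-imagery dark statistics per detector.
    bias = 0.5 * (dark_pre.mean(axis=0) + dark_post.mean(axis=0))
    return (image_dn - bias) / gain   # counts -> radiance per detector

rng = np.random.default_rng(1)
gain = np.full(8, 50.0)                        # DN per radiance unit (assumed)
dark_pre  = rng.normal(100, 2, size=(64, 8))   # dark data before the scene
dark_post = rng.normal(100, 2, size=(64, 8))   # dark data after the scene
scene = rng.normal(100 + 50 * 0.7, 2, size=(256, 8))
print(radiometric_correct(scene, dark_pre, dark_post, gain).mean())
```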

  17. Remote sensing of environmental impact of land use activities

    NASA Technical Reports Server (NTRS)

    Paul, C. K.

    1977-01-01

    The capability to monitor land cover, associated in the past with aerial film cameras and radar systems, was discussed with regard to aircraft and spacecraft multispectral scanning sensors. A proposed thematic mapper with greater spectral and spatial resolution for the fourth LANDSAT is expected to usher in a new environmental monitoring capability. In addition, continuing improvements in image classification by supervised and unsupervised computer techniques are being operationally verified for discriminating the environmental impacts of human activities on the land. The benefits of employing remote sensing for this discrimination were shown to far outweigh the incremental costs of converting to an aircraft-satellite multistage system.
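
    A minimal sketch of the unsupervised-classification technique mentioned above: cluster multispectral pixel vectors with k-means to produce a land-cover class map. Band count, class count, and data are arbitrary here:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
pixels = rng.random((128 * 128, 4))           # 4 spectral bands, flattened
labels = KMeans(n_clusters=5, n_init=10,
                random_state=0).fit_predict(pixels)
class_map = labels.reshape(128, 128)          # per-pixel land-cover class
print(np.bincount(labels))                    # pixel count per class
```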

  18. Earth resources mission performance studies. Volume 2: Simulation results

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Simulations were made at three-month intervals to investigate EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS-type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two-sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category-related results are applicable only to the specified requirement configuration, the results relating to the general capabilities and limitations of the sensors can be applied in extrapolating to other U.S.-based EOS collection requirements. The TRW general-purpose mission simulator and the analytic techniques discussed in this report can be applied to a wide range of collection and planning problems for earth-orbiting imaging systems.

  19. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprising a main aperture lens, a mechanical structure that holds an array of microlenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the microlenses. The demonstrator also featured embedded electronics for camera readout and a post-processor executing image-processing algorithms to provide ranging information.
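
    Under strong simplifying assumptions, plenoptic ranging can be sketched as triangulation from the disparity between two sub-aperture views extracted from the microlens array. The code below is not Nanohmics' algorithm; it uses an assumed thin-lens geometry, synthetic 1-D views, and illustrative optical parameters:

```python
import numpy as np

def disparity_1d(view_a, view_b, max_shift=10):
    """Integer-pixel disparity that best aligns two 1-D sub-aperture rows."""
    errors = [np.mean((view_a[max_shift:-max_shift] -
                       np.roll(view_b, s)[max_shift:-max_shift]) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

# Assumed optics: main-lens focal length, sub-aperture baseline, pixel pitch
focal_m, baseline_m, pixel_m = 0.05, 0.002, 5e-6

x = np.linspace(0, 1, 400)
view_a = np.exp(-((x - 0.50) / 0.01) ** 2)   # feature seen in view A
view_b = np.roll(view_a, 4)                  # same feature, shifted in view B

d_pix = disparity_1d(view_a, view_b)
range_m = focal_m * baseline_m / (abs(d_pix) * pixel_m)  # triangulated range
print(d_pix, range_m)
```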

  20. Detection systems for mass spectrometry imaging: a perspective on novel developments with a focus on active pixel detectors.

    PubMed

    Jungmann, Julia H; Heeren, Ron M A

    2013-01-15

    Instrumental developments for imaging and individual-particle detection in biomolecular mass spectrometry (imaging) and in fundamental atomic and molecular physics studies are reviewed. Ion-counting detectors, array detection systems, and high-mass detectors for mass spectrometry (imaging) are treated. State-of-the-art detection systems for multi-dimensional ion, electron, and photon detection are highlighted. Their application and performance in three different imaging modes--integrated, selected, and spectral image detection--are described. Electro-optical and microchannel-plate-based systems are contrasted. The analytical capabilities of solid-state pixel detectors--both charge-coupled device (CCD) and complementary metal oxide semiconductor (CMOS) chips--are introduced. The Medipix/Timepix detector family is described as an example of a CMOS hybrid active pixel sensor. Alternative imaging methods for particle detection and their potential for future applications are investigated. Copyright © 2012 John Wiley & Sons, Ltd.
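
    The three image-detection modes named above (integrated, selected, and spectral) can be sketched for a hypothetical Timepix-like event stream of (pixel x, pixel y, arrival-time code) triplets; all detector dimensions and event data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
events = np.column_stack([rng.integers(0, 256, n),     # pixel x
                          rng.integers(0, 256, n),     # pixel y
                          rng.integers(0, 1024, n)])   # arrival-time code

# Integrated mode: accumulate every event into one image.
integrated = np.zeros((256, 256))
np.add.at(integrated, (events[:, 1], events[:, 0]), 1)

# Selected mode: keep only events inside one time-of-flight (m/z) window.
gate = (events[:, 2] >= 500) & (events[:, 2] < 520)
selected = np.zeros((256, 256))
np.add.at(selected, (events[gate, 1], events[gate, 0]), 1)

# Spectral mode: retain a (coarsely binned) time histogram per pixel.
spectral, _ = np.histogramdd(events.astype(float),
                             bins=(256, 256, 64),
                             range=((0, 256), (0, 256), (0, 1024)))
print(integrated.sum(), selected.sum(), spectral.shape)
```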
