Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, whose properties are determined mainly by the materials used, including dye polymers. Recent developments in nanophotonic spectral filtering and optical manipulation techniques have opened up an alternative way to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. PMID:27239941
CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.
Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun
2014-11-01
An implantable glucose sensor based on a CMOS image sensor and an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light-emitting diodes, and an optical filter on a flexible polyimide substrate. The feasibility of the glucose sensor was verified by both in vitro and in vivo experiments. PMID:25426316
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
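The PSW idea above lends itself to a compact numerical illustration. The sketch below fits Gaussians to the first two single-photon peaks of a synthetic photon-counting histogram and reports read noise as peak width divided by peak separation; the peak-finding heuristics, fit windows and synthetic data are assumptions, not the paper's procedure.

```python
# Sketch: read-noise estimation from a photon-counting histogram (PCH)
# via peak separation and width (PSW). Illustrative only.
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def read_noise_from_pch(counts, bins):
    """Fit the first two photon peaks; read noise (e-) = width / separation."""
    idx, _ = find_peaks(counts, prominence=counts.max() * 0.05)
    p0, p1 = idx[:2]                        # first two single-photon peaks
    win = max(2, (p1 - p0) // 2)
    fits = []
    for p in (p0, p1):
        lo, hi = max(p - win, 0), min(p + win + 1, len(bins))
        popt, _ = curve_fit(gaussian, bins[lo:hi], counts[lo:hi],
                            p0=(counts[p], bins[p], (bins[p1] - bins[p0]) / 4))
        fits.append(popt)
    separation = abs(fits[1][1] - fits[0][1])    # conversion gain (DN per e-)
    width = np.mean([abs(f[2]) for f in fits])   # mean peak sigma (DN)
    return width / separation                    # read noise in electrons

# Synthetic PCH: peaks at 0 e- and 1 e-, gain 10 DN/e-, sigma 2 DN (0.2 e- rms)
bins = np.arange(0, 40, 0.5)
counts = gaussian(bins, 1000, 10, 2) + gaussian(bins, 600, 20, 2)
print(f"estimated read noise: {read_noise_from_pch(counts, bins):.2f} e-")
```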
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan (Atlanta, GA 30332 USA; jonghwan.ko@gatech.edu)
2017-03-01
This paper presents a low-power wireless image sensor node with a noise-robust moving object detection and a region-of-interest based rate controller.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends highly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications with respect to the display of images and with respect to image analysis techniques. Regarding display, we investigated the image intensity statistics over time; regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
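As an illustration of the preprocessing chain, the sketch below shows a minimal two-point nonuniformity correction for a logarithmic-response pixel model y = a + b·log(E); the pixel model, calibration irradiances and parameter spreads are assumptions, not the paper's NUC.

```python
# Sketch of two-point NUC for a logarithmic-response sensor: calibrate a
# per-pixel offset/gain from two uniform irradiance levels, then invert.
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4
gain = 1.0 + 0.1 * rng.standard_normal((H, W))   # per-pixel gain spread (assumed)
offset = 0.2 * rng.standard_normal((H, W))       # per-pixel offset spread (assumed)

def sensor(E):                                   # simulated logarithmic response
    return offset + gain * np.log(E)

E1, E2 = 10.0, 1000.0                            # two uniform calibration levels
y1, y2 = sensor(E1), sensor(E2)
b = (y2 - y1) / (np.log(E2) - np.log(E1))        # estimated per-pixel gain
a = y1 - b * np.log(E1)                          # estimated per-pixel offset

def correct(y):
    """Map raw output back to the uniform reference response log(E)."""
    return (y - a) / b

print(np.allclose(correct(sensor(123.0)), np.log(123.0)))  # True
```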
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
CMOS Image Sensors: Electronic Camera On A Chip
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On-chip analog-to-digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low-cost uses.
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure-distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure-distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure-distribution imaging from the boundary measurements of the fabric sensors.
A design of driving circuit for star sensor imaging camera
NASA Astrophysics Data System (ADS)
Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui
2016-01-01
The star sensor is a high-precision attitude measurement instrument that determines spacecraft attitude by detecting star positions on the celestial sphere. The imaging camera is an important part of a star sensor. The purpose of this study is to design a driving circuit based on a Kodak CCD sensor. The design of a driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Laboratory tests of the driving circuit and imaging experiments show that the driving circuit meets the requirements of the Kodak CCD sensor.
Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung
2017-10-02
Recently, even low-end mobile phones are equipped with a high-resolution complementary-metal-oxide-semiconductor (CMOS) image sensor. This motivates the use of CMOS image sensors for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image sensor based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
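For intuition, the sketch below demodulates a synthetic rolling-shutter frame: each row is collapsed to its mean, an adaptive moving-average threshold slices the stripe profile into bits, and one sample per bit period is taken. The samples-per-bit figure and the thresholding details are assumptions; the paper's scheme additionally handles synchronization and is benchmarked against other thresholding schemes.

```python
# Minimal rolling-shutter VLC demodulation sketch: row means -> adaptive
# threshold -> one sample per bit period. Illustrative, not the paper's scheme.
import numpy as np

def demodulate(frame, samples_per_bit):
    profile = frame.mean(axis=1)                          # one value per row
    kernel = np.ones(4 * samples_per_bit) / (4 * samples_per_bit)
    threshold = np.convolve(profile, kernel, mode="same") # adaptive threshold
    sliced = (profile > threshold).astype(int)
    centers = np.arange(samples_per_bit // 2, len(sliced), samples_per_bit)
    return sliced[centers]                                # sample bit centres

# Synthetic frame: OOK bits rendered as bright/dark stripes, 8 rows per bit
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
rows = np.repeat(bits * 150 + 50, 8).astype(float)
frame = np.tile(rows[:, None], (1, 32))
frame += np.random.default_rng(1).normal(0, 5, frame.shape)
print(demodulate(frame, samples_per_bit=8))               # recovers the bits
```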
NASA Astrophysics Data System (ADS)
Masuzawa, Tomoaki; Neo, Yoichiro; Mimura, Hidenori; Okamoto, Tamotsu; Nagao, Masayoshi; Akiyoshi, Masafumi; Sato, Nobuhiro; Takagi, Ikuji; Tsuji, Hiroshi; Gotoh, Yasuhito
2016-10-01
A growing demand for incident detection has been recognized since the Great East Japan Earthquake and the subsequent accidents at the Fukushima nuclear power plant in 2011. Radiation-tolerant image sensors are powerful tools for collecting crucial information in the initial stages of such incidents. However, semiconductor-based image sensors such as CMOS and CCD devices have limited tolerance to radiation exposure, while the image sensors used in nuclear facilities are conventional vacuum tubes with thermal cathodes, which have large size and high power consumption. In this study, we propose a compact image sensor composed of a CdTe-based photodiode and a matrix-driven Spindt-type electron beam source called a field emitter array (FEA). The basic principle of FEA-based image sensors is similar to that of conventional Vidicon-type camera tubes, but the thermal-cathode electron source is replaced by an FEA. The use of a field emitter as the electron source should enable significant size reduction while maintaining high radiation tolerance. Current research on radiation-tolerant FEAs and the development of CdTe-based photoconductive films are presented.
High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.
Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi
2010-12-15
A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells that were assembled closely or directly onto the CMOS sensor surface. Direct assembly of cell groups on the CMOS sensor surface allows large-field (6.66 mm × 5.32 mm, the entire active area of the CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells over a large field area based on color imaging.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor or the newer Foveon buried-photodiode sensor. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude from the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
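The flavor of such an analysis can be shown with a first-order worked example: full-well capacity from the intrinsic electrons-per-area figure and photosensitive area, then mid-scale SNR and dynamic range. All parameter values below are illustrative assumptions, not outputs of the paper's software.

```python
# Worked example (assumed numbers) of first-order imaging-sensor figures
# of merit: full well from electrons per unit area, then SNR and DR.
import math

pixel_pitch_um = 2.0            # pixel pitch (assumed)
fill_factor = 0.5               # photosensitive fraction of the pixel (assumed)
full_well_density = 1.5e3       # e- per um^2 of photodiode area (assumed)
read_noise_e = 4.0              # read noise, electrons rms (assumed)
dark_current_e = 10.0           # dark electrons per exposure (assumed)

area = pixel_pitch_um ** 2 * fill_factor
full_well = full_well_density * area              # intrinsic full-well capacity
signal = 0.5 * full_well                          # mid-scale exposure

total_noise = math.sqrt(signal + dark_current_e + read_noise_e ** 2)
snr_db = 20 * math.log10(signal / total_noise)
dr_db = 20 * math.log10(full_well / read_noise_e)
print(f"full well: {full_well:.0f} e-, mid-scale SNR: {snr_db:.1f} dB, DR: {dr_db:.1f} dB")
```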
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of such targets from two aspects, the camera parameters of the space-based sensor and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target image.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method of capturing a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation or absorption of radiation after it crosses the measured objects. The number of sensor views affects the results of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments on detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a high number of views results in high-resolution image reconstruction.
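A minimal back-projection sketch makes the views-versus-quality trade concrete. The parallel-beam geometry and unfiltered smearing below are simplifications of the filtered linear back-projection used in the paper.

```python
# Toy back-projection reconstruction: more views should reduce the area error,
# illustrating the paper's 160-vs-320-view comparison. Simplified geometry.
import numpy as np
from scipy.ndimage import rotate

def project(img, angles):
    return [rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles]

def back_project(projections, angles, size):
    recon = np.zeros((size, size))
    for proj, a in zip(projections, angles):
        smear = np.tile(proj, (size, 1))            # smear each view across image
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles)

size = 64
phantom = np.zeros((size, size))
phantom[24:40, 28:36] = 1.0                          # a solid object in clear water

for n_views in (16, 32):
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    recon = back_project(project(phantom, angles), angles, size)
    err = ((recon > recon.mean()) != (phantom > 0)).mean()
    print(f"{n_views} views: area error {err:.3f}")
```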
Fully wireless pressure sensor based on endoscopy images
NASA Astrophysics Data System (ADS)
Maeda, Yusaku; Mori, Hirohito; Nakagawa, Tomoaki; Takao, Hidekuni
2018-04-01
In this paper, the development of a fully wireless pressure sensor based on endoscopy images for endoscopic surgery is reported for the first time. The sensor device has a structural color produced by a nm-scale narrow gap, and the gap is changed by air pressure. The structural color of the sensor is acquired from camera images, so pressure detection can be realized with existing endoscope configurations only. The inner air pressure of the human body can thus be measured during flexible-endoscope operation. Air pressure monitoring has two important purposes. The first is to quantitatively measure tumor size under a constant air pressure for treatment selection. The second is to prevent endangering the patient through excessive delivery of air. The developed sensor was evaluated, and the detection principle based on endoscopy images alone has been successfully demonstrated.
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand-new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as a transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensors, while vibration of the platforms carrying the remote sensors is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization for image-motion prediction. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a back-propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Detection of Obstacles in Monocular Image Sequences
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia
1997-01-01
The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
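The histogram-thresholding step can be illustrated compactly. The sketch below applies Otsu's method, used here as a concrete stand-in for the report's histogram analysis, inside a synthetic runway region of interest.

```python
# Sketch: flag obstacle candidates inside a runway ROI by thresholding the
# intensity histogram. Otsu's threshold and the synthetic data are stand-ins.
import numpy as np

def otsu_threshold(values, nbins=64):
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                                # class probabilities
    m = np.cumsum(p * centers)                       # cumulative class means
    mt = m[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(between)]            # maximize between-class var

rng = np.random.default_rng(2)
roi = rng.normal(90, 5, (80, 80))                    # runway-surface intensities
roi[30:38, 40:48] += 60                              # a bright obstacle

t = otsu_threshold(roi.ravel())
obstacle_mask = roi > t
print(f"threshold {t:.1f}, obstacle pixels: {obstacle_mask.sum()}")  # ~64
```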
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2017-03-01
The planar Fabry-Pérot (FP) sensor provides high-quality photoacoustic (PA) images, but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near-identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole-body small animal imaging and clinical imaging to depths previously unattainable using the planar FP sensor.
Event-based Sensing for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Cohen, G.; Afshar, S.; van Schaik, A.; Wabnitz, A.; Bessell, T.; Rutten, M.; Morreale, B.
A revolutionary type of imaging device, known as a silicon retina or event-based sensor, has recently been developed and is gaining popularity in the field of artificial vision systems. These devices are inspired by a biological retina and operate in a significantly different way to traditional CCD-based imaging sensors. While a CCD produces frames of pixel intensities, an event-based sensor produces a continuous stream of events, each of which is generated when a pixel detects a change in log light intensity. These pixels operate asynchronously and independently, producing an event-based output with high temporal resolution. There are also no fixed exposure times, allowing these devices to offer a very high dynamic range independently for each pixel. Additionally, these devices offer high-speed, low-power operation and a sparse spatiotemporal output. As a consequence, the data from these sensors must be interpreted in a significantly different way to traditional imaging sensors, and this paper explores the advantages this technology provides for space imaging. The applicability and capabilities of event-based sensors for SSA applications are demonstrated through telescope field trials. Trial results have confirmed that the devices are capable of observing resident space objects from LEO through to GEO orbital regimes. Significantly, observations of RSOs were made during both day-time and night-time (terminator) conditions without modification to the camera or optics. The event-based sensor's ability to image stars and satellites during day-time hours offers a dramatic capability increase for terrestrial optical sensors. This paper shows the field testing and validation of two different architectures of event-based imaging sensors. An event-based sensor's asynchronous output has an intrinsically low data rate. In addition to low-bandwidth communications requirements, the low weight, low power and high speed make these devices ideally suited to meeting the demanding challenges of space-based SSA systems. Results from these experiments and the systems developed highlight the applicability of event-based sensors to ground- and space-based SSA tasks.
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520-element) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target-analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16-element) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype system demonstrates a low-cost and miniaturized luminescence multisensor system.
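Mapping a luminescence reading to O2 concentration typically uses the Stern-Volmer quenching relation I0/I = 1 + Ksv·[O2]; the sketch below inverts it and shows how mixing two luminophores with different Ksv values yields the row-to-row sensitivity differences described above. The constants and the two-site weighting are illustrative assumptions, not the paper's calibration.

```python
# Sketch: Stern-Volmer quenching as a route from luminescence intensity to O2.
import numpy as np

def o2_from_intensity(I, I0, Ksv):
    """Invert single-site Stern-Volmer: [O2] = (I0/I - 1) / Ksv."""
    return (I0 / I - 1.0) / Ksv

def mixed_response(o2, frac, Ksv_a, Ksv_b):
    """Two-luminophore element: weighted sum of two quenching responses."""
    return frac / (1.0 + Ksv_a * o2) + (1.0 - frac) / (1.0 + Ksv_b * o2)

o2 = np.linspace(0, 100, 5)                 # % O2
for frac in (1.0, 0.5):                     # pure vs 50/50 luminophore mix
    I_rel = mixed_response(o2, frac, Ksv_a=0.05, Ksv_b=0.01)
    print(frac, np.round(I_rel, 3))         # a different sensitivity per mix

print(o2_from_intensity(I=0.5, I0=1.0, Ksv=0.05))   # 20.0 (% O2)
```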
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy, high-speed processing of wide-swath, high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. First, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter can also be corrected efficiently.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation determines position and attitude via real-time image processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss for 5 min and within 1500 m. PMID:26828496
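The retrieval-and-match step can be sketched with standard tools. Below, ORB features and brute-force Hamming matching pick the best-matching database image for a query frame; the feature choice and the distance cutoff are assumptions standing in for the paper's more robust matching algorithm.

```python
# Hedged sketch of matching a real-time frame against candidate geo-referenced
# database images with ORB features; the candidate with most good matches wins.
import cv2
import numpy as np

def best_grid_match(query, candidates):
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, q_des = orb.detectAndCompute(cv2.cvtColor(query, cv2.COLOR_BGR2GRAY), None)
    best_idx, best_score = -1, -1
    for i, cand in enumerate(candidates):
        _, c_des = orb.detectAndCompute(cv2.cvtColor(cand, cv2.COLOR_BGR2GRAY), None)
        if q_des is None or c_des is None:
            continue
        good = [m for m in bf.match(q_des, c_des) if m.distance < 40]  # assumed cutoff
        if len(good) > best_score:
            best_idx, best_score = i, len(good)
    return best_idx, best_score

# demo with synthetic images: the query is candidate 1 shifted by a few pixels
base = np.zeros((240, 320, 3), np.uint8)
cv2.rectangle(base, (60, 60), (200, 160), (255, 255, 255), -1)
cv2.circle(base, (250, 200), 25, (180, 180, 180), -1)
other = np.zeros_like(base)
cv2.circle(other, (160, 120), 50, (255, 255, 255), -1)
query = np.roll(base, 5, axis=1)
print(best_grid_match(query, [other, base]))   # expect index 1 to win
```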
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald
2012-01-01
An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. The spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to predicting at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.
CMOS image sensor-based immunodetection by refractive-index change.
Devadhasan, Jasmine P; Kim, Sanghyo
2012-01-01
A complementary metal oxide semiconductor (CMOS) image sensor is an intriguing technology for the development of novel biosensors. Indeed, the mechanism by which a CMOS image sensor detects antigen-antibody (Ag-Ab) interactions at the nanoscale has so far been ambiguous, and more extensive research has been necessary to achieve point-of-care diagnostic devices. This research demonstrates CMOS image sensor-based analysis of Ag-Ab interactions for cardiovascular disease markers, such as C-reactive protein (CRP) and troponin I, on indium nanoparticle (InNP) substrates through simple photon-count variation. The developed sensor can detect proteins even at fg/mL concentrations under ordinary room light. Possible mechanisms, such as dielectric-constant and refractive-index changes, have been studied and proposed. A dramatic change in the refractive index after protein adsorption on the InNP substrate was observed to be the predominant factor in CMOS image sensor-based immunoassay.
Performance test and image correction of CMOS image sensor in radiation environment
NASA Astrophysics Data System (ADS)
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2016-09-01
CMOS image sensors rival CCDs in domains that include strong radiation resistance as well as simple drive signals, so they are widely applied in high-energy radiation environments such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of a CMOS image sensor suffers ionizing dose effects under high-energy rays, and indicators of the image sensor such as signal-to-noise ratio (SNR), non-uniformity (NU) and bad points (BP) degrade with radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMOSIS CMV2000 image sensor was chosen as the research object. The dose rate used for the experiments was 20 krad/h. In the tests, the output signals of the image sensor pixels were measured at different total doses. Data analysis showed that with accumulating irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of bad points increased. Correction of these indicators is necessary, as they are the main factors affecting image quality. An image-processing algorithm combining a local threshold method with NU correction based on the non-local means (NLM) method was applied to the experimental data. The results showed that this correction can effectively suppress bad points, improve the SNR, and reduce the NU.
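The correction stage can be sketched as follows: bad points are flagged with a local threshold against a robust noise estimate and replaced from a local median, the median filter standing in here for the paper's NLM-based correction. Constants are illustrative assumptions.

```python
# Sketch: local-threshold bad-point detection with median replacement
# (a simplified stand-in for the paper's local threshold + NLM correction).
import numpy as np
from scipy.ndimage import median_filter

def correct_bad_pixels(img, k=5.0, size=5):
    local_med = median_filter(img, size=size)
    resid = img - local_med
    sigma = 1.4826 * np.median(np.abs(resid))      # robust noise estimate (MAD)
    bad = np.abs(resid) > k * sigma                # local-threshold detection
    out = img.copy()
    out[bad] = local_med[bad]                      # replace flagged bad points
    return out, bad

rng = np.random.default_rng(3)
img = rng.normal(100, 3, (128, 128))
hits = rng.integers(0, 128, size=(200, 2))         # radiation-induced hot pixels
img[hits[:, 0], hits[:, 1]] += rng.uniform(50, 200, 200)
fixed, bad = correct_bad_pixels(img)
print(f"flagged {bad.sum()} bad pixels")
```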
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray image sensors are considered next-generation because they can be manufactured with very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor, and image quality can be improved by the high fill factor achievable in large pixels. If the size of the subject is small, the pixel size must be reduced accordingly; moreover, the fill factor must be reduced to aggregate various functional circuits within the pixel. In this study, 3T-APS (three-transistor active pixel sensor) devices with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode exceeds 1000 μm², the degree to which sensor performance increases with photodiode size is reduced. As a result, considering the fill factor, a pixel pitch larger than 32 μm is not necessary to achieve high image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are employed for sensors with a pixel pitch of 25 μm or less.
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, it requires a high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tube. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.
Star centroiding error compensation for intensified star sensors.
Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun
2016-12-26
A star sensor provides high-precision attitude information by capturing stellar images; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases the star centroiding accuracy, which in turn degrades the attitude measurement precision. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through an analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points are obtained from the model using the Levenberg-Marquardt (LM) optimization method. Last, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error over the image plane. Laboratory calibration results and night-sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, remarkably improving the precision of intensified star sensors.
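The quantity being compensated is the sub-pixel star centroid. The sketch below computes an intensity-weighted centroid in a window around a detected star; the window size and background handling are assumptions.

```python
# Sketch: sub-pixel star centroiding by intensity-weighted centre of mass.
import numpy as np

def centroid(img, seed, win=5):
    y0, x0 = seed
    patch = img[y0 - win:y0 + win + 1, x0 - win:x0 + win + 1].astype(float)
    patch = np.clip(patch - np.median(patch), 0, None)   # background subtraction
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    total = patch.sum()
    return y0 + (ys * patch).sum() / total, x0 + (xs * patch).sum() / total

# Synthetic star: Gaussian spot centred at (30.3, 41.7) on a noisy background
yy, xx = np.mgrid[0:64, 0:64]
img = 200 * np.exp(-((yy - 30.3) ** 2 + (xx - 41.7) ** 2) / (2 * 1.5 ** 2))
img += np.random.default_rng(4).normal(10, 1, img.shape)
print(np.round(centroid(img, seed=(30, 42)), 3))         # ~ (30.3, 41.7)
```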
Intelligent Luminance Control of Lighting Systems Based on Imaging Sensor Feedback
Liu, Haoting; Zhou, Qianxiang; Yang, Jin; Jiang, Ting; Liu, Zhizhen; Li, Jie
2017-01-01
An imaging sensor-based intelligent Light Emitting Diode (LED) lighting system for desk use is proposed. In contrast to traditional intelligent lighting systems, such as those based on photosensitive resistance sensors or infrared sensors, the imaging sensor can realize a finer perception of the environmental light and can thus guide more precise lighting control. Before the system works, large amounts of typical lighting-image data for the desk application are first accumulated. Second, a series of subjective and objective Lighting Effect Evaluation Metrics (LEEMs) are defined and assessed for these datasets, from which cluster benchmarks for the objective LEEMs are obtained. Third, both a single-LEEM-based control and a multiple-LEEMs-based control are developed to realize optimal luminance tuning. When the system works, it first captures the lighting image using a wearable camera. It then computes the objective LEEMs of the captured image and compares them with the cluster benchmarks. Finally, the single-LEEM-based or multiple-LEEMs-based control is implemented to obtain an optimal lighting effect. Extensive experimental results show that the proposed system can tune the LED lamp automatically according to changes in environmental luminance. PMID:28208781
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
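The control loop can be summarized in a few lines. The sketch below runs a stochastic parallel gradient descent (SPGD) update against a toy quadratic metric standing in for mean retinal-image intensity; the gain, perturbation amplitude and actuator count are assumptions.

```python
# Sketch of the SPGD update used for wavefront-sensorless control: perturb all
# actuator commands in parallel, measure the metric, step along the estimate.
import numpy as np

rng = np.random.default_rng(5)
n_act = 12                                     # deformable-mirror actuators (assumed)
target = rng.uniform(-1, 1, n_act)             # unknown aberration to correct

def metric(u):                                 # toy stand-in for image intensity
    return -np.sum((u - target) ** 2)

u = np.zeros(n_act)
gain, amp = 1.0, 0.1
for _ in range(500):
    delta = amp * rng.choice([-1.0, 1.0], n_act)     # parallel +/- perturbation
    dJ = metric(u + delta) - metric(u - delta)       # two-sided metric change
    u += gain * dJ * delta                           # SPGD update
print(f"residual rms error: {np.linalg.norm(u - target) / np.sqrt(n_act):.4f}")
```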
Sensor fusion for synthetic vision
NASA Technical Reports Server (NTRS)
Pavel, M.; Larimer, J.; Ahumada, A.
1991-01-01
Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain data base to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described which facilitates fusion of images differing in resolution within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging (AFRL-AFOSR-VA-TR-2015-0359)
Gruev, Viktor
2015-11-05
Report covering 02/15/2011 - 08/15/2015 on alternative spectral imaging architectures and nanowire polarization filters for low-contrast imaging.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-01-01
With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder are better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applying the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well suited for distance measurements in this field. PMID:27879789
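The triangulation relation behind the active range finder reduces to D = f·b/(d·p) for focal length f, baseline b, spot displacement d in pixels and pixel pitch p. A worked example with assumed values:

```python
# Worked example of the simple triangulation relation; all values are assumed.
f_mm = 8.0              # lens focal length
baseline_mm = 50.0      # laser-to-sensor baseline
pixel_pitch_mm = 0.006  # sensor pixel pitch
for d_pixels in (40.0, 10.0, 4.0):               # measured spot displacement
    D_mm = f_mm * baseline_mm / (d_pixels * pixel_pitch_mm)
    print(f"{d_pixels:5.1f} px -> {D_mm / 1000:6.2f} m")
```

Because distance varies inversely with displacement, a one-pixel measurement error costs more range accuracy at long distances, which is consistent with the resolution figures quoted above degrading with range.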
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.
Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah
2011-01-01
This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array setup. Magnetic imaging of the magnetic flux distribution is performed by a signal processing unit before the real-time images are displayed on a netbook. Signal-processing application software is developed for 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens such as squares, circles and triangles are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials. PMID:22346653
Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-01-01
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general-purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high-rate sensing data, its scalability, and its flexible scripting language, which enables mote-side image compression and ease of deployment. Our first deployment, a pitfall-trap monitoring application at the James San Jacinto Mountain Reserve, provided insights and lessons learned on the deployment of, and compression schemes for, these embedded wireless imaging systems. Our three-month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images useful for answering their biological questions. PMID:25171121
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
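In the same spirit, a minimal image-based occupancy check can be built from off-the-shelf computer vision pieces. The sketch below uses OpenCV's MOG2 background subtractor and a foreground-fraction decision; the threshold is an illustrative assumption, and NREL's actual IPOS classifiers are more sophisticated.

```python
# Hedged sketch of image-based occupancy detection: background subtraction
# plus a foreground-fraction decision. Not NREL's IPOS implementation.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def occupied(frame_bgr, min_fg_fraction=0.01):
    fg = subtractor.apply(frame_bgr)                 # per-pixel foreground mask
    return (fg > 0).mean() > min_fg_fraction         # assumed decision rule

# demo: learn a static noisy background, then show a frame with a bright blob
rng = np.random.default_rng(9)
bg = rng.integers(0, 30, (120, 160, 3), dtype=np.uint8)
for _ in range(30):
    occupied(bg + rng.integers(0, 5, bg.shape, dtype=np.uint8))
person = bg.copy()
cv2.rectangle(person, (60, 30), (100, 110), (220, 220, 220), -1)
print(occupied(person))   # True once the background model has been learned
```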
A safety monitoring system for taxi based on CMOS imager
NASA Astrophysics Data System (ADS)
Liu, Zhi
2005-01-01
CMOS image sensors have become increasingly competitive with their CCD counterparts, while adding advantages such as no blooming, simpler driving requirements and the potential for on-chip integration of the sensor, analogue circuitry, and digital processing functions. A safety monitoring system for taxis based on a CMOS imager, which can record the field situation when unusual circumstances occur, is described in this paper. The monitoring system is based on a CMOS imager (OV7120), which can output digital image data through a parallel pixel data port. The system consists of a CMOS image sensor, a large-capacity NAND FLASH ROM, a USB interface chip and a microcontroller (AT90S8515). The structure of the whole system and the test data are discussed and analyzed in detail.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
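A minimal sketch of the idea in this abstract, in Python with NumPy: the raw difference frame is thresholded against a per-pixel error floor derived from the spatial intensity gradients, so pixels on strong edges (where a small jitter shift causes a large intensity change) are not flagged as scene changes. The jitter amplitude jitter_px and threshold factor k are hypothetical tuning parameters, not values from the patent.

```python
import numpy as np

def detect_changes(reference, current, jitter_px=1.0, k=3.0):
    """Flag pixels whose change exceeds what camera jitter alone explains."""
    ref = reference.astype(float)
    diff = current.astype(float) - ref           # raw difference frame
    gy, gx = np.gradient(ref)                    # spatial intensity gradients
    # Spatial error estimate: a shift of ~jitter_px pixels changes a pixel
    # by roughly |gradient| * jitter_px, so use that as the noise floor.
    spatial_err = jitter_px * np.hypot(gx, gy)
    return np.abs(diff) > k * (spatial_err + 1e-6)   # boolean change mask
```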
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
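A minimal sketch of the kind of quantile analysis the paper describes, under the assumption that pure Poisson noise is the candidate model: simulate photon counts, apply the Anscombe variance-stabilizing transform, and compare empirical tail quantiles with the unit-variance Gaussian they should follow if the model held. With real sensor data in place of rng.poisson, a heavier empirical tail is the mismatch the proposed Poisson mixture model is meant to absorb.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mean_count = 20.0                                  # hypothetical photon level
samples = rng.poisson(mean_count, size=100_000).astype(float)

stabilized = 2.0 * np.sqrt(samples + 3.0 / 8.0)    # Anscombe transform
stabilized -= stabilized.mean()                    # center for comparison

for q in (0.99, 0.999, 0.9999):
    print(q, np.quantile(stabilized, q), stats.norm.ppf(q))
```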
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting targets for key technical indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping (Isomap) method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
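A minimal sketch of the soft-sensor pipeline, assuming the froth-image features (HSI color, GLCM texture, shape) have already been extracted into a feature matrix: Isomap reduces the input dimension and an MLP regressor stands in for the BP network. The shuffled cuckoo search weight optimization is omitted; ordinary gradient training replaces it, and the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((200, 30))        # 200 froth images x 30 extracted features
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(200)  # stand-in indicator

model = make_pipeline(
    Isomap(n_components=5),                      # isometric mapping reduction
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:5]))                      # predicted indicator values
```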
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
Evaluation and comparison of the IRS-P6 and the landsat sensors
Chander, G.; Coan, M.J.; Scaramuzza, P.L.
2008-01-01
The Indian Remote Sensing Satellite (IRS-P6), also called ResourceSat-1, was launched in a polar sun-synchronous orbit on October 17, 2003. It carries three sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide-Field Sensor (AWiFS). These three sensors provide images of different resolutions and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to images from the Landsat-5 Thematic Mapper (TM) and Landsat-7 Enhanced TM Plus (ETM+) sensors. The approach involves calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors. This paper also evaluated the viability of data from these next-generation imagers for use in creating three National Land Cover Dataset (NLCD) products: land cover, percent tree canopy, and percent impervious surface. Individual products were consistent with previous studies but had slightly lower overall accuracies as compared to data from the Landsat sensors.
Nguyen, Dung C; Ma, Dongsheng Brian; Roveda, Janet M W
2012-01-01
As one of the key clinical imaging methods, computed X-ray tomography can be further improved using new nanometer-scale CMOS sensors. This will enhance the current technique's ability in terms of detectable cancer size, position, and detection accuracy on anatomical structures. The current paper reviewed designs of SOI-based CMOS sensors and their architectural design in mammography systems. Based on the existing experimental results, using SOI technology can provide a low-noise (SNR around 87.8 dB) and high-gain (30 V/V) CMOS imager. It is also expected that, together with fast data acquisition designs, the new type of imager may play an important role in near-future high-dimensional imaging in addition to today's 2D imagers.
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity (PRNU) noise has been used in source camera identification (SCI). However, this technique can also be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
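A minimal sketch of the PRNU workflow that the counter-forensic method targets, assuming a set of images from one camera: the fingerprint is the mean denoising residual, and attribution correlates a test image's residual against it. A Gaussian filter stands in for the wavelet denoiser used in the SCI literature.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=2.0):
    """Noise residual W = I - F(I); Gaussian filter as a stand-in denoiser."""
    img = img.astype(float)
    return img - gaussian_filter(img, sigma)

def prnu_fingerprint(images):
    """Camera fingerprint: average residual over many images."""
    return np.mean([residual(i) for i in images], axis=0)

def correlation(test_img, fingerprint):
    """Normalized correlation used as the SCI detection statistic."""
    r, f = residual(test_img).ravel(), fingerprint.ravel()
    r = r - r.mean()
    f = f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```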
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor design uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on a target, with spot sizes different from the minimal spot size possible at that target distance. By exploiting the unique relationship of the target-located spot sizes with the varying ECVFL focal length for each target distance, the proposed distance sensor can compute the target distance with a measurement resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL, along with a 20 cm focal length bias lens, and using five spot images captured per target position by a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
NASA Astrophysics Data System (ADS)
Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun
2010-12-01
A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originating from a certain sensor. Thus their performance is significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to an algorithm's ability to adapt to raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a k-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the k-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases obtained from various sensors.
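A minimal sketch of the SKI-style segmentation, assuming a 2-D grayscale fingerprint array: each block becomes a coherence/mean/variance (CMV) vector, and k-means with two clusters separates foreground from background. The block size and the standard gradient-based coherence formula are assumptions; the paper's morphological postprocessing is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def cmv_features(img, block=16):
    """Block-wise coherence, mean, and variance (CMV) feature vectors."""
    gy, gx = np.gradient(img.astype(float))
    feats = []
    h, w = img.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            bx, by = gx[r:r+block, c:c+block], gy[r:r+block, c:c+block]
            gxx, gyy, gxy = (bx**2).sum(), (by**2).sum(), (bx*by).sum()
            coh = np.sqrt((gxx - gyy)**2 + 4*gxy**2) / (gxx + gyy + 1e-9)
            blk = img[r:r+block, c:c+block]
            feats.append([coh, blk.mean(), blk.var()])
    return np.asarray(feats)

def segment(img, block=16):
    """Cluster CMV vectors into two classes (foreground / background)."""
    feats = cmv_features(img, block)
    return KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
```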
Low noise WDR ROIC for InGaAs SWIR image sensor
NASA Astrophysics Data System (ADS)
Ni, Yang
2017-11-01
Hybridized image sensors are currently the only solution for image sensing beyond the spectral response of silicon devices. By hybridization, we can combine the best sensing material and photo-detector design with high-performance CMOS readout circuitry. In the infrared band, we typically face two configurations: high-background and low-background situations. The performance of high-background sensors is conditioned mainly by the integration capacity of each pixel, which is the case for mid-wave and long-wave infrared detectors. In the low-background situation, the detector's performance is mainly limited by the pixel's noise performance, which is conditioned by dark signal and readout noise. In the case of reflection-based imaging conditions, the pixel's dynamic range is also an important parameter. This is the case for SWIR-band imaging. We are particularly interested in InGaAs-based SWIR image sensors.
Smartphone-based quantitative measurements on holographic sensors.
Khalili Moghaddam, Gita; Lowe, Christopher Robin
2017-01-01
The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals.
Smartphone-based quantitative measurements on holographic sensors
Khalili Moghaddam, Gita
2017-01-01
The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals. PMID:29141008
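A minimal sketch of the colour-quantification end of the pipeline described in the two entries above: convert a located ROI to CIELAB and regress the analyte value on its mean colour coordinates. skimage's standard sRGB conversion stands in for the paper's polynomial camera model, the ROI is assumed already segmented, and the calibration data below are placeholders.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.linear_model import LinearRegression

def roi_lab_mean(roi_rgb):
    """roi_rgb: HxWx3 float array in [0, 1]; returns mean (L*, a*, b*)."""
    return rgb2lab(roi_rgb).reshape(-1, 3).mean(axis=0)

# Placeholder calibration set: one synthetic ROI per known pH value.
rois = [np.full((8, 8, 3), v) for v in np.linspace(0.2, 0.8, 7)]
ph = np.linspace(4.0, 7.0, 7)

X = np.array([roi_lab_mean(r) for r in rois])
model = LinearRegression().fit(X, ph)      # colour coordinates -> analyte
print(model.predict(X[:2]))                # predicted pH for first two ROIs
```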
Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review
Lv, Zhuowen; Xing, Xianglei; Wang, Kejun; Guan, Donghai
2015-01-01
Gait is a unique perceptible biometric feature at larger distances, and the gait representation approach plays a key role in a video sensor-based gait recognition system. Class Energy Image is one of the most important appearance-based gait representation methods and has received much attention. In this paper, we reviewed the expressions and meanings of various Class Energy Image approaches, and analyzed the information in the Class Energy Images. Furthermore, the effectiveness and robustness of these approaches were compared on the benchmark gait databases. We outlined the research challenges and provided promising future directions for the field. To the best of our knowledge, this is the first review that focuses on Class Energy Image. It can provide a useful reference in the literature of video sensor-based gait representation approaches. PMID:25574935
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Park, Chulhee; Kang, Moon Gi
2016-01-01
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
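A minimal sketch of the decomposition idea, assuming a demosaicked HxWx4 array with R, G, B and N planes: each colour channel is modelled as visible signal plus a channel-specific fraction of the NIR measured in the N channel, and that contribution is subtracted. The leakage coefficients would come from the MSFA sensor's measured spectral response; the values here are hypothetical.

```python
import numpy as np

NIR_FRACTION = {"R": 0.6, "G": 0.45, "B": 0.35}   # assumed per-channel leakage

def restore_rgb(raw, nir_fraction=NIR_FRACTION):
    """raw: HxWx4 float array with channels R, G, B, N (demosaicked)."""
    n = raw[..., 3]
    out = np.empty(raw.shape[:2] + (3,), dtype=float)
    for i, ch in enumerate("RGB"):
        # Subtract the estimated NIR contribution, clipping at zero.
        out[..., i] = np.clip(raw[..., i] - nir_fraction[ch] * n, 0.0, None)
    return out
```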
UTOFIA: an underwater time-of-flight image acquisition system
NASA Astrophysics Data System (ADS)
Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris
2017-10-01
In this article the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range compared to conventional cameras by a factor of 2 to 3 and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, a pixel structure that best suits the requirements has been selected. In an extensive underwater characterization, the capability of distance measurement in turbid environments is demonstrated.
Toward CMOS image sensor based glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2012-09-07
The complementary metal oxide semiconductor (CMOS) image sensor is a powerful tool for biosensing applications. In the present study, a CMOS image sensor has been exploited for detecting glucose levels through simple photon count variation with high sensitivity. Various concentrations of glucose (100 mg/dL to 1000 mg/dL) were added onto a simple poly-dimethylsiloxane (PDMS) chip and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count registered by the CMOS image sensor depends on the color density, and hence on the glucose concentration, and is converted into digital form. By correlating the digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with good linearity based on the CMOS image sensor, and this technique will therefore promote convenient point-of-care diagnosis.
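A minimal sketch of the final readout step: a linear calibration mapping the imager's mean photon count to glucose concentration over the stated 100-1000 mg/dL range, then inverted for prediction. The count values below are placeholders, not measurements from the chip.

```python
import numpy as np

conc = np.array([100, 250, 500, 750, 1000], dtype=float)   # mg/dL standards
counts = np.array([980, 870, 690, 520, 350], dtype=float)  # hypothetical counts

slope, intercept = np.polyfit(counts, conc, 1)             # invert: count -> conc

def glucose_mg_dl(mean_count):
    """Estimate glucose concentration from a mean pixel photon count."""
    return slope * mean_count + intercept

print(round(glucose_mg_dl(600)))    # estimated mg/dL for a 600-count reading
```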
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is very important because it is useful for search and rescue, detection of illegal immigration, harbor security and so on. Furthermore, range estimation is critical for precisely detecting the target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night and without moonlight. Generally, before detecting the target, it is necessary to adjust the delay time until the target is captured. There are two operating modes for the range-gated imaging sensor: passive imaging mode and gate viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, we can obtain the coarse range of the target according to the imaging geometry/projective transform. Then the sensor switches to gate viewing mode; applying microsecond laser pulses and a matched sensor gate width, we can get the range of targets from at least two consecutive images with a trapezoid-shaped range-intensity profile. Based on the first step, we can calculate the rough range and quickly fix the delay time at which the target is detected. This technique overcomes the depth resolution limitation of 3D active imaging and enables super-resolution depth mapping with a reduction in imaging data processing. Through these two steps, we can quickly obtain the distance between the object and the sensor.
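A minimal sketch of the two range cues described here: the coarse range from the gate delay, R = c*t/2, and a refined range interpolated from the intensity ratio of two consecutive gated frames inside the trapezoidal range-intensity profile. The linear-ramp interpolation is an assumption about the profile shape.

```python
C = 2.998e8  # speed of light, m/s

def coarse_range(delay_s):
    """Round-trip time of flight: gate delay t gives range R = c * t / 2."""
    return C * delay_s / 2.0

def refined_range(r_near, r_far, i1, i2):
    """Interpolate within the gate from intensities of two gated frames."""
    ratio = i1 / (i1 + i2 + 1e-12)
    return r_near + ratio * (r_far - r_near)

print(coarse_range(2e-6))                     # 2 us gate delay -> ~300 m
print(refined_range(290.0, 310.0, 3.0, 1.0))  # -> 305.0 m on the assumed ramp
```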
NASA Astrophysics Data System (ADS)
El-Saba, A. M.; Alam, M. S.; Surpanani, A.
2006-05-01
Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarms. In this paper we extend the application of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shape using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind polarization-insensitive imaging sensors, which rely only on the spatial distribution of targets, suffer from high false detection rates, especially in scenarios where multiple identical-shape targets are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color only. We highlight the main advantages of using our proposed polarization-encoded imaging sensor.
Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin
2015-03-24
A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulation in applications such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain-tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strain (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable way for smart sensing and micro/nano-opto-electromechanical systems.
Multi-sensor image registration based on algebraic projective invariants.
Li, Bin; Wang, Wei; Ye, Hao
2013-04-22
A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are first extracted from both the reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray-scale-related information is required in calculating the descriptor; thus it is also robust against the gray scale discrepancy between multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
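A minimal sketch of the algebraic projective invariants behind such a descriptor: for five coplanar points in general position, certain ratios of 3x3 determinants of their homogeneous coordinates are unchanged by any projective transform. The two invariants below are one standard choice; the paper's exact FSC descriptor may differ. The demo verifies the invariance under an arbitrary homography.

```python
import numpy as np

def _det3(p, q, r):
    """Determinant of three points in homogeneous coordinates (w = 1)."""
    return np.linalg.det(np.array([[*p, 1.0], [*q, 1.0], [*r, 1.0]]))

def projective_invariants(pts):
    """Two projective invariants of five coplanar points in sequence."""
    p1, p2, p3, p4, p5 = pts
    i1 = (_det3(p4, p3, p1) * _det3(p5, p2, p1)) / (
          _det3(p4, p2, p1) * _det3(p5, p3, p1))
    i2 = (_det3(p4, p3, p2) * _det3(p5, p1, p2)) / (
          _det3(p4, p1, p2) * _det3(p5, p3, p2))
    return i1, i2

pts = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
H = np.array([[1.0, 0.2, 0.3], [0.1, 0.9, 0.4], [0.05, 0.02, 1.0]])
warped = []
for x, y in pts:
    u, v, w = H @ np.array([x, y, 1.0])
    warped.append((u / w, v / w))
print(projective_invariants(pts))      # same values before and...
print(projective_invariants(warped))   # ...after the projective transform
```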
CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.
Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V
2011-04-01
In this paper, we study the effect of temperature on the operation and performance of a xerogel-based sensor microarray coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.
VLC-based indoor location awareness using LED light and image sensors
NASA Astrophysics Data System (ADS)
Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
Recently, indoor LED lighting has been considered for constructing green infrastructure with energy saving, while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation. For example, in a large, complex shopping mall, location awareness for navigating to a destination is an important issue. However, conventional navigation using GPS does not work indoors. Alternative location services based on WLAN have the problem of low positioning accuracy; for example, it is difficult to estimate height exactly, and if the height error is greater than the floor-to-floor height, it may cause serious problems. Therefore, conventional navigation is inappropriate for indoor use. A possible alternative for indoor navigation is a VLC-based location awareness scheme. Because indoor LED infrastructure will in any case be installed to provide lighting, indoor LED lighting combined with VLC technology has the potential to provide relatively high positioning accuracy. In this paper, we present a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using two or more image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
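A minimal sketch of the triangulation geometry: each image sensor yields a ray from its lens centre toward the LED, and with two or more sensors the LED position is estimated as the least-squares point closest to all rays. The lens positions and ray directions below are illustrative values, not the paper's configuration.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares point closest to all rays (origin + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)    # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]  # lens centres, m
directions = [np.array([0.5, 0.2, 1.0]), np.array([-0.5, 0.2, 1.0])]
print(intersect_rays(origins, directions))   # estimated LED position: (1, 0.4, 2)
```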
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real-time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the images using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-12
Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.
Organic-on-silicon complementary metal–oxide–semiconductor colour image sensors
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-01
Complementary metal–oxide–semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor. PMID:25578322
Illumination adaptation with rapid-response color sensors
NASA Astrophysics Data System (ADS)
Zhang, Xinchi; Wang, Quan; Boyer, Kim L.
2014-09-01
Smart lighting solutions based on imaging sensors such as webcams or time-of-flight sensors suffer from rising privacy concerns. In this work, we use low-cost non-imaging color sensors to measure the local luminous flux of different colors in an indoor space. These sensors have a much higher data acquisition rate and are much cheaper than many off-the-shelf commercial products. We have developed several applications with these sensors, including illumination feedback control and occupancy-driven lighting.
A generic FPGA-based detector readout and real-time image processing board
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant
2016-07-01
For space-based astronomical observations, it is important to have a mechanism to capture the digital output from a standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images from different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery from optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Building upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are introduced using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
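A minimal sketch of the search side under simplifying assumptions: a small evolution strategy over integer x/y offsets with a shrinking mutation step, scored by plain normalized cross-correlation (standing in for a SAR/optical similarity measure), plus a one-step hill climb of one pixel in each direction as the "hybrid" local search.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score(ref, mov, dx, dy, size=64):
    h, w = ref.shape
    cy, cx = h // 2 - size // 2, w // 2 - size // 2
    if not (0 <= cy + dy <= h - size and 0 <= cx + dx <= w - size):
        return -1.0                                   # candidate out of bounds
    return ncc(ref[cy:cy + size, cx:cx + size],
               mov[cy + dy:cy + dy + size, cx + dx:cx + dx + size])

def evolve(ref, mov, gens=30, pop=20, sigma=8.0, seed=0):
    rng = np.random.default_rng(seed)
    parents = [(0, 0)]
    for _ in range(gens):
        kids = [(int(x + rng.normal(0, sigma)), int(y + rng.normal(0, sigma)))
                for x, y in parents for _ in range(pop)]
        ranked = sorted(set(parents + kids),
                        key=lambda p: score(ref, mov, *p), reverse=True)
        parents, sigma = ranked[:3], max(1.0, sigma * 0.9)  # select, cool down
    best = parents[0]
    for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):      # hybrid: local hill climb
        cand = (best[0] + d[0], best[1] + d[1])
        if score(ref, mov, *cand) > score(ref, mov, *best):
            best = cand
    return best

rng = np.random.default_rng(1)
ref = gaussian_filter(rng.random((256, 256)), 4.0)    # smooth synthetic scene
mov = np.roll(ref, shift=(4, 7), axis=(0, 1))         # shift by (dy, dx) = (4, 7)
print(evolve(ref, mov))                               # expect about (7, 4)
```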
Imaging optical sensor arrays.
Walt, David R
2002-10-01
Imaging optical fibres have been etched to prepare microwell arrays. These microwells have been loaded with sensing materials such as bead-based sensors and living cells to create high-density sensor arrays. The extremely small sizes and volumes of the wells enable high sensitivity and high information content sensing capabilities.
NASA Astrophysics Data System (ADS)
Chiou, Jin-Chern; Hung, Chen-Chun; Lin, Chun-Ying
2010-07-01
This work presents a MEMS-based image stabilizer providing an anti-shake function in photographic cell phones. The proposed stabilizer is designed as a two-axis decoupling XY stage, 1.4 × 1.4 × 0.1 mm3 in size, and strong enough to suspend an image sensor for the anti-shake photographic function. The stabilizer is fabricated by a complex fabrication process, including inductively coupled plasma (ICP) processes and a flip-chip bonding technique. Based on the special designs of a hollow handle layer and a corresponding wire-bonding-assisted holder, electrical signals of the suspended image sensor can be successfully routed out through 32 signal springs without incurring damage during wire-bonding packaging. The longest calculated traveling distance of the stabilizer is 25 µm, which is sufficient to resolve the anti-shake problem in a three-megapixel image sensor. Accordingly, the applied voltage for the 25 µm moving distance is 38 V. Moreover, the resonant frequency of the actuating device with the image sensor is 1.123 kHz.
An information based approach to improving overhead imagery collection
NASA Astrophysics Data System (ADS)
Sourwine, Matthew J.; Hintz, Kenneth J.
2011-06-01
Recent growth in commercial imaging satellite development has resulted in a complex and diverse set of systems. To simplify this environment for both customer and vendor, an information-based sensor management model was built to integrate tasking and scheduling systems. By establishing a relationship between image quality and information, tasking by NIIRS can be utilized to measure the customer's required information content. Focused on a reduction in uncertainty about a target of interest, the sensor manager finds the best sensors to complete the task given the functions of the active suite of imaging sensors. This is done by determining which satellite will meet customer information and timeliness requirements with a low likelihood of interference at the highest rate of return.
Real-time biochemical sensor based on Raman scattering with CMOS contact imaging.
Muyun Cao; Yuhua Li; Yadid-Pecht, Orly
2015-08-01
This work presents a biochemical sensor based on Raman scattering with complementary metal-oxide-semiconductor (CMOS) contact imaging. This biochemical optical sensor is designed for detecting the concentration of solutions. The system is built with a laser diode, an optical filter, a sample holder and a commercial CMOS sensor. The output of the system is analyzed by an image processing program. The system provides instant measurements with a resolution of 0.2 to 0.4 Mol. This low-cost, easy-to-operate, small-scale system is useful in chemical, biomedical and environmental labs for quantitative biochemical concentration detection, with reported results comparable to those of a high-cost commercial spectrometer.
An acousto-optic sensor based on resonance grating waveguide structure
Xie, Antonio Jou; Song, Fuchuan; Seo, Sang-Woo
2014-01-01
This paper presents an acousto-optic (AO) sensor based on a resonance grating waveguide structure. The sensor is fabricated using elastic polymer materials to achieve good sensitivity to ultrasound pressure waves. Ultrasound pressure waves modify the structural parameters of the sensor and result in an optical resonance shift, which converts into a light intensity modulation. A commercial ultrasound transducer at 20 MHz is used to characterize a fabricated sensor, and the detection sensitivity at different optical source wavelengths within the resonance spectrum is investigated. Practical use of the sensor at a fixed optical source wavelength is presented. Ultimately, the geometry of the planar sensor structure is suitable for two-dimensional optical pressure imaging applications such as pressure wave detection and mapping, and ultrasound imaging. PMID:25045203
Experimental image alignment system
NASA Technical Reports Server (NTRS)
Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.
1980-01-01
A microcomputer-based instrument for image alignment with respect to a reference image is described, which uses the DEFT (Direct Electronic Fourier Transform) sensor for image sensing and preprocessing. The instrument's alignment algorithm, which uses the two-dimensional Fourier transform as input, is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms that use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.
Chander, G.; Scaramuzza, P.L.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on October 17, 2003. It carries three remote sensing sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolutions and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved the calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors.
A review of potential image fusion methods for remote sensing-based irrigation management: Part II
USDA-ARS?s Scientific Manuscript database
Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high-resolution visible-near-infrared low light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way for multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.): it is now possible to make a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low light CCD/CMOS sensors have also seen constant progress in terms of readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, which leads to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified wide dynamic range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell-mode photodiode logarithmic pixel design that covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is slightly bigger than a simple ANVIS-format I2 tube with <500 mW power consumption.
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Kurt, Thome; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
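A minimal sketch of the band-averaging step shared by both versions of this abstract: a hyperspectral spectrum is collapsed to the equivalent broad-band value by weighting with the multispectral sensor's relative spectral response (RSR). The Gaussian RSR and linear spectrum below are placeholders for Hyperion radiances and a MODIS band RSR.

```python
import numpy as np

def band_average(wavelengths, radiance, rsr):
    """Simulated band value: sum(L * RSR) / sum(RSR) on a uniform grid."""
    return float(np.sum(radiance * rsr) / np.sum(rsr))

wl = np.linspace(620.0, 680.0, 61)               # nm, hyperspectral sampling
spectrum = 100.0 + 0.05 * (wl - 650.0)           # placeholder radiance spectrum
rsr = np.exp(-0.5 * ((wl - 645.0) / 10.0) ** 2)  # placeholder band response
print(band_average(wl, spectrum, rsr))           # band-averaged radiance
```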
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
A multimodal image sensor system for identifying water stress in grapevines
NASA Astrophysics Data System (ADS)
Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong
2012-11-01
Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The multi-modal sensor can capture and analyze the grape canopy from its reflectance features and identify the different water stress levels; this research aims to address the problems outlined above. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in the near-infrared, green and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.
Sasagawa, Kiyotaka; Shishido, Sanshiro; Ando, Keisuke; Matsuoka, Hitoshi; Noda, Toshihiko; Tokuda, Takashi; Kakiuchi, Kiyomi; Ohta, Jun
2013-05-06
In this study, we demonstrate a polarization-sensitive pixel for a complementary metal-oxide-semiconductor (CMOS) image sensor based on 65-nm standard CMOS technology. Using such a deep-submicron CMOS technology, it is possible to design fine metal patterns smaller than the wavelengths of visible light by using a metal wire layer. We designed and fabricated a metal wire grid polarizer on a 20 × 20 μm2 pixel for an image sensor. An extinction ratio of 19.7 dB was observed at a wavelength of 750 nm.
Design of polarization imaging system based on CIS and FPGA
NASA Astrophysics Data System (ADS)
Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding
2008-02-01
As polarization is an important characteristic of light, polarization image detection is a new image detection technology combining polarimetry and image processing. In contrast to traditional image detection based on ray radiation, polarization image detection can acquire a great deal of important information that traditional image detection cannot, and it will be widely used in both civilian and military fields. Because polarization image detection can solve problems that traditional image detection cannot, it has been researched widely around the world. This paper first introduces the physical theory of polarization image detection, and then describes image acquisition and polarization image processing based on a CMOS image sensor (CIS) and an FPGA. The polarization imaging system consists of hardware and software parts. The hardware includes the CMOS image sensor driver module, a VGA display module, an SRAM access module and a real-time image data acquisition system based on the FPGA; the circuit diagram and PCB were designed. The computation of the Stokes vector and polarization angle is analyzed in the software part, where the floating-point multiplications of the Stokes vector are optimized into shift and addition operations. The results of the experiment show that the real-time image acquisition system can collect and display image data from the CMOS image sensor in real time.
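A minimal sketch of the software path described here: Stokes parameters from intensity images at polarizer angles 0/45/90/135 degrees, the polarization angle derived from them, and the shift-and-add replacement for the floating-point multiply by 0.5, as fixed-point FPGA logic would implement it.

```python
import numpy as np

def stokes(i0, i45, i90, i135):
    """Stokes parameters and angle of polarization from four intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    aop = 0.5 * np.arctan2(s2, s1)         # angle of polarization (radians)
    return s0, s1, s2, aop

def s0_fixed(i0, i45, i90, i135):
    """Fixed-point s0: sum of four integer samples divided by 2 via a shift,
    using only addition and a right shift, as in the FPGA implementation."""
    return (int(i0) + int(i45) + int(i90) + int(i135)) >> 1
```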
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of these applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning on the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination existing in outdoor environments is also an important factor affecting image accuracy. This paper is exclusively focused on two main issues, with the goal of achieving the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters, and (b) design of strategies for controlling adverse illumination effects.
The Feasibility of 3D Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of directly geo-referenced, image-based 3D point clouds generated from the low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching (SGM) algorithm is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
A novel optical gating method for laser gated imaging
NASA Astrophysics Data System (ADS)
Ginat, Ran; Schneider, Ron; Zohar, Eyal; Nesher, Ofer
2013-06-01
For the past 15 years, Elbit Systems has been developing time-resolved active laser-gated imaging (LGI) systems for various applications. Traditional LGI systems are based on highly sensitive gated sensors synchronized to pulsed laser sources. Elbit's proprietary multi-pulse-per-frame method, implemented in its LGI systems, significantly improves imaging quality. A significant characteristic of LGI is its ability to penetrate disturbing media such as rain, haze and some types of fog. Current LGI systems are based on image intensifier (II) sensors, which limit the system in spectral response, image quality, reliability and cost. A novel proprietary optical gating module was developed at Elbit, removing the LGI system's dependency on the II. The optical gating module is not bound to a particular radiance wavelength and is positioned between the system optics and the sensor. This optical gating method supports the use of conventional solid-state sensors. By selecting the appropriate solid-state sensor, new LGI systems can operate at any desired wavelength. In this paper we present the new gating method's characteristics and performance, and its advantages over II-based gating. The use of gated imaging systems is described for a variety of applications, including results from the latest field experiments.
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for application to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like the petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables the capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. This sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.
Room temperature infrared imaging sensors based on highly purified semiconducting carbon nanotubes.
Liu, Yang; Wei, Nan; Zhao, Qingliang; Zhang, Dehui; Wang, Sheng; Peng, Lian-Mao
2015-04-21
High performance infrared (IR) imaging systems usually require expensive cooling systems, which are highly undesirable. Here we report the fabrication and performance characteristics of room-temperature carbon nanotube (CNT) IR imaging sensors. The CNT IR imaging sensor is based on aligned semiconducting CNT films of 99% purity, and each pixel or device of the imaging sensor consists of aligned strips of CNTs asymmetrically contacted by Sc and Pd. We found that the performance of the device depends on the CNT channel length. While short-channel devices provide a large photocurrent and a rapid response of about 110 μs, long-channel devices exhibit a low dark current and a high signal-to-noise ratio, which are critical for obtaining high detectivity. In total, 36 CNT IR imagers were constructed on a single chip, each consisting of a 3 × 3 pixel array. The demonstrated advantages of constructing a high performance IR system using purified semiconducting CNT aligned films include, among other things, fast response, excellent stability and uniformity, ideal linear photocurrent response, high imaging polarization sensitivity and low power consumption.
Magnetic resonance imaging-compatible tactile sensing device based on a piezoelectric array.
Hamed, Abbi; Masamune, Ken; Tse, Zion Tsz Ho; Lamperth, Michael; Dohi, Takeyoshi
2012-07-01
Minimally invasive surgery is a widely used medical technique, one of the drawbacks of which is the loss of the direct sense of touch during the operation. Palpation is the use of fingertips to explore and make fast assessments of tissue morphology. Although technologies have been developed to equip minimally invasive surgery tools with haptic feedback capabilities, the majority focus on tissue stiffness profiling and tool-tissue interaction force measurement. For greatly increased diagnostic capability, a magnetic resonance imaging-compatible tactile sensor design is proposed, which allows minimally invasive surgery to be performed under image guidance, combining the strong soft-tissue imaging capability of magnetic resonance imaging with intuitive palpation. The sensing unit is based on a piezoelectric sensor methodology, which conforms to the stringent mechanical and electrical design requirements imposed by the magnetic resonance environment. The sensor's mechanical design and the device's integration into a 0.2 Tesla open magnetic resonance imaging scanner are described, together with the device's magnetic resonance compatibility testing. Its design limitations and potential future improvements are also discussed. A tactile sensing unit based on a piezoelectric sensor principle is proposed, which is designed for magnetic resonance imaging-guided interventions.
Wavefront detection method of a single-sensor based adaptive optics system.
Wang, Chongchong; Hu, Lifa; Xu, Huanyu; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Mu, Quanquan; Yang, Chengliang; Cao, Zhaoliang; Lu, Xinghai; Xuan, Li
2015-08-10
In adaptive optics systems (AOSs) for optical telescopes, the reported wavefront sensing strategy consists of two parts: a specific sensor for tip-tilt (TT) detection and another wavefront sensor for detecting other distortions. Thus, part of the incident light has to be used for TT detection, which decreases the light energy available to the wavefront sensor and eventually reduces the precision of wavefront correction. In this paper, a wavefront measurement method based on a single Shack-Hartmann wavefront sensor is presented for measuring both large-amplitude TT and other distortions. Experiments were performed to test the presented method and to validate the wavefront detection and correction ability of the single-sensor-based AOS. With adaptive correction, the root-mean-square of the residual TT was less than 0.2 λ, and a clear image was obtained in the lab. Equipped on a 1.23-meter optical telescope, the AOS clearly resolved binary stars with an angular separation of 0.6″. This wavefront measurement method removes the separate TT sensor, which not only simplifies the AOS but also saves light energy for subsequent wavefront sensing and imaging, and eventually improves the detection and imaging capability of the AOS.
Circuit design for the retina-like image sensor based on space-variant lens array
NASA Astrophysics Data System (ADS)
Gao, Hongxun; Hao, Qun; Jin, Xuefeng; Cao, Jie; Liu, Yue; Song, Yong; Fan, Fan
2013-12-01
The retina-like image sensor is based on the non-uniformity of the human eye and the log-polar coordinate theory. It has the advantages of high-quality data compression and redundant information elimination. However, retina-like image sensors fabricated in a CMOS process have drawbacks such as high cost, low sensitivity, low signal output efficiency and inconvenient updating. Therefore, this paper proposes a retina-like image sensor based on a space-variant lens array, focusing on the circuit design that provides circuit support to the whole system. The circuit includes the following parts: (1) a photo-detector array with a lens array to convert optical signals to electrical signals; (2) a strobe circuit for time-gating of the pixels, with parallel paths for high-speed transmission of the data; (3) in every path, a high-precision digital potentiometer for I-V conversion, ratio normalization and sensitivity adjustment, a programmable gain amplifier for automatic gain control (AGC), and an A/D converter; (4) digital data displayed on an LCD and stored temporarily in DDR2 SDRAM; (5) a USB port to transfer the data to a PC; and (6) overall control of the system by an FPGA. This circuit has advantages such as lower cost, larger pixels, convenient updating and higher signal output efficiency. Experiments have proved that the grayscale output of every pixel basically matches the target and that a non-uniform image of the target is achieved in real time. The circuit can provide adequate technical support to retina-like image sensors based on space-variant lens arrays.
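For readers unfamiliar with the log-polar sampling that a retina-like sensor implements in hardware, the following is a minimal software sketch; the grid sizes and nearest-neighbour sampling rule are illustrative assumptions, not the paper's design:

```python
# Hedged sketch of log-polar (retina-like) resampling: pixel density falls
# off logarithmically with distance from the image center (the "fovea").
import numpy as np

def log_polar_sample(img: np.ndarray, n_rings: int = 32, n_sectors: int = 64):
    """Resample a square grayscale image onto a log-polar grid."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    r_min = 1.0                                   # smallest foveal ring
    rings = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    thetas = 2 * np.pi * np.arange(n_sectors) / n_sectors
    out = np.zeros((n_rings, n_sectors), dtype=img.dtype)
    for i, r in enumerate(rings):
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        out[i] = img[ys, xs]                      # nearest-neighbour sampling
    return out

print(log_polar_sample(np.arange(256 * 256).reshape(256, 256) % 255).shape)
```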
Detection of sudden death syndrome using a multispectral imaging sensor
USDA-ARS?s Scientific Manuscript database
Sudden death syndrome (SDS), caused by the fungus Fusarium solani f. sp. glycines, is a widespread mid- to late-season disease with distinctive foliar symptoms. This paper reported the development of an image analysis based method to detect SDS using a multispectral image sensor. A hue, saturation a...
Tokuda, T; Yamada, H; Sasagawa, K; Ohta, J
2009-10-01
This paper proposes and demonstrates a polarization-analyzing CMOS sensor based on image sensor architecture. The sensor was designed targeting applications in chiral analysis for microchemistry systems. The sensor features a monolithically embedded polarizer. Embedded polarizers with different angles were implemented to realize real-time absolute measurement of the incident polarization angle. Although the pixel-level performance was confirmed to be limited, estimation schemes based on the variation of the polarizer angle provided promising performance for real-time polarization measurements. An estimation scheme using 180 pixels at 1° steps provided an estimation accuracy of 0.04°. Polarimetric measurements of chiral solutions were also successfully performed to demonstrate the applicability of the sensor to optical chiral analysis.
2015-11-01
[Fragments of a report; recoverable content: an acronym list (National Guard; PLR, Division of Polar Programs; SSM/I, Special Sensor Microwave/Imager; SMMR, Scanning Multi-channel Microwave Radiometer; ERDC/CRREL) and a partial sentence: "...and the Special Sensor Microwave/Imager (SSM/I). The satellite-based technique uses a difference in the passive microwave brightness temperatures".]
Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types
NASA Astrophysics Data System (ADS)
Gehrke, S.; Beshah, B. T.
2016-06-01
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as of the BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points, with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing, as well as radiometric adjustment for ortho-image mosaic generation.
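The location-dependent correction model lends itself to a compact sketch. The following illustrative Python fragment (not the HxMap implementation; grid sizes and values are assumed) applies contrast (gain) and brightness (offset) corrections defined at a sparse grid of radiometric fix points, bilinearly interpolated in-between:

```python
# Illustrative sketch: per-image radiometric model with gain/offset values at
# a sparse grid of "radiometric fix points", bilinearly interpolated for the
# corrections in-between. Grid sizes are assumptions for the demo.
import numpy as np

def apply_radiometric_model(img, gain_grid, offset_grid):
    """img: (H, W) float array; gain/offset grids: (gy, gx) fix-point values."""
    h, w = img.shape
    gy, gx = gain_grid.shape
    # Fractional fix-point coordinates for every pixel
    fy = np.linspace(0, gy - 1, h)[:, None]
    fx = np.linspace(0, gx - 1, w)[None, :]
    y0, x0 = np.floor(fy).astype(int), np.floor(fx).astype(int)
    y1, x1 = np.minimum(y0 + 1, gy - 1), np.minimum(x0 + 1, gx - 1)
    wy, wx = fy - y0, fx - x0

    def bilerp(grid):
        return ((1 - wy) * (1 - wx) * grid[y0, x0] + (1 - wy) * wx * grid[y0, x1]
                + wy * (1 - wx) * grid[y1, x0] + wy * wx * grid[y1, x1])

    return bilerp(gain_grid) * img + bilerp(offset_grid)

img = np.full((100, 200), 50.0)
gains = np.ones((3, 4)); offsets = np.zeros((3, 4)); offsets[0, 0] = 5.0
print(apply_radiometric_model(img, gains, offsets)[0, 0])  # 55.0 at that corner
```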
High-Sensitivity Fiber-Optic Ultrasound Sensors for Medical Imaging Applications
Wen, H.; Wiesler, D.G.; Tveten, A.; Danver, B.; Dandridge, A.
2010-01-01
This paper presents several designs of high-sensitivity, compact fiber-optic ultrasound sensors that may be used for medical imaging applications. These sensors translate ultrasonic pulses into strains in single-mode optical fibers, which are measured with fiber-based laser interferometers at high precision. The sensors are simpler and less expensive to make than piezoelectric sensors, and are not susceptible to electromagnetic interference. It is possible to make focal sensors with these designs, and several schemes are discussed. Because of the minimum bending radius of optical fibers, the designs are suitable for single element sensors rather than for arrays. PMID:9691368
Webcam classification using simple features
NASA Astrophysics Data System (ADS)
Pramoun, Thitiporn; Choe, Jeehyun; Li, He; Chen, Qingshuang; Amornraksa, Thumrongrat; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors. This vast network of distributed cameras (i.e., web cams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a web cam as "indoor/outdoor" and as having "people/no people", based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined. A support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
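A minimal sketch of such a pipeline, assuming placeholder feature extractors and significance weights (the paper's actual weights and feature values are not given here), could look as follows:

```python
# Hedged sketch: per-feature vectors are scaled by significance weights,
# concatenated, and fed to an SVM. All feature values and weights are toys.
import numpy as np
from sklearn.svm import SVC

def combine(features: dict, weights: dict) -> np.ndarray:
    """Weight each feature vector, then concatenate into one descriptor."""
    return np.concatenate([weights[k] * np.asarray(v) for k, v in features.items()])

# Toy training data: two images described by color/edge/line/text features.
feats = [{"color": [0.8, 0.1], "edge": [0.3], "line": [0.2], "text": [0.0]},
         {"color": [0.2, 0.7], "edge": [0.9], "line": [0.6], "text": [0.4]}]
weights = {"color": 1.0, "edge": 0.8, "line": 0.5, "text": 0.3}  # assumed
X = np.stack([combine(f, weights) for f in feats])
y = np.array([0, 1])                      # 0 = indoor, 1 = outdoor (toy labels)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```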
Biometric image enhancement using decision rule based image fusion techniques
NASA Astrophysics Data System (ADS)
Sagayee, G. Mary Amirtha; Arumugam, S.
2010-02-01
Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and it is the primary choice for most privacy-sensitive applications. For fingerprint applications, choosing a proper sensor is critical. The proposed work addresses how image quality can be improved by introducing an image fusion technique at the sensor level. The resulting images, after applying the decision-rule-based image fusion technique, are evaluated and analyzed using their entropy levels and root-mean-square error.
Duan, Yuping; Bouslimi, Dalel; Yang, Guanyu; Shu, Huazhong; Coatrieux, Gouenou
2017-07-01
In this paper, we focus on the "blind" identification of the computed tomography (CT) scanner that has produced a CT image. To do so, we propose a set of noise features derived from the image chain acquisition and which can be used as CT-scanner footprint. Basically, we propose two approaches. The first one aims at identifying a CT scanner based on an original sensor pattern noise (OSPN) that is intrinsic to the X-ray detectors. The second one identifies an acquisition system based on the way this noise is modified by its three-dimensional (3-D) image reconstruction algorithm. As these reconstruction algorithms are manufacturer dependent and kept secret, our features are used as input to train a support vector machine (SVM) based classifier to discriminate acquisition systems. Experiments conducted on images issued from 15 different CT-scanner models of 4 distinct manufacturers demonstrate that our system identifies the origin of one CT image with a detection rate of at least 94% and that it achieves better performance than sensor pattern noise (SPN) based strategy proposed for general public camera devices.
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a Look-Up Table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image-scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time and the image reconstruction quality is very good compared to previous techniques based on spot or line scanning, for example. PMID:22666023
Scene-based Shack-Hartmann wavefront sensor for light-sheet microscopy
NASA Astrophysics Data System (ADS)
Lawrence, Keelan; Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter
2018-02-01
Light-sheet microscopy is an ideal imaging modality for long-term live imaging in model organisms. However, significant optical aberrations can be present when imaging into an organism that is hundreds of microns or greater in size. To measure and correct optical aberrations, an adaptive optics system must be incorporated into the microscope. Many biological samples lack point sources that can be used as guide stars with conventional Shack-Hartmann wavefront sensors. We have developed a scene-based Shack-Hartmann wavefront sensor for measuring the optical aberrations in a light-sheet microscopy system that does not require a point source and can measure the aberrations for different parts of the image. The sensor has 280 lenslets inside the pupil, creates an image from each lenslet with a 500 micron field of view and a resolution of 8 microns, and has a resolution for the wavefront gradient of 75 milliradians per lenslet. We demonstrate the system on both fluorescent bead samples and zebrafish embryos.
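The scene-based principle replaces guide-star centroiding with shift estimation between lenslet sub-images. A minimal sketch, assuming integer-pixel shifts and FFT cross-correlation (the actual sensor would likely add sub-pixel refinement and windowing), is:

```python
# Hedged sketch of scene-based wavefront slope estimation: the shift of each
# lenslet sub-image relative to a reference is proportional to the local
# wavefront gradient. Scale factors to physical slopes are omitted.
import numpy as np

def subimage_shift(ref: np.ndarray, sub: np.ndarray):
    """Integer-pixel shift of `sub` relative to `ref` via FFT cross-correlation."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(sub)).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    h, w = ref.shape
    dy = dy - h if dy > h // 2 else dy      # wrap to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
sub = np.roll(ref, (3, -2), axis=(0, 1))    # simulate a locally tilted wavefront
print(subimage_shift(ref, sub))             # -> (3, -2)
```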
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of the Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on FELICS (Fast, Efficient, Lossless Image compression System), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of the signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
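Golomb-Rice coding itself is compact enough to sketch. The following illustrative encoder (the onboard parameter-selection rule is not public, so a common mean-based heuristic is assumed) shows how a non-negative residual splits into a unary quotient and a k-bit remainder:

```python
# Sketch of Golomb-Rice coding as used in FELICS-style compressors. The
# pick_k heuristic is an assumption, not the flight implementation.
def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice codeword for value >= 0 with divisor 2**k (as a bit string)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def pick_k(recent_residuals) -> int:
    """Heuristic: choose k so that 2**k is near the mean residual magnitude."""
    mean = max(1, sum(recent_residuals) // max(1, len(recent_residuals)))
    return max(0, mean.bit_length() - 1)

residuals = [3, 0, 7, 2, 1]
k = pick_k(residuals)
print(k, [rice_encode(v, k) for v in residuals])
```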
Fiber-Laser-Based Ultrasound Sensor for Photoacoustic Imaging
Liang, Yizhi; Jin, Long; Wang, Lidai; Bai, Xue; Cheng, Linghao; Guan, Bai-Ou
2017-01-01
Photoacoustic imaging, especially for intravascular and endoscopic applications, requires ultrasound probes with miniature size and high sensitivity. In this paper, we present a new photoacoustic sensor based on a small-sized fiber laser. Incident ultrasound waves exert pressures on the optical fiber laser and induce harmonic vibrations of the fiber, which is detected by the frequency shift of the beating signal between the two orthogonal polarization modes in the fiber laser. This ultrasound sensor presents a noise-equivalent pressure of 40 Pa over a 50-MHz bandwidth. We demonstrate this new ultrasound sensor on an optical-resolution photoacoustic microscope. The axial and lateral resolutions are 48 μm and 3.3 μm. The field of view is up to 1.57 mm2. The sensor exhibits strong resistance to environmental perturbations, such as temperature changes, due to common-mode cancellation between the two orthogonal modes. The present fiber laser ultrasound sensor offers a new tool for all-optical photoacoustic imaging. PMID:28098201
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a solid-state image sensor widely used in object tracking, object recognition, intelligent navigation and other fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to a non-neighboring clique defined on locally consistent blocks, based on the two clues that both the atmospheric light and the transmission map satisfy the character of local consistency. In this framework, our model can strengthen the constraints over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus effectively solving inadequate detail recovery and alleviating color distortion. Moreover, the locally consistent MRF framework obtains details while maintaining better overall dehazing results, which effectively improves the quality of images captured by CMOS image sensors. Experimental results verify that the proposed method combines the advantages of detail recovery and color preservation.
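The MRF machinery in the paper estimates the transmission map and atmospheric light; once those are available, radiance recovery follows the standard single-image scattering model I = J·t + A·(1 − t). A minimal sketch of that final recovery step (not of the MRF estimation itself) is:

```python
# Common recovery step shared by most single-image dehazing methods: invert
# the scattering model I = J*t + A*(1 - t), given the estimated transmission
# map t and atmospheric light A. Values below are toy inputs.
import numpy as np

def recover_radiance(hazy: np.ndarray, t: np.ndarray, A: np.ndarray,
                     t_min: float = 0.1) -> np.ndarray:
    """Invert I = J*t + A*(1-t); t_min avoids amplifying noise where t ~ 0."""
    t = np.clip(t, t_min, 1.0)[..., None]          # (H, W, 1) for broadcasting
    return np.clip((hazy - A) / t + A, 0.0, 1.0)

hazy = np.full((4, 4, 3), 0.8)
t = np.full((4, 4), 0.5)
A = np.array([0.9, 0.9, 0.9])
print(recover_radiance(hazy, t, A)[0, 0])          # 0.7: less washed out than 0.8
```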
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
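The second registration method reduces to fitting a spatial transformation to control points by regression. A minimal sketch, assuming an affine model (the paper's exact transformation may differ), is:

```python
# Hedged sketch: least-squares fit of a 2x3 affine transform from
# user-selected control point pairs, as in control-point registration.
import numpy as np

def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine matrix mapping src (N,2) onto dst (N,2)."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])      # (N, 3): [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return params.T                                # rows: [a b tx; c d ty]

src = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
dst = src * 1.5 + np.array([10.0, -5.0])           # known scale + shift
M = fit_affine(src, dst)
print(np.round(M, 3))                              # recovers scale 1.5, offsets
```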
Structured Light-Based Hazard Detection For Planetary Surface Navigation
NASA Technical Reports Server (NTRS)
Nefian, Ara; Wong, Uland Y.; Dille, Michael; Bouyssounouse, Xavier; Edwards, Laurence; To, Vinh; Deans, Matthew; Fong, Terry
2017-01-01
This paper describes a structured-light-based sensor for hazard avoidance in planetary environments. The system presented here can also be used in terrestrial applications constrained by limited onboard power and computational resources and by low illumination conditions. The sensor consists of a calibrated camera and laser dot projector system. The onboard hazard avoidance system determines the position of the projected dots in the image and, through a triangulation process, detects potential hazards. The paper presents the design parameters for this sensor and describes the image-based solution for hazard avoidance. The system presented here was tested extensively in day and night conditions in Lunar analogue environments. The current system achieves an over 97% detection rate with 1.7 false alarms over 2000 images.
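The triangulation step reduces, for each dot, to a disparity-to-range conversion once the camera/projector geometry is calibrated. A minimal sketch with illustrative (not flight) parameters:

```python
# Hedged sketch of laser-dot triangulation: range follows from the dot's
# horizontal disparity relative to its reference (infinite-range) position.
def dot_range(u_detected: float, u_reference: float,
              focal_px: float, baseline_m: float) -> float:
    """Range (m) from the horizontal disparity of one projected dot (pixels)."""
    disparity = u_detected - u_reference
    if disparity <= 0:
        raise ValueError("dot at or beyond reference position")
    return focal_px * baseline_m / disparity

# Dot observed 12.5 px from its reference column; 700 px focal, 0.3 m baseline
print(round(dot_range(512.5, 500.0, 700.0, 0.3), 2))   # ~16.8 m
```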
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is a normative approach: the pilot's performance with the sensor fusion image is compared to model predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than with one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single-sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.
Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo
2016-01-01
In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for simultaneous real-time DNA amplification and detection. The amplification-coupled detection system directly measures changes in photon number based on the generation of magnesium pyrophosphate and the accompanying color changes; the photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis detection were carried out to prove the efficiency of the CMOS sensor-based LAMP system. Moreover, Clostridium perfringens was utilized as a proof-of-concept target for the new system. We anticipate that this CMOS image sensor-based LAMP method will enable the creation of cost-effective, label-free, optical, real-time and portable molecular diagnostic devices.
A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantization levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy this demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose using an adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real-time video mode. At least 67% of the scene entropy can be retained with this method.
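A minimal sketch of the control idea, assuming a discrete PID loop driven by the saturated-pixel fraction (the published controller also adapts its gains and uses image entropy, which is omitted here):

```python
# Hedged sketch: a PID loop drives exposure time so that the fraction of
# saturated pixels tracks a setpoint. Gains and setpoint are assumptions.
import numpy as np

class ExposurePID:
    def __init__(self, kp=2.0, ki=0.5, kd=0.1, setpoint=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # target saturated-pixel fraction
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, frame: np.ndarray, exposure_us: float) -> float:
        saturated = np.mean(frame >= 255)           # 8-bit saturation level
        err = self.setpoint - saturated             # > 0: can expose longer
        self.integral += err
        deriv = err - self.prev_err
        self.prev_err = err
        scale = 1.0 + self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(1.0, exposure_us * scale)

pid = ExposurePID()
dark = np.full((480, 640), 10, np.uint8)
print(pid.update(dark, 1000.0) > 1000.0)   # dark frame -> longer exposure
```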
NASA Astrophysics Data System (ADS)
Linares, Rodrigo; Vergara, German; Gutiérrez, Raúl; Fernández, Carlos; Villamayor, Víctor; Gómez, Luis; González-Camino, Maria; Baldasano, Arturo; Castro, G.; Arias, R.; Lapido, Y.; Rodríguez, J.; Romero, Pablo
2015-05-01
The combination of flexibility, productivity, precision and zero-defect manufacturing in future laser-based equipment is a major challenge facing this enabling technology. New sensors for online monitoring and real-time control of laser-based processes are necessary for improving product quality and increasing manufacturing yields. New approaches to fully automated processing towards zero-defect manufacturing demand smarter heads in which lasers, optics, actuators, sensors and electronics are integrated in a unique compact and affordable device. Many defects arising in laser-based manufacturing processes come from instabilities in the dynamics of the laser process. Temperature and heat dynamics are key parameters to be monitored. Low-cost infrared imagers with a high speed of response will constitute the next generation of sensors to be implemented in future monitoring and control systems for laser-based processes, capable of providing simultaneous information about heat dynamics and its spatial distribution. This work describes the results of using an innovative low-cost high-speed infrared imager based on the first quantum infrared imager on the market monolithically integrated with a Si-CMOS ROIC. The sensor is able to provide low-resolution images at frame rates up to 10 kHz in uncooled operation at the same cost as traditional infrared spot detectors. In order to demonstrate the capabilities of the new sensor technology, a low-cost camera was assembled on a standard production laser welding head, allowing melt-pool images to be registered at frame rates of 10 kHz. In addition, specific software was developed for defect detection and classification. Multiple laser welding processes were recorded with the aim of studying the performance of the system and its application to the real-time monitoring of laser welding processes. During the experiments, different types of defects were produced and monitored. The classifier was fed with the experimental images obtained. Self-learning strategies were implemented with very promising results, demonstrating the feasibility of using low-cost high-speed infrared imagers to advance towards real-time / in-line zero-defect production systems.
Holographic leaky-wave metasurfaces for dual-sensor imaging.
Li, Yun Bo; Li, Lian Lin; Cai, Ben Geng; Cheng, Qiang; Cui, Tie Jun
2015-12-10
Metasurfaces have huge potential for developing new types of imaging systems owing to their ability to control electromagnetic waves. Here, we propose a new method for dual-sensor imaging based on cross-like holographic leaky-wave metasurfaces composed of hybrid isotropic and anisotropic surface impedance textures. The holographic leaky-wave radiations are generated by special impedance modulations of the surface waves excited at the sensor ports. For one independent sensor, the main leaky-wave radiation beam can be scanned by frequency in one spatial dimension, while frequency scanning in the orthogonal spatial dimension is accomplished by the other sensor. Thus, for a probed object, the imaging plane can be illuminated adequately and the two-dimensional backward scattered fields obtained by the dual sensors for reconstructing the object. The correlation between beams at different frequencies is very low, owing to the frequency-scanned beam performance rather than random frequency-driven beam radiation, and multi-illuminations with low correlation are very appropriate for a multi-mode imaging method with high resolution and noise robustness. Good reconstruction results are given to validate the proposed imaging method.
Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit the properties of high resolution and low noise level at daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on the information (e.g., edge) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality over the superimposed images. Additionally, based on the same steps, simulation result shows a blended IR image of better quality when only the original IR image is available. PMID:23112602
Multi-sensor fusion of infrared and electro-optic signals for high resolution night images.
Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor
2012-01-01
Electro-optic (EO) image sensors exhibit the properties of high resolution and low noise level at daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on the information (e.g., edge) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework improves resolution by superimposing/blending the edges of the EO image onto the corresponding transformed IR image. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality over the superimposed images. Additionally, based on the same steps, simulation result shows a blended IR image of better quality when only the original IR image is available.
EOID Evaluation and Automated Target Recognition
2002-09-30
Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects (MLOs) that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist
EOID Evaluation and Automated Target Recognition
2001-09-30
Electro-Optic IDentification (EOID) sensors into shallow water littoral zone minehunting systems on towed, remotely operated, and autonomous platforms. These downlooking laser-based sensors operate at unparalleled standoff ranges in visible wavelengths to image and identify mine-like objects that have been detected through other sensing means such as magnetic induction and various modes of acoustic imaging. Our long term goal is to provide a robust automated target cueing and identification capability for use with these imaging sensors. It is also our goal to assist the
Geiger-Mode Avalanche Photodiode Arrays Integrated to All-Digital CMOS Circuits
2016-01-20
[Fragments of a report; recoverable content: Figure 7 caption, "4×4 GMAPD array wire bonded to CMOS timing circuits"; Figure 8 caption, "Low-fill-factor APD design used in lidar sensors"; the APD pixels are grown epitaxially and isolated by mesa etch; 128×32 lidar image sensors were built by bump bonding the APD arrays to CMOS timing circuits; a passive image sensor of this large a format is based on hybridization of a GMAPD array to a CMOS readout; Fig. 14 shows one of the first images taken.]
Arrays of Nano Tunnel Junctions as Infrared Image Sensors
NASA Technical Reports Server (NTRS)
Son, Kyung-Ah; Moon, Jeong S.; Prokopuk, Nicholas
2006-01-01
Infrared image sensors based on high-density rectangular planar arrays of nano tunnel junctions have been proposed. These sensors would differ fundamentally from prior infrared sensors based, variously, on bolometry or conventional semiconductor photodetection. Infrared image sensors based on conventional semiconductor photodetection must typically be cooled to cryogenic temperatures to reduce noise to acceptably low levels. Some bolometer-type infrared sensors can be operated at room temperature, but they exhibit low detectivities and long response times, which limit their utility. The proposed infrared image sensors could be operated at room temperature without incurring excessive noise, and would exhibit high detectivities and short response times. Other advantages would include low power demand, high resolution, and tailorability of spectral response. Being neither bolometers nor conventional semiconductor photodetectors, the proposed basic detector units would partly resemble rectennas. Nanometer-scale tunnel junctions would be created by crossing nanowires with quantum-mechanical barrier layers in the form of thin layers of electrically insulating material between them. A microscopic dipole antenna, sized and shaped to respond maximally in the infrared wavelength range that one seeks to detect, would be formed integrally with the nanowires at each junction. An incident signal in that wavelength range would become coupled into the antenna and, through the antenna, to the junction. At the junction, the flow of electrons between the crossing wires would be dominated by quantum-mechanical tunneling rather than thermionic emission. Relative to thermionic emission, quantum-mechanical tunneling is a fast process.
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method of detecting occupants in a building automation system environment using image based occupancy detection and position determinations. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control the elements to optimize conditions for the occupants, optimize energy usage, among other advantages.
Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing
Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge
2011-01-01
This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high-speed 1.2-megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families make it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal PowerPC (PPC) microprocessor. This in turn runs a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739
A CMOS high speed imaging system design based on FPGA
NASA Astrophysics Data System (ADS)
Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui
2015-10-01
CMOS sensors have advantages over traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is a XC6SLX75T, and we take advantage of a CameraLink interface and the AM41V4 CMOS image sensor to acquire and transmit image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames per second CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which can convert 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is captured by the CMOS detector, which converts the light to electrical signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to an upper computer equipped with acquisition cards. The PC then stores, visualizes and processes the images. The structure and principle of the system are explained in this paper, along with its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the data from the sensor is converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.
Active-Pixel Image Sensor With Analog-To-Digital Converters
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.
1995-01-01
Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning a row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
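First-order Sigma-Delta modulation is simple enough to model behaviorally. In the toy sketch below, a constant normalized pixel voltage drives an integrator against 1-bit feedback, and the mean of the output bit-stream approximates the input; the decimation filter is reduced to a plain average:

```python
# Behavioral toy model of first-order Sigma-Delta A/D conversion: the bit
# density of the output stream encodes the (normalized) input level.
def sigma_delta(samples, full_scale=1.0):
    """First-order Sigma-Delta modulation of a sequence of analog samples."""
    integrator, bits = 0.0, []
    for x in samples:
        integrator += x - (full_scale if bits and bits[-1] else 0.0)
        bits.append(1 if integrator > 0 else 0)
    return bits

pixel_voltage = 0.3                       # normalized, constant over the frame
bits = sigma_delta([pixel_voltage] * 1000)
print(sum(bits) / len(bits))              # ~0.3: bit density tracks the input
```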
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
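The core of gradient-based interpolation can be sketched in a few lines. The fragment below is an illustrative reimplementation (not the authors' code): a missing polarization-channel value is interpolated along the direction of the smaller intensity gradient, in the spirit of edge-aware Bayer demosaicing; the ±2-pixel offsets assume same-channel samples on a 2×2 mosaic grid, and the exact offsets depend on pixel phase:

```python
# Hedged sketch for a DoFP mosaic (0/45/90/135 polarizers tiled in 2x2
# superpixels): interpolate a missing channel value along the direction of
# the smaller gradient, from same-channel neighbors two pixels away.
import numpy as np

def interp_missing(channel: np.ndarray, y: int, x: int) -> float:
    """Estimate channel value at (y, x) from same-channel neighbors 2 px away."""
    up, down = channel[y - 2, x], channel[y + 2, x]
    left, right = channel[y, x - 2], channel[y, x + 2]
    grad_v, grad_h = abs(up - down), abs(left - right)
    if grad_v < grad_h:                # smoother vertically: interpolate there
        return (up + down) / 2.0
    return (left + right) / 2.0

img = np.tile(np.arange(8, dtype=float), (8, 1))   # horizontal ramp
print(interp_missing(img, 4, 4))                   # picks the vertical pair: 4.0
```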
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in the focal plane (or near the focal plane), of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction will facilitate high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64-pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). The MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64-pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
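A behavioral software model of the Minima/Maxima Unit helps to fix ideas; whether the hardware neighbourhoods overlap is not stated here, so non-overlapping 2×2 blocks are assumed:

```python
# Reference software model (an assumption, not the analog circuit): min and
# max over each non-overlapping 2x2 pixel block of an even-sized frame.
import numpy as np

def mmu(frame: np.ndarray):
    """Min and max maps over non-overlapping 2x2 blocks."""
    h, w = frame.shape
    blocks = frame.reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

frame = np.arange(16).reshape(4, 4)
mins, maxs = mmu(frame)
print(mins)   # [[0 2], [8 10]]
print(maxs)   # [[5 7], [13 15]]
```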
DNA as Sensors and Imaging Agents for Metal Ions
Xiang, Yu
2014-01-01
Increasing interest in detecting metal ions in many chemical and biomedical fields has created demand for developing sensors and imaging agents for metal ions with high sensitivity and selectivity. This review covers recent progress in DNA-based sensors and imaging agents for metal ions. Through both combinatorial selection and rational design, a number of metal ion-dependent DNAzymes and metal ion-binding DNA structures that can selectively recognize specific metal ions have been obtained. By attaching these DNA molecules to signal reporters such as fluorophores, chromophores, electrochemical tags, and Raman tags, a number of DNA-based sensors for both diamagnetic and paramagnetic metal ions have been developed for fluorescent, colorimetric, electrochemical, and surface Raman detection. These sensors are highly sensitive (with detection limits down to 11 ppt) and selective (with selectivity up to millions-fold) toward specific metal ions. In addition, through further developments to simplify operation, such as the use of “dipstick tests”, portable fluorometers, computer-readable discs, and widely available glucose meters, these sensors have been applied to on-site and real-time environmental monitoring and point-of-care medical diagnostics. The use of these sensors for in situ cellular imaging has also been reported. The generality of combinatorial selection for obtaining DNAzymes for almost any metal ion in any oxidation state, and the ease of modifying DNA with different signal reporters, make DNA an emerging and promising class of molecules for metal ion sensing and imaging in many fields of application. PMID:24359450
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors are first derived from high-frame-rate, high-resolution imagery, and then used as the basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion; this involves preprocessing imagery to varying resolution scales and initializing new flow vector estimates with those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
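A conceptual sketch of the temporal upsampling step, assuming OpenCV's Farneback dense flow and a shared pixel grid after spatial registration (occlusion handling and the multi-scale initialization described above are omitted):

```python
# Hedged sketch: dense flow from the fast sensor, scaled by the fractional
# time alpha, warps the slow sensor's frame to the intermediate time base.
# Backward sampling with negated flow approximates a forward warp.
import cv2
import numpy as np

def interpolate_frame(slow_frame, fast_prev, fast_next, alpha):
    """Warp slow_frame to time alpha in (0, 1) using flow from the fast sensor."""
    flow = cv2.calcOpticalFlowFarneback(fast_prev, fast_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = slow_frame.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx - alpha * flow[..., 0]).astype(np.float32)
    map_y = (gy - alpha * flow[..., 1]).astype(np.float32)
    return cv2.remap(slow_frame, map_x, map_y, cv2.INTER_LINEAR)

prev = np.zeros((64, 64), np.uint8); prev[20:30, 20:30] = 255
nxt = np.roll(prev, 6, axis=1)                 # block moves 6 px to the right
mid = interpolate_frame(prev, prev, nxt, 0.5)  # content shifted roughly halfway
```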
Novel compact panomorph lens based vision system for monitoring around a vehicle
NASA Astrophysics Data System (ADS)
Thibault, Simon
2008-04-01
Automotive applications are one of the largest vision-sensor market segments and one of the fastest growing ones. The trend to use increasingly more sensors in cars is driven both by legislation and by consumer demands for higher safety and better driving experiences. Awareness of what directly surrounds a vehicle affects safe driving and manoeuvring. Consequently, panoramic 360° field-of-view imaging can contribute more to the perception of the world around the driver than any other sensor. However, to obtain a complete view around the car, several sensor systems are usually necessary. To address this issue, a customized imaging system based on a panomorph lens can provide maximum information to the driver with a reduced number of sensors. A panomorph lens is a hemispheric wide-angle anamorphic lens with enhanced resolution in predefined zones of interest. Because panomorph lenses are optimized to a custom angle-to-pixel relationship, vision systems provide ideal image coverage that reduces and optimizes the processing. We present various scenarios which may benefit from the use of a custom panoramic sensor. We also discuss the technical requirements of such a vision system. Finally, we demonstrate how the panomorph-based visual sensor is probably one of the most promising ways to fuse many sensors in one. For example, a single panoramic sensor on the front of a vehicle could provide all the necessary information for assistance in crash avoidance, lane tracking, early warning, parking aids, road sign detection, and various video monitoring views.
Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors
NASA Technical Reports Server (NTRS)
Thome, Kurtis John
2014-01-01
Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. The presentation covers cross-calibration approaches, uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.
A Sensitive Measurement for Estimating Impressions of Image-Contents
NASA Astrophysics Data System (ADS)
Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao
We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a very good way to evaluate the subjective impression of image content. However, because the SD method is applied after subjects have viewed the content, it is difficult to examine impressions of detailed scenes in real time. To measure the viewer's impression of image content in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate the relations among the image content, grip strength and body temperature. We also explore the interface of the Taikan sensor to make it easy to use. In our experiment, a horror movie that strongly affects the subjects' emotions is used. Our results show that grip strength may increase when the subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base that is originally installed.
A complete passive blind image copy-move forensics scheme based on compound statistics features.
Peng, Fei; Nie, Yun-ying; Long, Min
2011-10-10
Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly confines the application circumstances. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet-transform-based de-noising filter is used to extract the sensor pattern noise. Then the variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are chosen as features, and non-overlapping sliding window operations are applied to divide the images into sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
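For concreteness, here is a minimal sketch of the four features computed per sub-block, assuming a simple wavelet soft-threshold de-noiser as the pattern-noise extractor; the wavelet choice, threshold, and the exact form of the "average energy gradient" are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import pywt

def pattern_noise(gray, wavelet="db8", level=3, thresh=5.0):
    """Extract pattern noise as image minus its wavelet-denoised version."""
    gray = gray.astype(float)
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    den = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(den, wavelet)[:gray.shape[0], :gray.shape[1]]
    return gray - denoised, denoised

def block_features(gray):
    """Feature vector: noise variance, SNR (dB), entropy, energy gradient."""
    noise, denoised = pattern_noise(gray)
    var = noise.var()
    snr = 10 * np.log10(denoised.var() / (noise.var() + 1e-12))
    hist, _ = np.histogram(gray, bins=256, range=(0, 255), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    gy, gx = np.gradient(gray.astype(float))
    aeg = np.mean(gx**2 + gy**2)   # average gradient energy (proxy definition)
    return np.array([var, snr, entropy, aeg])
```

Tamper detection then correlates each sub-block's feature vector against the whole-image feature vector and flags blocks whose correlation falls below a threshold.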
NASA Technical Reports Server (NTRS)
1999-01-01
Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.
Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui
2018-04-16
A fringe-visibility-enhanced fiber-optic Fabry-Perot interferometer-based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm that together form a Fabry-Perot interferometer. Owing to the increased spectral sideband slope of the sensor and the small Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filter technique is used to interrogate the sensor. The reconstructed ultrasonic images can clearly differentiate the shapes of the models with high resolution.
Autonomous chemical and biological miniature wireless-sensor
NASA Astrophysics Data System (ADS)
Goldberg, Bar-Giora
2005-05-01
The presentation discusses a new concept and a paradigm shift in biological, chemical and explosive sensor system design and deployment: from large, heavy, centralized and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas enables these devices to be close to the targeted threats and therefore improve detection efficiencies and enable rapid counter responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, automotive, environmental and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical and/or explosive sensor utilizing a digital imager, analyze the images and distribute alarm or image data wirelessly through the network. All the imaging, processing and communications would take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to a combination and convergence of required new technologies, as mentioned above. Passive biological and chemical sensors with very high sensitivity that require no assaying are in development, using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine if an alarm condition is present, an onboard DSP processor, using specialized image processing algorithms and statistical analysis, determines if color gradient changes occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.
On-road anomaly detection by multimodal sensor analysis and multimedia processing
NASA Astrophysics Data System (ADS)
Orhan, Fatih; Eren, P. E.
2014-03-01
The use of smartphones in Intelligent Transportation Systems is gaining popularity, yet many challenges exist in developing functional applications. Due to the dynamic nature of transportation, vehicular social applications face complexities such as developing robust sensor management, performing signal and image processing tasks, and sharing information among users. This study utilizes a framework that enables the analysis of sensor data in a multimodal fashion. It also provides plugin-based analysis interfaces for developing sensor- and image-processing-based applications, and connects its users via a centralized application as well as to social networks to facilitate communication and socialization. Using this framework, an on-road anomaly detector is being developed and tested. The detector utilizes the sensors of a mobile device and is able to identify anomalies such as hard braking, pothole crossing, and speed bump crossing. Upon such detection, the video portion containing the anomaly is automatically extracted in order to enable further image processing analysis. The detection results are shared on a central portal application for online traffic condition monitoring.
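A minimal sketch of one way such an anomaly detector could work, assuming a fixed-rate accelerometer stream and a robust z-score threshold; the sampling rate, threshold, and function names are illustrative, not the study's tuned implementation.

```python
import numpy as np

FS = 50  # assumed accelerometer samples per second

def detect_anomalies(accel_xyz, z_thresh=3.5):
    """Return sample indices where the acceleration magnitude deviates
    strongly from its robust baseline -- candidate hard brakes, potholes,
    or speed bump crossings."""
    mag = np.linalg.norm(accel_xyz, axis=1)        # gravity + vehicle dynamics
    median = np.median(mag)
    mad = np.median(np.abs(mag - median)) + 1e-9   # robust spread estimate
    z = 0.6745 * (mag - median) / mad              # MAD-based z-score
    return np.flatnonzero(np.abs(z) > z_thresh)
```

Each detection index maps back to a timestamp, around which the corresponding video segment (e.g., a few seconds on either side) would be clipped for further image processing.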
System and method for optical fiber based image acquisition suitable for use in turbine engines
Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin
2017-05-16
A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis, and a set of estimated image signals is generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion video of a region of interest within a turbine engine.
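The sensing-basis-to-representation-basis step can be illustrated with a small linear-algebra sketch, assuming a known modulation matrix Phi (one row per pattern of the optical modulation device) and a DCT representation basis; ridge-regularized least squares stands in for whatever estimator the patent actually claims.

```python
import numpy as np
from scipy.fft import idct

def estimate_image(y, Phi, n, lam=1e-3):
    """y: measurements in the sensing basis; Phi: (m, n) sensing matrix.
    Returns an n-pixel image estimate via representation-basis coefficients."""
    # Representation basis: 1-D DCT synthesis matrix (columns are atoms).
    Psi = idct(np.eye(n), axis=0, norm="ortho")
    A = Phi @ Psi                       # how each atom appears in the measurements
    # Regularized normal equations: coefficients in the representation basis.
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return Psi @ coeffs                 # synthesize back to the pixel domain
```

With fewer measurements than pixels (m < n), the regularization (or a sparsity prior) is what makes the estimate well posed.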
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design, techniques such as image fusion must be applied to improve them. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and the Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than that of the corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements of image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process remarkably matched the ground truth, indicating the possibility of real-time onboard fusion processing.
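A minimal sketch of wavelet fusion for one hyperspectral band, assuming the band has already been co-registered and upsampled to the multispectral image's grid; detail-coefficient substitution is one common variant and is not necessarily the dissertation's exact algorithm.

```python
import pywt

def fuse_band(hyper_band_up, multi_band, wavelet="haar", level=2):
    """Inject the multispectral image's spatial detail into the hyperspectral
    band while keeping the band's low-frequency (spectral) content."""
    c_hyper = pywt.wavedec2(hyper_band_up, wavelet, level=level)
    c_multi = pywt.wavedec2(multi_band, wavelet, level=level)
    # Approximation coefficients from the hyperspectral band,
    # detail coefficients from the higher-resolution multispectral band.
    fused = [c_hyper[0]] + c_multi[1:]
    out = pywt.waverec2(fused, wavelet)
    return out[:hyper_band_up.shape[0], :hyper_band_up.shape[1]]
```

The regression-based alternative mentioned above would instead fit the hyperspectral band as a linear function of the multispectral band and inject the residual detail through that fit.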
Phase aided 3D imaging and modeling: dedicated systems and case studies
NASA Astrophysics Data System (ADS)
Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang
2014-05-01
Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo and have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple 3D-sensor nodes. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, including the generation of a high-quality color model of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.
Sensor-based architecture for medical imaging workflow analysis.
Silva, Luís A Bastião; Campos, Samuel; Costa, Carlos; Oliveira, José Luis
2014-08-01
The growing use of computer systems in medical institutions has been generating a tremendous quantity of data. While these data have a critical role in assisting physicians in clinical practice, the information that can be extracted goes far beyond this utilization. This article proposes a platform capable of assembling multiple data sources within a medical imaging laboratory through a network of intelligent sensors. The proposed integration framework follows a hybrid SOA architecture based on an information sensor network, capable of collecting information from several sources in medical imaging laboratories. Currently, the system supports three types of sensors: DICOM repository metadata, network workflows and examination reports. Each sensor is responsible for converting unstructured information from data sources into a common format that is then semantically indexed in the framework engine. The platform was deployed in the cardiology department of a central hospital, allowing the identification of process characteristics and user behaviours that were unknown before this solution was deployed.
Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.
2007-01-01
A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN) test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. From these reflectances, a set of cross-calibration equations was developed, giving a relative gain and bias between each sensor pair.
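The final step reduces to a linear fit. A minimal sketch, assuming two arrays of mean at-sensor reflectances from the same homogeneous ROIs observed near-simultaneously by both sensors (the numbers below are made up for illustration):

```python
import numpy as np

def cross_cal(refl_sensor_a, refl_sensor_b):
    """Least-squares fit: reflectance_b ~ gain * reflectance_a + bias."""
    gain, bias = np.polyfit(refl_sensor_a, refl_sensor_b, deg=1)
    return gain, bias

# Example with fabricated ROI means:
a = np.array([0.12, 0.25, 0.33, 0.41, 0.48])
b = np.array([0.13, 0.26, 0.35, 0.43, 0.50])
print(cross_cal(a, b))  # gain near 1 and bias near 0 indicate consistent calibration
```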
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
Design and Fabrication of Vertically-Integrated CMOS Image Sensors
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860
On-orbit characterization of hyperspectral imagers
NASA Astrophysics Data System (ADS)
McCorkel, Joel
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne- and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied for the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a baseline for calibration performance for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of those of reflectance-based results in most spectral regions. Larger disagreements exist for shorter wavelengths studied in this work as well as in spectral areas that experience absorption by the atmosphere.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
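The magnitude/phase detection at the intermediate frequency amounts to per-pixel lock-in processing. A minimal sketch, assuming a stack of frames sampled fast enough to resolve the intermediate frequency; this illustrates the style of processing, not the LEI system's actual DSP implementation.

```python
import numpy as np

def demodulate(frames, f_if, fs):
    """frames: (T, H, W) image stack sampled at fs; f_if: intermediate
    frequency. Returns per-pixel magnitude and phase images."""
    t = np.arange(frames.shape[0]) / fs
    ref_i = np.cos(2 * np.pi * f_if * t)[:, None, None]
    ref_q = np.sin(2 * np.pi * f_if * t)[:, None, None]
    i = (frames * ref_i).mean(axis=0)   # time averaging acts as the low-pass
    q = (frames * ref_q).mean(axis=0)
    mag = 2 * np.hypot(i, q)            # factor 2 recovers the amplitude
    phase = np.arctan2(-q, i)           # sign convention for A*cos(wt + phi)
    return mag, phase
```

The per-pixel artifact correction mentioned in the abstract would be applied to the magnitude and phase images after this step.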
Coded aperture detector: an image sensor with sub 20-nm pixel resolution.
Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick
2014-08-11
We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
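The "scan, measure with a photodiode, reconstruct by a simple convolution" principle can be shown in a one-dimensional analogue. The sketch below uses an m-sequence mask, whose two-valued cyclic autocorrelation gives the same matched-decoding property as a URA; this is a simplified stand-in, not the CXRO device's actual 2-D pattern.

```python
import numpy as np

def m_sequence(n=5, taps=(5, 3)):        # primitive x^5 + x^3 + 1, period 31
    state, bits = [1] * n, []
    for _ in range(2 ** n - 1):
        bits.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(bits)

mask = m_sequence()                       # 1 = open aperture, 0 = closed
decode = 2 * mask - 1                     # matched +/-1 decoding array
scene = np.random.rand(mask.size)         # unknown intensity distribution

# Each lateral scan position measures total flux through the shifted mask.
measured = np.array([np.roll(mask, -k) @ scene for k in range(mask.size)])

# Reconstruction is a single cyclic correlation with the decoding array;
# the mask/decode cross-correlation is ((N+1)/2) * delta, so rescale.
recon = np.array([np.roll(decode, -j) @ measured for j in range(mask.size)])
recon /= (mask.size + 1) / 2
assert np.allclose(recon, scene)          # exact up to numerical precision
```

The 2-D URA case works the same way, with the lateral scan and decoding correlation carried out over both axes.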
Protection performance evaluation regarding imaging sensors hardened against laser dazzling
NASA Astrophysics Data System (ADS)
Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd
2015-05-01
Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist, but none of them satisfies the operational requirements without constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches, based on triangle orientation discrimination on the one hand and structural similarity on the other. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns, superimposed with dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
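A minimal sketch of the structural-similarity approach, assuming a reference image of the test scene and a dazzled image at one irradiance level; scikit-image provides the SSIM computation.

```python
from skimage.metrics import structural_similarity as ssim

def dazzle_score(reference, dazzled):
    """SSIM in [0, 1]: lower values mean the laser dazzle destroyed more of
    the scene structure, i.e. weaker protection performance."""
    score, ssim_map = ssim(reference, dazzled, full=True,
                           data_range=float(reference.max() - reference.min()))
    return score, ssim_map   # the map localizes where structure was lost
```

Sweeping the laser irradiance and plotting the SSIM score against it yields a protection-performance curve for each sensor.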
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of radiometric power limitations, there will always be some trade-off between spatial and spectral resolution in the images captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
CMOS image sensor for detection of interferon gamma protein interaction as a point-of-care approach.
Marimuthu, Mohana; Kandasamy, Karthikeyan; Ahn, Chang Geun; Sung, Gun Yong; Kim, Min-Gon; Kim, Sanghyo
2011-09-01
Complementary metal oxide semiconductor (CMOS)-based image sensors have received increased attention owing to the possibility of incorporating them into portable diagnostic devices. The present research examined the efficiency and sensitivity of a CMOS image sensor for the detection of antigen-antibody interactions involving interferon gamma protein without the aid of expensive instruments. The highest detection sensitivity, of about 1 fg/ml of primary antibody, was achieved simply by a transmission mechanism. When photons are prevented from hitting the sensor surface, the digital output is reduced; the number of photons hitting the sensor surface is approximately proportional to the digital number. Nanoscale variation in substrate thickness after protein binding can be detected with high sensitivity by the CMOS image sensor. Therefore, this technique can be easily applied to smartphones or any clinical diagnostic device for the detection of several biological entities, with high impact on the development of point-of-care applications.
A Sensor System for Detection of Hull Surface Defects
Navarro, Pedro; Iborra, Andrés; Fernández, Carlos; Sánchez, Pedro; Suardíaz, Juan
2010-01-01
This paper presents a sensor system for detecting defects in ship hull surfaces. The sensor was developed to enable a robotic system to perform grit blasting operations on ship hulls. To achieve this, the proposed sensor system captures images with the help of a camera and processes them in real time using a new defect detection method based on thresholding techniques. What makes this method different is its efficiency in the automatic detection of defects from images recorded in variable lighting conditions. The sensor system was tested under real conditions at a Spanish shipyard, with excellent results. PMID:22163590
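A minimal sketch of thresholding-based defect detection that tolerates variable lighting, assuming darker-than-background defects on a grayscale hull image; the block size, offset, and minimum blob area are illustrative, not the paper's values.

```python
import cv2

def detect_defects(gray, min_area=50):
    # Local (adaptive) threshold: each pixel is compared against the mean of
    # its neighborhood, which compensates for uneven illumination.
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 51, 10)
    # Morphological opening removes speckle so only defect-sized blobs remain.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Return bounding-box stats of components large enough to be defects.
    return [stats[i] for i in range(1, n)
            if stats[i][cv2.CC_STAT_AREA] > min_area]
```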
CMOS image sensors as an efficient platform for glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo
2013-10-07
Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg/dL) based on photon count variation. Measurement of the concentration of oxidized glucose was dependent on changes in color intensity; color intensity increased with increasing glucose concentration. The high color density of glucose strongly prevented photons from passing through the polydimethylsiloxane (PDMS) chip, which suggests that the photon count was altered by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis proved the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.
Full-field acoustomammography using an acousto-optic sensor.
Sandhu, J S; Schmidt, R A; La Rivière, P J
2009-06-01
In this Letter, the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the AO sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable with higher-frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large area AO sensor. Of course, variations in attenuation of connective, glandular, and fatty tissues will lead to images with a more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed.
Full-field acoustomammography using an acousto-optic sensor
Sandhu, J. S.; Schmidt, R. A.; La Rivière, P. J.
2009-01-01
In this Letter, the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the AO sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable with higher-frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large area AO sensor. Of course, variations in attenuation of connective, glandular, and fatty tissues will lead to images with a more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed. PMID:19610321
Electron-bombarded CCD detectors for ultraviolet atmospheric remote sensing
NASA Technical Reports Server (NTRS)
Carruthers, G. R.; Opal, C. B.
1983-01-01
Electronic image sensors based on charge-coupled devices operated in electron-bombarded mode are being developed, yielding real-time, remote-readout, photon-limited UV imaging capability. The sensors also incorporate fast-focal-ratio Schmidt optics and opaque photocathodes, giving nearly the ultimate possible diffuse-source sensitivity. They can be used for direct imagery of atmospheric emission phenomena, and for imaging spectrography with moderate spatial and spectral resolution. The current state of instrument development, laboratory results, planned future developments and proposed applications of the sensors in space flight instrumentation are described.
Using the Optical Mouse Sensor as a Two-Euro Counterfeit Coin Detector
Tresanchez, Marcel; Pallejà, Tomàs; Teixidó, Mercè; Palacín, Jordi
2009-01-01
In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short distance image acquisition capabilities of the optical mouse sensor where partial images of the coin under analysis are compared with some partial reference coin images for matching. Results show that, using only the vision sense, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user. PMID:22399987
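A minimal sketch of the partial-image matching step, assuming small grayscale patches from the optical mouse sensor and a set of larger reference images from a genuine two-Euro coin; the normalized-cross-correlation method and acceptance threshold are illustrative, not the paper's exact procedure.

```python
import cv2

def is_genuine(patch, reference_images, threshold=0.7):
    """Accept the coin if the captured patch correlates strongly with any
    reference image (each reference must be at least as large as the patch)."""
    scores = [cv2.matchTemplate(ref, patch, cv2.TM_CCOEFF_NORMED).max()
              for ref in reference_images]
    return max(scores) >= threshold
```

In practice several patches from different positions on the coin would be tested and the decisions combined, mirroring the paper's use of multiple partial images.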
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Multi-Sensor Registration of Earth Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)
2001-01-01
Assuming that approximate registration is given to within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, or mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
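A minimal sketch of the mutual-information matching criterion, assuming two co-sized grayscale images (or wavelet-feature images) from different sensors; a registration search would maximize this score over candidate shifts or transform parameters.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two images, in nats."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```

Mutual information is attractive for multi-sensor pairs because it rewards statistical dependence between intensities rather than requiring them to be linearly correlated.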
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. The performed experiments on simulated images show that on the glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
NASA Astrophysics Data System (ADS)
Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun
2014-09-01
Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images acquired at enrollment and at test. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms of iris textures across different image sensors and build connections between them. Once such connections are built, at the test stage it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the proposed method reduces the EER by 39.4% in relative terms.
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure the NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n used for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for a practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform-gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
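A minimal sketch of the image-difference idea, assuming two uniformly illuminated flat-field images: differencing cancels the fixed pattern from nonuniform gain, and the factor 1/2 compensates for the doubled noise variance of the difference. The normalization convention is one common choice, not necessarily the paper's.

```python
import numpy as np

def nps_from_pair(img1, img2, pixel_pitch=0.1):
    """2-D NPS estimate (units DN^2 * mm^2 for pixel_pitch in mm)."""
    diff = img1.astype(float) - img2.astype(float)   # fixed pattern cancels
    diff -= diff.mean()                              # remove residual DC
    ny, nx = diff.shape
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(diff))) ** 2
    return spectrum * pixel_pitch**2 / (nx * ny * 2.0)
```

In practice the estimate is averaged over many small sub-regions (and image pairs) to reduce its variance before radial averaging.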
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic panchromatic (PAN) sensor and four multispectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
Seamless positioning and navigation by using geo-referenced images and multi-sensor data.
Li, Xun; Wang, Jinling; Li, Tao
2013-07-12
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments.
Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data
Li, Xun; Wang, Jinling; Li, Tao
2013-01-01
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, among which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as in urban canyons), and it also provides excellent position accuracy for indoor environments. PMID:23857267
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao
2016-11-25
For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we want to examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes long DoF and image quality invariance toward various object distance, which is strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.
Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.
Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun
2016-06-01
Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can convert efficiently the detailed spatial information of the probed object into one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the desired imaging resolution is related to the distance between the object and metasurface. When the object is placed in the vicinity of the metasurface, the super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition, high-/super-resolution images without employing expensive hardware (e.g. mechanical scanner, antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging.
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
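A minimal sketch of the in-pixel detection principle, emulated here over full frames in software; the threshold value is illustrative, and the real sensor performs this comparison inside each pixel with on-chip localization logic.

```python
import numpy as np

def lightning_events(prev_frame, curr_frame, threshold=25):
    """Return (row, col) coordinates of pixels whose brightness jumped by
    more than `threshold` between consecutive 1 kHz frames."""
    diff = curr_frame.astype(np.int32) - prev_frame.astype(np.int32)
    return np.argwhere(diff > threshold)
```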
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations using a panchromatic (Pan) camera along with either a colour camera or a four-band multispectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high-resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs show that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, higher-accuracy classification results.
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Howard, Richard T.; Hallmark, Dean S.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Hallmark, Dean S.; Howard, Richard T.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of such imagery data is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were simulated from the RGB data as linear combinations of the spectral channels. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
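A minimal sketch of the panchromatic-band simulation step, assuming illustrative channel weights (the paper derives its own linear combination); a first-and-second-moment match to the band being sharpened is added, as component-substitution methods such as Gram-Schmidt expect a statistically comparable pan band.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.3, 0.4, 0.3)):
    """Simulate a panchromatic band as a weighted sum of the RGB channels."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return w[0] * rgb[..., 0] + w[1] * rgb[..., 1] + w[2] * rgb[..., 2]

def match_stats(pan_sim, ms_band):
    """Match the simulated pan band's mean and variance to the multispectral
    band it will sharpen."""
    z = (pan_sim - pan_sim.mean()) / (pan_sim.std() + 1e-12)
    return z * ms_band.std() + ms_band.mean()
```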
A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors
Nefti-Meziani, Samia; Carbonaro, Nicola
2017-01-01
Electrical Impedance Tomography (EIT) is a medical imaging technique that has recently been used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT-based sensors, the electrode selection strategies should change dynamically according to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy by up to 4.7% and 18%, respectively. PMID:28858252
A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors.
Russo, Stefania; Nefti-Meziani, Samia; Carbonaro, Nicola; Tognetti, Alessandro
2017-08-31
Electrical Impedance Tomography (EIT) is a medical imaging technique that has recently been used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT-based sensors, the electrode selection strategies should change dynamically according to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy by up to 4.7% and 18%, respectively.
Detection of person borne IEDs using multiple cooperative sensors
NASA Astrophysics Data System (ADS)
MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen
2011-06-01
The use of multiple cooperative sensors for the detection of person-borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams to an aided threat detection algorithm, together with a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes and metal content. Aided threat recognition algorithms are being developed to test the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".
NASA Astrophysics Data System (ADS)
Ye, Jiamin; Wang, Haigang; Yang, Wuqiang
2016-07-01
Electrical capacitance tomography (ECT) is based on capacitance measurements from electrode pairs mounted outside a pipe or vessel. The structure of ECT sensors is vital to image quality. In this paper, issues with the number of electrodes and the electrode covering ratio for complex liquid-solids flows in a rotating device are investigated based on a new coupling simulation model. The number of electrodes is increased from 4 to 32 while the electrode covering ratio is changed from 0.1 to 0.9. Using the coupling simulation method, real permittivity distributions and the corresponding capacitance data at 0, 0.5, 1, 2, 3, 5, and 8 s with a rotation speed of 96 rotations per minute (rpm) are collected. Linear back projection (LBP) and Landweber iteration algorithms are used for image reconstruction. The quality of the reconstructed images is evaluated by the correlation coefficient with respect to the real permittivity distributions obtained from the coupling simulation. The sensitivity for each sensor is analyzed and compared with the correlation coefficient. Capacitance data with a range of signal-to-noise ratios (SNRs) of 45, 50, 55 and 60 dB are generated to evaluate the effect of data noise on the performance of ECT sensors. Furthermore, the SNRs of experimental data are analyzed for a stationary pipe with a given permittivity distribution. Based on the coupling simulation, 16-electrode ECT sensors are recommended to achieve good image quality.
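The two reconstruction algorithms named above are standard enough to sketch. Below is a minimal NumPy illustration (assumptions: a precomputed sensitivity matrix S with one row per electrode-pair capacitance measurement and one column per image pixel, and a normalized capacitance vector g; a 16-electrode sensor yields 16·15/2 = 120 independent measurements):

```python
import numpy as np

def lbp(S, g):
    """Linear back projection: one-shot image estimate from capacitances g."""
    return (S.T @ g) / (S.T @ np.ones_like(g))   # normalise by sensitivity sums

def landweber(S, g, alpha=1.0, n_iter=200):
    """Iterative Landweber reconstruction with a non-negativity clamp."""
    x = lbp(S, g)                                # LBP result as initial guess
    for _ in range(n_iter):
        x = x + alpha * (S.T @ (g - S @ x))      # gradient step on ||Sx - g||^2
        x = np.clip(x, 0.0, 1.0)                 # normalised permittivity range
    return x
```

Landweber is essentially gradient descent on ||Sx − g||², which is why it tolerates the ill-conditioning of S far better than direct inversion and typically beats LBP in correlation-coefficient comparisons of the kind used here.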
Autonomous Sensors for Large Scale Data Collection
NASA Astrophysics Data System (ADS)
Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.
2017-12-01
Presented here is a novel implementation of a "Doppler imager" which remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87–300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and machine learning techniques for intelligent self-monitoring and data classification. This system achieves cost savings in manufacturing, deployment and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture easily, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments; the tight SWaP budget and the challenging thermal environment demand the development of a new generation of instruments, and the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the ground to complement the CubeSat data.
NASA Astrophysics Data System (ADS)
Xu, Yuanhong; Liu, Jingquan; Zhang, Jizhen; Zong, Xidan; Jia, Xiaofang; Li, Dan; Wang, Erkang
2015-05-01
A portable lab-on-a-chip methodology to generate ionic liquid-functionalized carbon nanodots (CNDs) was developed via electrochemical oxidation of screen printed carbon electrodes. The CNDs can be successfully applied for efficient cell imaging and solid-state electrochemiluminescence sensor fabrication on the paper-based chips.
Force/torque and tactile sensors for sensor-based manipulator control
NASA Technical Reports Server (NTRS)
Vanbrussel, H.; Belieen, H.; Bao, Chao-Ying
1989-01-01
The autonomy of manipulators, in space and in industrial environments, can be dramatically enhanced by the use of force/torque and tactile sensors. The development and future use of a six-component force/torque sensor for the Hermes Robot Arm (HERA) Basic End-Effector (BEE) is discussed. Then a multifunctional gripper system based on tactile sensors is described. The basic transducing element of the sensor is a sheet of pressure-sensitive polymer. Tactile image processing algorithms for slip detection, object position estimation, and object recognition are described.
Kawahito, Shoji; Seo, Min-Woong
2016-11-06
This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms).
Kawahito, Shoji; Seo, Min-Woong
2016-01-01
This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels based on the analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the dependence of the noise reduction on the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms). PMID:27827972
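For white read noise, averaging M signal samples against M reset samples before differencing lowers the rms read noise roughly as 1/√M, which is the trend the reported 2.4 → 1.1 → 0.29 e−rms figures follow. A minimal NumPy sketch of this CMS averaging effect (idealized: white noise only; real devices flatten out at high M because of 1/f noise and the finite conversion gain):

```python
import numpy as np

rng = np.random.default_rng(0)

def cms_read(signal_e, read_noise_e, m):
    """One CMS read of order m: average m reset and m signal samples, then difference."""
    reset = rng.normal(0.0, read_noise_e, m)        # noise on reset-level samples
    signal = rng.normal(signal_e, read_noise_e, m)  # noise on signal-level samples
    return signal.mean() - reset.mean()

for m in (2, 16, 128):
    reads = [cms_read(0.0, 2.0, m) for _ in range(20000)]
    print(m, np.std(reads))   # falls roughly as 1/sqrt(m) for white noise
```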
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially-adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
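The central PCA step is compact enough to sketch. Below is a hedged NumPy illustration for one supporting window of vectorised CFA patches — project onto the locally learned principal components, shrink each component with a Wiener-style factor, and transform back; window gathering and the paper's noise-variance estimation are omitted:

```python
import numpy as np

def pca_denoise_window(patches, noise_var):
    """patches: (n, d) matrix of vectorised CFA patches from one window."""
    mean = patches.mean(axis=0)
    X = patches - mean
    w, V = np.linalg.eigh(X.T @ X / len(X))         # local principal components
    Y = X @ V                                       # PCA-domain coefficients
    signal_var = np.maximum(w - noise_var, 0.0)     # estimated clean variance
    shrink = signal_var / (signal_var + noise_var)  # Wiener factor per axis
    return (Y * shrink) @ V.T + mean                # back to the patch domain
```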
Transmission-grating-based wavefront tilt sensor.
Iwata, Koichi; Fukuda, Hiroki; Moriwaki, Kousuke
2009-07-10
We propose a new type of tilt sensor. It consists of a grating and an image sensor. It detects the tilt of the collimated wavefront reflected from a plane mirror. Its principle is described and analyzed based on wave optics. Experimental results show its validity. Simulations of the ordinary autocollimator and the proposed tilt sensor show that the effect of noise on the measured angle is smaller for the latter. These results show a possibility of making a smaller and simpler tilt sensor.
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from the image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and uses such models for its implementation. The main thrust of this research is in replacement sensor models, which have three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. Several works have been written about replacement sensor models, and except for the so-called RSM (Replacement Sensor Model, a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is, it is suspected, because the few physical sensor parameters are usually replaced by many more parameters, presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides a flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustment can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
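Whatever parameterisation a replacement model uses, the rigorous error propagation of characteristic (2) reduces to first-order covariance propagation through the ground-to-image function. A generic sketch (this is the textbook relation, not the thesis's eigen-approach, which concerns how the original parameter covariance is compressed into the TRM):

```python
import numpy as np

def propagate_covariance(J, cov_params):
    """First-order propagation: cov_image = J @ cov_params @ J.T,
    with J the Jacobian of image coordinates w.r.t. the sensor parameters."""
    return J @ cov_params @ J.T
```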
Zeng, Youjun; Wang, Lei; Wu, Shu-Yuen; He, Jianan; Qu, Junle; Li, Xuejin; Ho, Ho-Pui; Gu, Dayong; Gao, Bruce Zhi; Shao, Yonghong
2017-01-01
A fast surface plasmon resonance (SPR) imaging biosensor system based on wavelength interrogation using an acousto-optic tunable filter (AOTF) and a white light laser is presented. The system combines the merits of a wide-dynamic detection range and high sensitivity offered by the spectral approach with multiplexed high-throughput data collection and a two-dimensional (2D) biosensor array. The key feature is the use of AOTF to realize wavelength scan from a white laser source and thus to achieve fast tracking of the SPR dip movement caused by target molecules binding to the sensor surface. Experimental results show that the system is capable of completing a SPR dip measurement within 0.35 s. To the best of our knowledge, this is the fastest time ever reported in the literature for imaging spectral interrogation. Based on a spectral window with a width of approximately 100 nm, a dynamic detection range and resolution of 4.63 × 10−2 refractive index unit (RIU) and 1.27 × 10−6 RIU achieved in a 2D-array sensor is reported here. The spectral SPR imaging sensor scheme has the capability of performing fast high-throughput detection of biomolecular interactions from 2D sensor arrays. The design has no mechanical moving parts, thus making the scheme completely solid-state. PMID:28067766
Automatic parquet block sorting using real-time spectral classification
NASA Astrophysics Data System (ADS)
Astrom, Anders; Astrand, Erik; Johansson, Magnus
1999-03-01
This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This makes it possible to maintain high production speed and still measure with good resolution.
Aquatic Debris Detection Using Embedded Camera Sensors
Wang, Yong; Wang, Dianhong; Lu, Qian; Luo, Dapeng; Fang, Wu
2015-01-01
Aquatic debris monitoring is of great importance to human health, aquatic habitats and water transport. In this paper, we first introduce the prototype of an aquatic sensor node equipped with an embedded camera sensor. Based on this sensing platform, we propose a fast and accurate debris detection algorithm. Our method is specifically designed based on compressive sensing theory to give full consideration to the unique challenges in aquatic environments, such as waves, swaying reflections, and tight energy budget. To upload debris images, we use an efficient sparse recovery algorithm in which only a few linear measurements need to be transmitted for image reconstruction. Besides, we implement the host software and test the debris detection algorithm on realistically deployed aquatic sensor nodes. The experimental results demonstrate that our approach is reliable and feasible for debris detection using camera sensors in aquatic environments. PMID:25647741
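The sparse-recovery step is the computational core: only a few linear measurements y = Ax of a sparse image representation x are transmitted, and x is reconstructed afterwards. The abstract does not name the specific solver, so the sketch below uses ISTA, a standard iterative shrinkage-thresholding algorithm for this problem; the measurement matrix A and the regularisation weight are assumptions:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Recover sparse x from y = A @ x by iterative shrinkage-thresholding."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz bound for the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L              # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x
```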
pyBSM: A Python package for modeling imaging systems
NASA Astrophysics Data System (ADS)
LeMaster, Daniel A.; Eismann, Michael T.
2017-05-01
There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with openCV to evaluate performance of an algorithm used to detect objects in an image.
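As a worked example of the first use case, the GIQE maps measurable image-chain quantities to a NIIRS rating. A hedged sketch of the published GIQE 4 relation, with the commonly cited visible-band coefficients (this illustrates the equation itself, not pyBSM's actual API):

```python
import math

def giqe4_niirs(gsd_inches, rer, overshoot_h, noise_gain, snr):
    """GIQE 4: NIIRS from ground sample distance (inches), relative edge
    response (RER), edge overshoot H, noise gain G and signal-to-noise ratio."""
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_inches) + b * math.log10(rer)
            - 0.656 * overshoot_h - 0.344 * noise_gain / snr)
```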
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm yields sharp images while reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
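The restoration step can be illustrated with a generic frequency-domain Wiener-type inverse filter driven by the estimated circle-of-confusion radius. This is a hedged stand-in, not the authors' specific inverse filter; the radius would come from the block-wise edge analysis on the Bayer green channel:

```python
import numpy as np

def defocus_psf(radius):
    """Uniform disc ('circle of confusion') PSF with the given pixel radius."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    psf = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
    return psf / psf.sum()

def wiener_restore(img, psf, k=0.01):
    """Regularised inverse filter; k trades sharpness against ringing."""
    pad = np.zeros_like(img, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)                           # PSF centred at the origin
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * W))
```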
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three dimensional model. These algorithms are provided in the form of commercial software, open source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimized method to generate three-dimensional models. Much research has been conducted to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to evaluate and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and the most popular ones in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Also, four small objects with distinct geometric properties and especial complexities were chosen, and accurate models of them, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken using a Fujifilm Real 3D stereo camera, an Apple iPhone 5 and a Nikon D3200 professional camera, and three dimensional models of the objects were obtained using each of the software packages. Finally, a comprehensive comparison based on a detailed review of the results on the data set showed that the best combination of software and sensor for generating three-dimensional models is directly related to the object shape as well as the expected accuracy of the final model. Generally, better quantitative and qualitative results were obtained by using the Nikon D3200 professional camera, while the Fujifilm Real 3D stereo camera and the Apple iPhone 5 were second and third, respectively, in this comparison. On the other hand, the three software packages Visual SfM, Sure and Agisoft competed closely to achieve the most accurate and complete model of the objects, and the best software differed according to the geometric properties of the object.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
Planar and finger-shaped optical tactile sensors for robotic applications
NASA Technical Reports Server (NTRS)
Begej, Stefan
1988-01-01
Progress is described regarding the development of optical tactile sensors specifically designed for application to dexterous robotics. These sensors operate on optical principles involving the frustration of total internal reflection at a waveguide/elastomer interface and produce a grey-scale tactile image that represents the normal (vertical) forces of contact. The first tactile sensor discussed is a compact, 32 x 32 planar sensor array intended for mounting on a parallel-jaw gripper. Optical fibers were employed to convey the tactile image to a CCD camera and microprocessor-based image analysis system. The second sensor had the shape and size of a human fingertip and was designed for a dexterous robotic hand. It contained 256 sensing sites (taxels) distributed in a dual-density pattern that included a tactile fovea near the tip measuring 13 x 13 mm and containing 169 taxels. The design and construction details of these tactile sensors are presented, in addition to photographs of tactile imprints.
Active imaging system performance model for target acquisition
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.
2007-04-01
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
Methods for gas detection using stationary hyperspectral imaging sensors
Conger, James L. [San Ramon, CA]; Henderson, John R. [Castro Valley, CA]
2012-04-24
According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
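The claimed pipeline is simple enough to sketch end to end. A hedged NumPy illustration of the difference-cube steps (the calibration is left as a stub, and the band indices and threshold are choices the claims leave to the practitioner):

```python
import numpy as np

def gas_detection_image(cube_t1, cube_t2, band_indices, threshold):
    """cube_t1, cube_t2: (rows, cols, bands) HSI cubes of the same location."""
    raw_diff = cube_t1.astype(float) - cube_t2.astype(float)  # pixel-by-pixel
    cal_diff = raw_diff                        # calibration stub (gain/offset)
    detection = np.abs(cal_diff[:, :, band_indices]).sum(axis=2)
    return detection, detection > threshold    # detection image and gas mask
```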
Blurred Star Image Processing for Star Sensors under Dynamic Conditions
Zhang, Weina; Quan, Wei; Guo, Lei
2012-01-01
The precision of star point location is significant to identify the star map and to acquire the aircraft attitude for star sensors. Under dynamic conditions, star images are not only corrupted by various noises, but also blurred due to the angular rate of the star sensor. According to different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on adaptive wavelet threshold and a restoration method based on the large angular rate. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the blurred star map due to large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
Optical flows method for lightweight agile remote sensor design and instrumentation
NASA Astrophysics Data System (ADS)
Wang, Chong; Xing, Fei; Wang, Hongjian; You, Zheng
2013-08-01
Lightweight agile remote sensors have become one of the most important types of payload and are widely utilized in space reconnaissance and resource survey. These imaging sensors are designed to obtain imagery of high spatial, temporal and spectral resolution. Key techniques in instrumentation include flexible maneuvering, advanced imaging control algorithms and integrative measuring techniques, which are closely correlated or even act as bottlenecks for each other. Therefore, mutually restrictive problems must be solved and optimized. Optical flow is the critical model for fully representing information transfer as well as radiation energy flow in dynamic imaging. For agile sensors, especially those with a wide field of view, the imaging optical flow may be seriously distorted during large-angle attitude maneuvers. This phenomenon is mainly attributed to the geometrical characteristics of the three-dimensional Earth surface as well as coupling effects due to the complicated relative motion between the sensor and the scene. Under these circumstances, the velocity field is distributed nonlinearly, and the imagery may be badly smeared or its geometrical structure changed if image velocity matching errors are not eliminated. In this paper, a precise imaging optical flow model is established for agile remote sensors, in which the optical flow evolution is factorized into two components, due respectively to translational movement and image shape change. Based on this model, agile remote sensor instrumentation was investigated. The main techniques that concern optical flow modeling include integrative design with lightweight star sensors along with micro inertial measurement units and corresponding data fusion, focal plane layout and control, imagery post-processing for agile remote sensors, etc. Experiments show that the optical flow analysis method effectively removes the limitations on the performance indexes and was successfully applied to integrative system design. Finally, a principle prototype of an agile remote sensor designed by the method is discussed.
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-10-16
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors comprises those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
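The degree-1 case of the proposed calibration can be sketched directly: for each pixel, fit a linear map from its monotonic response onto a common reference response, after which correction is pure arithmetic. This is a minimal illustration of the idea, not the authors' implementation (which also covers the fixed-point design and cubic-spline photometric calibration):

```python
import numpy as np

def calibrate_fpn(responses, reference):
    """responses: (n_scenes, n_pixels) raw outputs over a stimulus sweep;
    reference: (n_scenes,) target response (e.g. the frame mean).
    Fits reference ≈ a + b * response for every pixel."""
    a = np.empty(responses.shape[1])
    b = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        b[j], a[j] = np.polyfit(responses[:, j], reference, 1)
    return a, b

def correct_fpn(raw_frame, a, b):
    """Per-pixel correction needs only arithmetic, as the paper emphasises."""
    return a + b * raw_frame
```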
Contact CMOS imaging of gaseous oxygen sensor array
Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.
2014-01-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909
Contact CMOS imaging of gaseous oxygen sensor array.
Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V
2011-10-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost and portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from the "blue shift" effect and introduce spatial-spectral correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. Pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is further divided into four components: an FPN row component, an FPN column component, a defects component and the effective photo response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component, so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated from the inverse of the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to 7 times that of the raw image, and the larger the image DC value within its dynamic range, the better the enhancement.
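The pixel-wise linear radiance-to-DC model, DC_ij = G_ij · L + Z_ij, admits the standard two-point estimate of the gain and DSNU maps from dark and flat-field frames. A hedged sketch (the paper goes further and decomposes G_ij into row, column and defect components before flattening it):

```python
import numpy as np

def two_point_calibration(dark_frames, flat_frames, radiance_L):
    """dark_frames/flat_frames: (n, rows, cols) stacks at zero and known L."""
    Z = dark_frames.mean(axis=0)                      # DSNU map Z_ij
    G = (flat_frames.mean(axis=0) - Z) / radiance_L   # conversion gain G_ij
    return G, Z

def estimate_radiance(frame, G, Z):
    """Invert the linear model to recover incident radiance per pixel."""
    return (frame - Z) / G
```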
Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI
NASA Astrophysics Data System (ADS)
Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen
2018-01-01
This paper focuses on cross calibrating the GaoFen (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF). One is based on the HJ-1A HSI image and the other is based on ground-measured reflectance. However, the HSI image and the ground-measured reflectance were acquired on dates different from the WFV and OLI overpasses. Three groups of regions of interest (ROIs) were chosen for cross calibration, based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent over the different groups of ROIs and SBAF calculation methods. The uncertainty of this cross calibration was analyzed, and the influence of the SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of the SBAF was <3% for bands 1 to 3. Two other large uncertainties in this cross calibration were atmospheric variation and low ground reflectance.
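The SBAF itself is a ratio of band-averaged reflectances simulated over each sensor's relative spectral response (RSR). A minimal sketch, assuming the convention that the factor adjusts the reference (OLI) band to the target (WFV) band; the wavelength grid, the two RSRs and the reflectance spectrum come from the HSI pixel or the ground measurement:

```python
import numpy as np

def sbaf(wavelengths, target_rsr, ref_rsr, reflectance):
    """Spectral band adjustment factor between two sensors' bands."""
    rho_target = (np.trapz(reflectance * target_rsr, wavelengths)
                  / np.trapz(target_rsr, wavelengths))
    rho_ref = (np.trapz(reflectance * ref_rsr, wavelengths)
               / np.trapz(ref_rsr, wavelengths))
    return rho_target / rho_ref
```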
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.
NASA Astrophysics Data System (ADS)
Sankey, T.; Donald, J.; McVay, J.
2015-12-01
High resolution remote sensing images and datasets are typically acquired at a large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth's surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in 3 dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.
High-NA metrology and sensing on Berkeley MET5
NASA Astrophysics Data System (ADS)
Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick
2017-03-01
In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative simulated-annealing algorithm to reconstruct a wavefront by matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.
Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms
NASA Astrophysics Data System (ADS)
Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.
1990-10-01
A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem consists of either lateral accelerometers functioning in differential pairs or gas-bearing gyros as high-frequency sensors, with CCD-based star trackers as low-frequency sensors. To minimize error, the sensor signals are combined through a mixing filter that introduces no phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian jitter levels. Currently available technology may be employed to implement IMC systems.
Photon counting phosphorescence lifetime imaging with TimepixCam
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...
2017-01-12
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
Photon counting phosphorescence lifetime imaging with TimepixCam.
Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei
2017-01-01
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
Radiometric Characterization Results for the IKONOS, Quickbird, and OrbView-3 Sensor
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Aaron, David; Thome, Kurtis
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities better understand commercial imaging satellite properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, the NASA Applied Sciences Directorate (ASD) at Stennis Space Center established a commercial satellite imaging radiometric calibration team consisting of three independent groups: NASA ASD, the University of Arizona Remote Sensing Group, and South Dakota State University. Each group independently determined the absolute radiometric calibration coefficients of available high-spatial-resolution commercial 4-band multispectral products, in the visible through near-infrared spectrum, from GeoEye (formerly Space Imaging) IKONOS, DigitalGlobe QuickBird, and GeoEye (formerly ORBIMAGE) OrbView-3. Each team member employed some variant of the reflectance-based vicarious calibration approach, requiring ground-based measurements coincident with image acquisitions and radiative transfer calculations. Several study sites throughout the United States that covered a significant portion of each sensor's dynamic range were employed. Satellite at-sensor radiance values were compared to those estimated by each independent team member to evaluate each sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these sensors' absolute calibration values.
NASA Astrophysics Data System (ADS)
Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir
2014-10-01
In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation offers high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low, and the scattering is also low compared to NIR and VIS. The lack of inexpensive room temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based sensor Focal Plane Arrays (FPAs). The three cameras differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to static imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is a frequency modulated continuous wave (FMCW) scheme in which each of the 16 GDD pixel lines is sampled simultaneously. Direct detection is also possible and can be done with a friendly user interface. This FPA sensor is built from 256 commercial GDD lamps, with 3 mm diameter International Light, Inc., Peabody, MA model 527 Ne indicator lamps as pixel detectors. All three sensors are fully supported by a software Graphical User Interface (GUI). They were tested and characterized through different kinds of optical systems for imaging applications, super resolution, and calibration methods. The 16 × 16 sensor is capable of employing a chirp-radar-like method to produce depth and reflectance information in the image. This enables 3-D MMW imaging in real time at video frame rate. In this work we demonstrate different kinds of optical imaging systems, capable of 3-D imaging at short range and at longer distances of at least 10-20 meters.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
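The control loop can be sketched generically: an entropy measure of the latest frame feeds a PID controller that drives the exposure time toward a set-point. This is a minimal illustration with assumed gains, not the paper's adaptive PID tuning or its LinLog parameter schedule:

```python
import numpy as np

def entropy_bits(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Per frame: exposure_us += pid.step(target_entropy - entropy_bits(frame), dt=1/30)
```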
Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface
Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun
2016-01-01
Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can convert efficiently the detailed spatial information of the probed object into one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the desired imaging resolution is related to the distance between the object and metasurface. When the object is placed in the vicinity of the metasurface, the super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition, high-/super-resolution images without employing expensive hardware (e.g. mechanical scanner, antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and test of a multi-spot digital sun sensor for precise sun-line determination is described. The image forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measures. Nevertheless, sensor operation over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, as well as a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide field of view operation of the sensor by using a sensor prototype and a laboratory test facility. Results demonstrate that it is possible to maintain high measurement precision even at large off-boresight angles.
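The averaging principle is easy to sketch: threshold the frame, centroid each sun spot, and average the centroids, so that the random centroiding error drops roughly as 1/√N for N simultaneous spots. A minimal illustration using SciPy labelling (the prototype's actual spot detection, variable shutter logic and calibration function are more elaborate):

```python
import numpy as np
from scipy import ndimage

def sun_line_estimate(img, thresh):
    """Average the intensity-weighted centroids of all detected sun spots."""
    mask = img > thresh
    labels, n_spots = ndimage.label(mask)
    centroids = ndimage.center_of_mass(img * mask, labels, range(1, n_spots + 1))
    return np.mean(np.asarray(centroids), axis=0), n_spots
```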
MOSES: a modular sensor electronics system for space science and commercial applications
NASA Astrophysics Data System (ADS)
Michaelis, Harald; Behnke, Thomas; Tschentscher, Matthias; Mottola, Stefano; Neukum, Gerhard
1999-10-01
The camera group of the DLR Institute of Space Sensor Technology and Planetary Exploration develops imaging instruments for scientific and space applications. One example is the ROLIS imaging system of the ESA scientific space mission 'Rosetta', which consists of a descent/down-looking imager and a close-up imager. Both are parts of the Rosetta Lander payload and will operate in the extreme environment of a cometary nucleus. The Rosetta Lander Imaging System (ROLIS) introduces a new concept for the sensor electronics, referred to as MOSES (Modular Sensor Electronics System). MOSES is a 3D-miniaturized CCD sensor electronics system built from single modules. Each of the modules has some flexibility and enables simple adaptation to specific application requirements. MOSES is mainly designed for space applications where high performance and high reliability are required. This concept, however, can also be used in other scientific or commercial applications. This paper describes the concept of MOSES, its characteristics, performance and applications.
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating a microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique for miniaturizing the conventional optical-lens-based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution at the system level using super-resolution (SR) processing. Yet, how to improve resolution towards better cell detection and recognition with a low cost of processing resources and without degrading system throughput is still a challenge. In this article, two machine learning based single-frame SR processing types are proposed and compared for lensless blood cell counting, namely the Extreme Learning Machine based SR (ELMSR) and the Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom-designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. Given one captured low-resolution lensless cell image as input, an improved high-resolution cell image is output. The experimental results show that the cell resolution is improved by 4×, and that CNNSR achieves a 9.5% improvement over ELMSR in resolution-enhancing performance. The cell counting results also match well with a commercial flow cytometer. ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems towards POCT applications. PMID:27827837
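The closed-form training that makes ELMSR attractive for low-cost hardware can be illustrated compactly: a hidden layer with fixed random weights maps flattened low-resolution patches to features, and only the output weights onto high-resolution patches are solved, by ridge-regularized least squares. The following Python sketch assumes patch-pair training data; the layer size, patch shapes and regularization constant are illustrative choices, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(lr_patches, hr_patches, n_hidden=256, reg=1e-3):
    """lr_patches: (N, d_in) flattened low-res patches;
    hr_patches: (N, d_out) flattened high-res patches."""
    d_in = lr_patches.shape[1]
    W = rng.standard_normal((d_in, n_hidden))   # random input weights (fixed)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(lr_patches @ W + b)             # hidden-layer activations
    # Output weights by ridge-regularized least squares (the only trained part)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ hr_patches)
    return W, b, beta

def apply_elm(lr_patches, W, b, beta):
    return np.tanh(lr_patches @ W + b) @ beta

# Toy usage: learn 2x upscaling of 4x4 patches to 8x8 patches
N = 500
lr = rng.random((N, 16))
hr = rng.random((N, 64))
W, b, beta = train_elm(lr, hr)
sr = apply_elm(lr[:5], W, b, beta)   # (5, 64) reconstructed high-res patches
```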
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision-based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow the INS/GNSS-based trajectory to be validated and updated with independently estimated positions during prolonged GNSS signal outages, increasing the georeferencing accuracy to the project requirements.
Multisource image fusion method using support value transform.
Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen
2007-07-01
With the development of numerous imaging sensors, many images of a scene can be captured simultaneously by different sensors. However, in many scenarios no single sensor gives the complete picture. Image fusion is an important approach to this problem: it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relative importance of the data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments were undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of the adopted quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)) and mutual information.
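The zero-filled ("à trous"-style) multiscale filtering and the choose-max fusion rule can be sketched briefly. In the sketch below the basic filter is a generic smoothing kernel used as a stand-in; in the paper it is deduced from the mapped LS-SVM.

```python
import numpy as np
from scipy.ndimage import convolve

def upsample_filter(h, level):
    """Insert 2**level - 1 zeros between the taps of the basic filter h."""
    step = 2 ** level
    g = np.zeros((len(h) - 1) * step + 1)
    g[::step] = h
    return g

def support_values(img, h, levels=3):
    """Detail (support value) planes at each scale plus the low-pass residual."""
    planes, low = [], img.astype(float)
    for k in range(levels):
        g = upsample_filter(h, k)
        kernel = np.outer(g, g)                    # separable 2D filter
        smooth = convolve(low, kernel, mode='reflect')
        planes.append(low - smooth)                # undecimated detail plane
        low = smooth
    return planes, low

def fuse(img_a, img_b, h=np.array([0.05, 0.25, 0.4, 0.25, 0.05])):
    pa, la = support_values(img_a, h)
    pb, lb = support_values(img_b, h)
    fused = (la + lb) / 2.0                        # average the residuals
    for a, b in zip(pa, pb):
        fused += np.where(np.abs(a) >= np.abs(b), a, b)  # larger support wins
    return fused
```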
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao
2016-01-01
For many practical applications of image sensors, extending the depth-of-field (DoF) is an important research topic; if successfully implemented, it could benefit various applications, from photography to biometrics. In this work, we examine the feasibility and practicality of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building a real-time long-range iris recognition system and performing large-scale iris recognition. The keys to successful long-range iris recognition are a long DoF and invariance of image quality across object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, a 400-mm focal length, and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF factor reaches 3.71 times that of the original system without a loss of recognition accuracy. PMID:27897976
Cross-comparison of the IRS-P6 AWiFS sensor with the L5 TM, L7 ETM+, & Terra MODIS sensors
Chander, G.; Xiong, X.; Angal, A.; Choi, T.; Malla, R.
2009-01-01
As scientists and decision makers increasingly rely on multiple Earth-observing satellites to address urgent global issues, it is imperative that they can rely on the accuracy of Earth-observing data products. This paper focuses on the cross-comparison of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) with the Landsat 5 (L5) Thematic Mapper (TM), Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+), and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The cross-comparison was performed using image statistics based on large common areas observed by the sensors within 30 minutes of each other. Because of the limited availability of simultaneous observations between the AWiFS and the Landsat and MODIS sensors, only a few images were analyzed, and these initial results are presented. Regression curves and coefficients of determination for the top-of-atmosphere (TOA) trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between the sensors. © 2009 SPIE.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously produce well-exposed images of both the target celestial body and stars because the difference in their irradiance is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of imaging the target celestial body and stars, both well-exposed, within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established for the case where the WCA scheme is applied. Furthermore, the effect of the exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed models. Optimal exposure parameters are also derived by Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed models and optimal exposure parameters.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-01-01
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously produce well-exposed images of both the target celestial body and stars because the difference in their irradiance is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of imaging the target celestial body and stars, both well-exposed, within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established for the case where the WCA scheme is applied. Furthermore, the effect of the exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed models. Optimal exposure parameters are also derived by Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed models and optimal exposure parameters. PMID:28430132
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the current widespread interest in the development and application of micro/nanosatellites, there is a need for small, high-accuracy satellite attitude determination systems, because the star trackers widely used on large satellites are large and heavy and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer has proven to be a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture sun sensor composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and a CMOS active pixel sensor (APS) placed at a certain distance below the mask. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun-spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroiding accuracy of the sun image reaches 0.01 pixels, without increasing weight or power consumption, even when missing apertures and bad pixels appear on the detector due to device aging and operation in the harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
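The centroiding step behind such aperture-array sensors can be illustrated with a plain mean-shift refinement of a single spot. The paper's FMMS algorithm is a fast multi-point variant run over all 36 spots, so the sketch below, with its illustrative window size and stopping rule, is only the single-spot building block.

```python
import numpy as np

def meanshift_centroid(img, start, win=7, iters=20, tol=1e-4):
    """Refine a spot centroid by intensity-weighted mean shift.
    start: rough (row, col) guess, e.g. from a bright-pixel scan;
    assumes the spot is not closer than win//2 pixels to the border."""
    y, x = float(start[0]), float(start[1])
    half = win // 2
    for _ in range(iters):
        r0, c0 = int(round(y)) - half, int(round(x)) - half
        window = img[r0:r0 + win, c0:c0 + win].astype(float)
        rows, cols = np.mgrid[r0:r0 + win, c0:c0 + win]
        w = window.sum()
        if w <= 0:
            break
        ny, nx = (rows * window).sum() / w, (cols * window).sum() / w
        if abs(ny - y) < tol and abs(nx - x) < tol:   # converged
            return ny, nx
        y, x = ny, nx
    return y, x
```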
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information imperceptible to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera made by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information into a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrate a spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
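The "regularization-based linear algebra" inversion admits a compact sketch: with a calibrated system matrix A mapping the multispectral scene to coded sensor pixels, a Tikhonov-regularized least-squares solve recovers the spectra. Matrix sizes and the regularization weight below are illustrative.

```python
import numpy as np

def recover_multispectral(A, b, lam=1e-2):
    """Solve min ||A x - b||^2 + lam ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy usage: 40 spectral unknowns coded onto 60 sensor measurements
rng = np.random.default_rng(1)
A = rng.random((60, 40))                 # from the calibration step
x_true = rng.random(40)
b = A @ x_true + 0.01 * rng.standard_normal(60)   # noisy coded pixels
x_hat = recover_multispectral(A, b)
```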
Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications
NASA Astrophysics Data System (ADS)
Carpentieri, Bruno; Pizzolante, Raffaele
2017-12-01
Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information coming from the electromagnetic spectrum of the observed terrains. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally, they were developed for mining applications and for geology because of the capability of this kind of images to correctly identify various types of underground minerals by analysing the reflected spectrums, but their usage has spread in other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data obtained by the hyperspectral sensors, the fact that these images are acquired at a high cost by air-borne sensors and that they are generally transmitted to a base, makes it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework that allows secure and efficient transmission of hyperspectral images, by combining a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, and a state-of-art predictive-based lossless compression algorithm.
Carbon nanotube thin film strain sensor models assembled using nano- and micro-scale imaging
NASA Astrophysics Data System (ADS)
Lee, Bo Mi; Loh, Kenneth J.; Yang, Yuan-Sen
2017-07-01
Nanomaterial-based thin films, particularly those based on carbon nanotubes (CNT), have brought forth tremendous opportunities for designing next-generation strain sensors. However, their strain sensing properties can vary depending on fabrication method, post-processing treatment, and types of CNTs and polymers employed. The objective of this study was to derive a CNT-based thin film strain sensor model using inputs from nano-/micro-scale experimental measurements of nanotube physical properties. This study began with fabricating ultra-low-concentration CNT-polymer thin films, followed by imaging them using atomic force microscopy. Image processing was employed for characterizing CNT dispersed shapes, lengths, and other physical attributes, and results were used for building five different types of thin film percolation-based models. Numerical simulations were conducted to assess how the morphology of dispersed CNTs in its 2D matrix affected bulk film electrical and electromechanical (strain sensing) properties. The simulation results showed that CNT morphology had a significant impact on strain sensing performance.
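The percolation-based models referred to above can be illustrated with a minimal 2D stick-percolation connectivity check: CNTs are modeled as random line segments, and the film conducts when a cluster of intersecting sticks bridges the two electrodes. Stick count and length below are illustrative, not values measured in the AFM study.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_sticks(n, size=1.0, length=0.15):
    centers = rng.random((n, 2)) * size
    angles = rng.random(n) * np.pi
    d = 0.5 * length * np.c_[np.cos(angles), np.sin(angles)]
    return centers - d, centers + d            # endpoints (p, q) per stick

def segments_intersect(p1, q1, p2, q2):
    """Proper-crossing test (collinear overlaps ignored for this sketch)."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1, d2 = cross(p2, q2, p1), cross(p2, q2, q1)
    d3, d4 = cross(p1, q1, p2), cross(p1, q1, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def percolates(n=400, size=1.0, length=0.15):
    p, q = random_sticks(n, size, length)
    parent = list(range(n))                    # union-find over sticks
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if segments_intersect(p[i], q[i], p[j], q[j]):
                parent[find(i)] = find(j)
    left = {find(i) for i in range(n) if min(p[i, 0], q[i, 0]) <= 0}
    right = {find(i) for i in range(n) if max(p[i, 0], q[i, 0]) >= size}
    return bool(left & right)                  # a cluster spans the film

print(percolates())
```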
Luminescent sensing and imaging of oxygen: Fierce competition to the Clark electrode
2015-01-01
Luminescence‐based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid‐state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle‐based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. PMID:26113255
Sequential deconvolution from wave-front sensing using bivariate simplex splines
NASA Astrophysics Data System (ADS)
Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai
2015-05-01
Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on the simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slope measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments under different turbulence strengths show that our method yields superior image restoration and noise rejection, especially when extracting multidirectional phase derivatives.
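The least-squares step admits a generic sketch: stack the basis matrices that map spline coefficients to measured average slopes over several frames and solve one regularized normal system. The basis matrices and slope vectors are assumed inputs produced by the spline slope model.

```python
import numpy as np

def estimate_spline_coeffs(B_frames, slopes, reg=1e-6):
    """B_frames: list of (m, n) basis matrices, one per Shack-Hartmann frame;
    slopes: list of (m,) measured average-slope vectors.
    Returns the (n,) spline coefficient vector."""
    B = np.vstack(B_frames)                 # stack all measurements
    s = np.concatenate(slopes)
    n = B.shape[1]
    # Small ridge term keeps the normal system well conditioned
    return np.linalg.solve(B.T @ B + reg * np.eye(n), B.T @ s)
```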
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2016-03-01
Most photoacoustic scanners use piezoelectric detectors but these have two key limitations. Firstly, they are optically opaque, inhibiting backward mode operation. Secondly, it is difficult to achieve adequate detection sensitivity with the small element sizes needed to provide near-omnidirectional response as required for tomographic imaging. Planar Fabry-Perot (FP) ultrasound sensing etalons can overcome both of these limitations and have proved extremely effective for superficial (<1 cm) imaging applications. To achieve small element sizes (<100 μm), the etalon is illuminated with a focused laser beam. However, this has the disadvantage that beam walk-off due to the divergence of the beam fundamentally limits the etalon finesse and thus sensitivity - in essence, the problem is one of insufficient optical confinement. To overcome this, novel planoconcave micro-resonator sensors have been fabricated using precision ink-jet printed polymer domes with curvatures matching that of the laser wavefront. By providing near-perfect beam confinement, we show that it is possible to approach the maximum theoretical limit for finesse (f) imposed by the etalon mirror reflectivities (e.g. f = 400 for R = 99.2%, in contrast to a typical planar sensor value of f < 50). This yields an order of magnitude increase in sensitivity over a planar FP sensor with the same acoustic bandwidth. Furthermore, by eliminating beam walk-off, viable sensors can be made with significantly greater thickness than planar FP sensors. This provides an additional sensitivity gain for deep tissue imaging applications such as breast imaging, where detection bandwidths in the low MHz can be tolerated. For example, for a 250 μm thick planoconcave sensor with a -3 dB bandwidth of 5 MHz, the measured NEP was 4 Pa. This NEP is comparable to that provided by mm-scale piezoelectric detectors used for breast imaging applications but with more uniform frequency response characteristics and an order-of-magnitude smaller element size. Following previous proof-of-concept work, several important advances towards practical application have been made. A family of sensors with bandwidths ranging from 3 MHz to 20 MHz have been fabricated and characterised. A novel interrogation scheme based on rapid wavelength sweeping has been implemented in order to avoid previously encountered instability problems due to self-heating. Finally, a prototype microresonator-based photoacoustic scanner has been developed and applied to the problem of deep-tissue (>1 cm) photoacoustic imaging in vivo. Imaging results for second generation microresonator sensors (with R = 99.5% and thickness up to ~800 μm) are compared to the best achievable with the planar FP sensors and piezoelectric receivers.
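The quoted finesse limit follows from the standard reflectivity-limited finesse formula, reproduced here as a one-liner:

```python
import numpy as np

# Reflectivity-limited finesse of a Fabry-Perot etalon; for R = 99.2% this
# gives ~391, consistent with the f = 400 figure quoted above.
def finesse(R):
    return np.pi * np.sqrt(R) / (1.0 - R)

print(finesse(0.992))
```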
Imaging through turbulence using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
Atmospheric turbulence can significantly affect imaging along paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods cannot handle these problems efficiently: the time required for lucky imaging can increase significantly, and image processing approaches require much more complex, iterative de-blurring algorithms. We propose an alternative approach that uses a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array (MLA) to form a mini Keplerian telescope array. The image obtained by a conventional method is thereby separated into an array of images containing multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be applied to the video collected by the plenoptic sensor. The algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the differences can be attributed to turbulence effects. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.
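The pixel-selection idea can be reduced to a few lines: from a stack of co-registered frames of one image cell, keep only the temporally most stable pixels. The sketch below is a simplification; real plenoptic processing must first extract and align the MLA cells, and the keep fraction is an illustrative parameter.

```python
import numpy as np

def lucky_pixels(stack, keep=0.2):
    """stack: (T, H, W) co-registered frames of one image cell.
    Returns the reconstruction (NaN where unstable) and a stability mask."""
    var = stack.var(axis=0)                     # temporal variance per pixel
    thresh = np.quantile(var, keep)             # most stable 'keep' fraction
    stable = var <= thresh
    out = np.full(stack.shape[1:], np.nan)
    out[stable] = stack.mean(axis=0)[stable]    # average only stable pixels
    return out, stable
```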
Dagamseh, Ahmad; Wiegerink, Remco; Lammerink, Theo; Krijnen, Gijs
2013-01-01
In Nature, fish can localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we take advantage of both biomimetic artificial hair-based flow sensors arranged as an LSS and beamforming techniques to demonstrate dipole-source localization in air. Modelling and measurement results show the artificial lateral line's ability to image the position of dipole sources accurately, with an estimation error of less than 0.14 times the array length. This opens up possibilities for flow-based, near-field environment mapping that can be beneficial to, for example, biologists and robot guidance applications. PMID:23594816
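As a stand-in for the array processing used with the artificial lateral line, a conventional delay-and-sum beamformer over candidate source positions conveys the idea: steer the array to each grid point and map the summed power. The propagation speed and geometry below are illustrative.

```python
import numpy as np

def beamform_map(signals, sensor_pos, grid, fs, c=343.0):
    """signals: (n_sensors, n_samples); sensor_pos: (n_sensors, 2) positions;
    grid: (n_points, 2) candidate source positions. Returns power per point;
    the peak marks the estimated source location."""
    power = np.zeros(len(grid))
    t = np.arange(signals.shape[1]) / fs
    for k, g in enumerate(grid):
        delays = np.linalg.norm(sensor_pos - g, axis=1) / c
        delays -= delays.min()                  # relative steering delays
        summed = np.zeros_like(t)
        for s, d in zip(signals, delays):
            # advance each channel by its delay, then sum coherently
            summed += np.interp(t, t - d, s, left=0.0, right=0.0)
        power[k] = np.mean(summed ** 2)
    return power
```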
Integration of OLEDs in biomedical sensor systems: design and feasibility analysis
NASA Astrophysics Data System (ADS)
Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.
2010-04-01
Organic Light Emitting Diodes (OLEDs) have been shown to have applications in the fields of lighting and flexible displays. These devices can also be incorporated into sensors as light sources for imaging/fluorescence sensing in miniaturized systems for biomedical applications, and as low-cost displays for sensor output. Current device capabilities align well with the aforementioned applications in the form of low-power diffuse lighting and momentary/push-button dynamic displays. A top-emission OLED design is proposed that can be incorporated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth. A similar study is also carried out for a sensor output display system that functions as a pseudo-active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is supplemented with a discussion of ink-jet printing and stamping techniques that open the possibility of roll-to-roll manufacturing.
Decoding mobile-phone image sensor rolling shutter effect for visible light communications
NASA Astrophysics Data System (ADS)
Liu, Yang
2016-01-01
Optical wireless communication (OWC) using visible light, also known as visible light communication (VLC), has attracted significant attention recently. Whereas traditional OWC and VLC receivers (Rxs) are based on PIN photodiodes or avalanche photodiodes, deploying the complementary metal-oxide-semiconductor (CMOS) image sensor as the VLC Rx is attractive because nearly every person nowadays has a smart phone with an embedded CMOS image sensor. However, deploying the CMOS image sensor as the VLC Rx is challenging. In this work, we propose and demonstrate two simple contrast ratio (CR) enhancement schemes to improve the contrast of the rolling shutter pattern, and we describe their processing algorithms one by one. The experimental results show that both proposed CR enhancement schemes can significantly mitigate the high-intensity fluctuations of the rolling shutter pattern and improve the bit-error-rate performance.
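A generic decoding chain for an on-off-keyed rolling shutter pattern gives a feel for where contrast enhancement enters: average across each sensor row, remove the slow intensity envelope, then threshold and sample one decision per bit period. This row-averaging and normalization step is a stand-in for the two CR enhancement schemes proposed in the paper; rows_per_bit is an assumed input from the link parameters.

```python
import numpy as np

def decode_rolling_shutter(frame, rows_per_bit):
    """frame: (H, W) grayscale image of the flickering light source."""
    profile = frame.mean(axis=1)                    # one value per sensor row
    # Remove the slow intensity envelope to boost the stripe contrast
    kernel = np.ones(4 * rows_per_bit) / (4 * rows_per_bit)
    envelope = np.convolve(profile, kernel, mode='same')
    enhanced = profile - envelope
    # Threshold and sample one decision at the center of each bit period
    bits = (enhanced > 0).astype(int)
    centers = np.arange(rows_per_bit // 2, len(bits), rows_per_bit)
    return bits[centers]
```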
Development of a 750x750 pixels CMOS imager sensor for tracking applications
NASA Astrophysics Data System (ADS)
Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali
2017-11-01
Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixel CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. The detector, which features a smart architecture enabling very simple and powerful operation, is built using the AMIS 0.5 μm CMOS technology. It contains 750x750 rectangular pixels with a 20 μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function, which makes the device easier to operate by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of the dark level from masked lines, direct access to the windows of interest… A temperature probe is also integrated within the CMOS chip, allowing very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed, and the major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non Uniformity, quantum efficiency, Modulation Transfer Function, and intra-pixel scanning. The characterization tests are detailed in the paper. Co-60 and proton irradiation tests have also been carried out on the image sensor and the results are presented. The specific features of the 750x750 image sensor, such as its low-power CMOS design (3.3 V, power consumption < 100 mW), natural windowing (which allows efficient and robust tracking algorithms) and simple proximity electronics (thanks to the on-chip control and timing function) enabling a highly flexible architecture, make this imager a good candidate for high-performance tracking applications.
NASA Astrophysics Data System (ADS)
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
2015-04-01
For a new kind of retina-like sensor camera and a traditional rectangular-sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Owing to the retina-like sensor's special pixel distribution, image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized. The hardware platform is composed of the retina-like sensor camera, the rectangular-sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in Visual Studio 2010. Experimental results show that the system realizes simultaneous acquisition and display from both cameras.
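The coordinate transformation with sub-pixel interpolation can be sketched as a log-polar resampling with bilinear weights; the ring/sector geometry below is an assumed stand-in, not the actual pixel layout of the retina-like sensor.

```python
import numpy as np

def bilinear(img, y, x):
    """Sub-pixel sample of img at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1-dy)*(1-dx)*img[y0, x0]   + (1-dy)*dx*img[y0, x0+1] +
            dy*(1-dx)*img[y0+1, x0]     + dy*dx*img[y0+1, x0+1])

def cartesian_to_retina(img, rings=128, sectors=128, r_min=2.0):
    """Resample a Cartesian frame onto a log-polar (retina-like) lattice:
    log-spaced rings, uniformly spaced angular sectors."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx) - 1
    out = np.zeros((rings, sectors))
    for i in range(rings):
        r = r_min * (r_max / r_min) ** (i / (rings - 1))
        for j in range(sectors):
            a = 2 * np.pi * j / sectors
            out[i, j] = bilinear(img, cy + r * np.sin(a), cx + r * np.cos(a))
    return out
```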
Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao
2018-03-01
We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images; due to the complementary properties of optical and radar sensors, there is increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which differs from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
Hybrid wireless sensor network for rescue site monitoring after earthquake
NASA Astrophysics Data System (ADS)
Wang, Rui; Wang, Shuo; Tang, Chong; Zhao, Xiaoguang; Hu, Weijian; Tan, Min; Gao, Bowei
2016-07-01
This paper addresses the design of a low-cost, low-complexity, and rapidly deployable wireless sensor network (WSN) for rescue site monitoring after earthquakes. The system structure of the hybrid WSN is described. Specifically, the proposed hybrid WSN consists of two kinds of wireless nodes, i.e., the monitor node and the sensor node. The mechanism and system configuration of the wireless nodes are then detailed. A transmission control protocol (TCP) based request-response scheme is proposed to allow several monitor nodes to communicate with the monitoring center, and UDP-based image transmission algorithms with fast recovery have been developed to meet the requirements of timely delivery of on-site monitor images. In addition, the monitor node contains a ZigBee module that is used to communicate with the sensor nodes, which are designed with small dimensions to monitor the environment by sensing different physical properties in narrow spaces. By building a WSN from these wireless nodes, the monitoring center can display real-time monitor images of the monitored area and visualize all collected sensor data on geographic information systems. Finally, field experiments were performed at the Training Base of Emergency Seismic Rescue Troops of China, and the experimental results demonstrate the feasibility and effectiveness of the monitoring system.
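The UDP image transfer idea, datagrams carrying sequence numbers so the receiver can detect losses and trigger recovery, can be sketched minimally as follows; the header layout, port and chunk size are illustrative choices, not the protocol actually implemented on the nodes.

```python
import socket
import struct

CHUNK = 1024
HEADER = struct.Struct('!II')          # (sequence number, total chunks)

def send_image(data: bytes, addr=('127.0.0.1', 9000)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    total = (len(data) + CHUNK - 1) // CHUNK
    for seq in range(total):
        payload = data[seq * CHUNK:(seq + 1) * CHUNK]
        sock.sendto(HEADER.pack(seq, total) + payload, addr)
    sock.close()

def receive_image(port=9000, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', port))
    sock.settimeout(timeout)
    chunks, total = {}, None
    try:
        while total is None or len(chunks) < total:
            packet, _ = sock.recvfrom(CHUNK + HEADER.size)
            seq, total = HEADER.unpack(packet[:HEADER.size])
            chunks[seq] = packet[HEADER.size:]
    except socket.timeout:
        pass    # any missing sequence numbers would be re-requested here
    sock.close()
    return b''.join(chunks.get(i, b'') for i in range(total or 0))
```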
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used to forecast the imaging quality of a new hyperspectral imaging spectrometer and to generate simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Following the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. To enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to balance direction error against extra aliasing. Instead of using a spectral response function (SRF), the dispersive imaging characteristics of the Offner convex-grating optical system are accurately modeled from its configuration parameters. Non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Several uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating-sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric responses of the sensor simulator. The experimental results indicate that the sensor simulator is valid. PMID:25615727
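The spatial response chain (interpolate-resample first, degrade afterwards, then aggregate to the detector grid) can be sketched with a Gaussian point spread function standing in for the simulator's calibrated MTF model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def spatial_response(scene, oversample=4, mtf_sigma=1.5):
    """scene: high-resolution input radiance image (one band).
    The Gaussian blur is an illustrative MTF stand-in."""
    # 1) interpolation-resampling before degradation (limits direction error
    #    and extra aliasing when aligning to the detector grid)
    fine = zoom(scene.astype(float), oversample, order=3)
    # 2) PSF/MTF degradation applied on the fine grid
    blurred = gaussian_filter(fine, sigma=mtf_sigma * oversample)
    # 3) aggregate oversampled pixels into detector pixels
    h, w = blurred.shape
    h, w = h - h % oversample, w - w % oversample
    det = blurred[:h, :w].reshape(h // oversample, oversample,
                                  w // oversample, oversample).mean(axis=(1, 3))
    return det
```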
Non-contact capacitance based image sensing method and system
Novak, James L.; Wiczer, James J.
1995-01-01
A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.
Non-contact capacitance based image sensing method and system
Novak, James L.; Wiczer, James J.
1994-01-01
A system and a method for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.
Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy
NASA Astrophysics Data System (ADS)
Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.
2016-05-01
Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. In digital hologram recording in particular, while the sensor progressively captures each row of the hologram, the interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is demonstrated on holograms of static microscopic biological objects. The results encourage the adoption of CMOS sensors over CCDs in Digital Holographic Microscopy owing to their better resolution and lower cost.
A REAL-TIME COAL CONTENT/ORE GRADE (C2OG) SENSOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rand Swanson
2005-04-01
This is the final report of a three-year DOE-funded project titled "A real-time coal content/ore grade (C2OG) sensor". The sensor, which is based on hyperspectral imaging technology, was designed to give a machine-vision assay of ore or coal. Sensors were designed and built at Resonon, Inc., and then deployed at the Stillwater Mining Company core room in south-central Montana for analyzing platinum/palladium ore, and at the Montana Tech Spectroscopy Lab for analyzing coal and other materials. The Stillwater sensor imaged 91 feet of core and analyzed the data for surface sulfides, which are considered pathfinder minerals for platinum/palladium at this mine. Our results indicate that the sensor could deliver a relative ore grade provided tool markings and iron oxidation were kept to a minimum. Coal, talc, and titanium sponge samples were also imaged and analyzed for content and grade, with promising results. This research has led directly to a DOE SBIR Phase II award for Resonon to develop a down-hole imaging spectrometer based on the same imaging technology used in the Stillwater core room C2OG sensor. The Stillwater Mining Company has estimated that this type of imaging system could lead to a 10% reduction in waste rock from their mine and provide a $650,000 benefit per year. The proposed system may also lead to an additional 10% of ore tonnage, which would provide a total economic benefit of more than $3.1 million per year. If this benefit could be realized on other metal ores for which the proposed technology is suitable, the possible economic benefit to U.S. mines is over $70 million per year. In addition to these currently lost economic benefits, there are also major energy losses from mining waste rock and environmental impacts from mining, processing, and disposing of waste rock.
Evaluation of excitation strategy with multi-plane electrical capacitance tomography sensor
NASA Astrophysics Data System (ADS)
Mao, Mingxu; Ye, Jiamin; Wang, Haigang; Zhang, Jiaolong; Yang, Wuqiang
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technique for measuring the permittivity change of materials. Using a multi-plane ECT sensor, the three-dimensional (3D) distribution of permittivity may be represented. In this paper, three excitation strategies, including single-electrode excitation, dual-electrode excitation in the same plane, and dual-electrode excitation in different planes, are investigated by numerical simulation and experiment for two three-plane ECT sensors with 12 electrodes in total. In one sensor, the electrodes on the middle plane are in line with the others; in the other, they are rotated 45° with respect to the other two planes. A linear back projection algorithm is used to reconstruct the images, and a correlation coefficient is used to evaluate the image quality. The capacitance data and sensitivity distribution for each measurement strategy and sensor model are analyzed. Based on simulation and experimental results using noise-free and noisy capacitance data, the performance of the three strategies is evaluated.
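Linear back projection itself is a one-line reconstruction: normalized capacitance changes are projected back through the sensitivity matrix and renormalized. The shapes below are illustrative; S is the sensitivity map assumed to come from the sensor model.

```python
import numpy as np

def lbp_reconstruct(S, c_meas, c_low, c_high):
    """S: (n_measurements, n_pixels) sensitivity matrix;
    c_low/c_high: calibration capacitances for the low-/high-permittivity
    reference states. Returns a normalized permittivity image (flattened)."""
    lam = (c_meas - c_low) / (c_high - c_low)   # normalized capacitance
    g = S.T @ lam                               # back projection
    g /= S.T @ np.ones(S.shape[0])              # sensitivity normalization
    return np.clip(g, 0.0, 1.0)
```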
NASA Astrophysics Data System (ADS)
Näthe, Paul; Becker, Rolf
2014-05-01
Soil moisture and plant-available water are important environmental parameters that affect plant growth and crop yield; hence, they are significant parameters for vegetation monitoring and precision agriculture. However, validation through ground-based soil moisture measurements is necessary when assessing soil moisture, plant canopy temperature, soil temperature and soil roughness with airborne hyperspectral imaging systems in a corresponding hyperspectral imaging campaign as part of the INTERREG IV A project SMART INSPECTORS. To this end, commercially available sensors for matric potential, plant-available water and volumetric water content are utilized for automated measurements with smart sensor nodes, which are developed on the basis of open-source 868 MHz radio modules featuring a full-scale microcontroller unit that allows autarkic operation of the sensor nodes on batteries in the field. The data generated by each of these sensor nodes are transferred wirelessly with an open-source protocol to a central node, the so-called "gateway". This gateway collects, interprets and buffers the sensor readings and eventually pushes the time series onto a server-based database. The entire data processing chain, from the sensor reading to the final storage of the time series on a server, is realized with open-source hardware and software in such a way that the recorded data can be accessed from anywhere through the internet. We present how this open-source-based wireless sensor network is developed and specified for the application of ground truthing, and point out the system's perspectives and potential with respect to usability and applicability for vegetation monitoring and precision agriculture. Regarding the corresponding hyperspectral imaging campaign, results from ground measurements are discussed in terms of their contribution to the remote sensing system. Finally, the significance of the wireless sensor network for the application of ground truthing is determined.
A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions
NASA Astrophysics Data System (ADS)
Hagerty, S.; Ellis, H., Jr.
2016-09-01
Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high-fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution, resulting in accurate radiometry even when the RSO is a point source. The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and a 4-layer atmospheric turbulence model for ground-based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of three-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.
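Of the listed effects, intra-frame smear illustrates the temporal oversampling approach most directly: the target image is shifted along its motion track at sub-frame steps and averaged over the integration time. The motion model and step count in this sketch are illustrative, not PROXOR™'s actual implementation.

```python
import numpy as np
from scipy.ndimage import shift

def smear(image, velocity_px_per_s, t_int=0.1, substeps=16):
    """velocity_px_per_s: (vy, vx) apparent target motion on the focal plane;
    t_int: integration time in seconds. Returns the time-averaged frame."""
    acc = np.zeros_like(image, dtype=float)
    for k in range(substeps):
        t = (k + 0.5) / substeps * t_int          # mid-point of each sub-step
        dy, dx = velocity_px_per_s[0] * t, velocity_px_per_s[1] * t
        acc += shift(image.astype(float), (dy, dx), order=1, mode='constant')
    return acc / substeps
```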
Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel
2018-05-03
Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for further processing. Absolute calibration of the sensor makes physical modelling of the imaging process possible and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications, as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels varied within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
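The panel-free conversion rests on a standard relation: with calibrated at-sensor radiance L and simultaneously measured downwelling irradiance E, the per-band reflectance factor is R = πL/E. A minimal sketch:

```python
import numpy as np

def reflectance_factor(L, E):
    """L: (bands, H, W) calibrated radiance cube, W m^-2 sr^-1 nm^-1;
    E: (bands,) downwelling irradiance spectrum, W m^-2 nm^-1.
    Returns the per-band reflectance factor cube (R = pi * L / E)."""
    return np.pi * L / E[:, None, None]
```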
Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel
2018-01-01
Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for further processing. Absolute calibration of the sensor makes physical modelling of the imaging process possible and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurements for drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications, as it does not require in situ reflectance panels for converting the sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory NPL (Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels varied within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560
No scanning depth imaging system based on TOF
NASA Astrophysics Data System (ADS)
Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo
2016-03-01
To quickly obtain a 3D model of real-world objects, multi-point ranging is very important. However, traditional measuring methods usually adopt the principle of point-by-point or line-by-line measurement, which is slow and inefficient. In this paper, a scanner-free depth imaging system based on TOF (time of flight) is proposed. The system is composed of a light source circuit, a special infrared image sensor module, an image data processor and controller, a data cache circuit, a communication circuit, and so on. Following the working principle of TOF measurement, an image sequence is collected by the high-speed CMOS sensor, the distance information is obtained by measuring the phase difference, and the amplitude image is also calculated. Experiments were conducted, and the results show that the depth imaging system achieves scanner-free depth imaging with good performance.
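The phase-difference ranging principle is standard for continuous-wave TOF sensors: four per-pixel samples at 0°/90°/180°/270° demodulation shifts yield the phase, and hence the distance, plus an amplitude image. The modulation frequency below is illustrative.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def tof_depth(Q0, Q1, Q2, Q3, f_mod=20e6):
    """Q0..Q3: per-pixel samples at 0/90/180/270 degree demodulation shifts.
    Returns per-pixel depth (m) and amplitude."""
    phase = np.arctan2(Q3 - Q1, Q0 - Q2)        # per-pixel phase difference
    phase = np.mod(phase, 2 * np.pi)
    depth = C * phase / (4 * np.pi * f_mod)     # unambiguous up to c/(2*f_mod)
    amplitude = 0.5 * np.hypot(Q3 - Q1, Q0 - Q2)
    return depth, amplitude
```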
Combined imaging and chemical sensing using a single optical imaging fiber.
Bronk, K S; Michael, K L; Pantano, P; Walt, D R
1995-09-01
Despite many innovations and developments in the field of fiber-optic chemical sensors, optical fibers have not been employed to both view a sample and concurrently detect an analyte of interest. While chemical sensors employing a single optical fiber or a noncoherent fiber-optic bundle have been applied to a wide variety of analytical determinations, they cannot be used for imaging. Similarly, coherent imaging fibers have been employed only for their originally intended purpose, image transmission. We herein report a new technique for viewing a sample and measuring surface chemical concentrations that employs a coherent imaging fiber. The method is based on the deposition of a thin, analyte-sensitive polymer layer on the distal surface of a 350-μm-diameter imaging fiber. We present results from a pH sensor array and an acetylcholine biosensor array, each of which contains approximately 6000 optical sensors. The acetylcholine biosensor has a detection limit of 35 μM and a fast (<1 s) response time. In association with an epifluorescence microscope and a charge-coupled device, these modified imaging fibers can display visual information of a remote sample with 4-μm spatial resolution, allowing for alternating acquisition of both chemical analysis and visual histology.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, consisting of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD provides logic and timing control for the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, and the imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases computational complexity, and effectively preserves image edges.
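The edge-oriented adaptive rule can be sketched for the green channel of a Bayer mosaic: at edge pixels, interpolate along the direction of the smaller gradient; elsewhere, fall back to bilinear averaging. The gradient threshold below is an illustrative choice, not a value from the paper.

```python
def interp_green(bayer, y, x, thresh=20):
    """Estimate G at a non-green Bayer site (y, x), away from the borders.
    bayer: 2D array of raw sensor values."""
    up, down = int(bayer[y - 1, x]), int(bayer[y + 1, x])
    left, right = int(bayer[y, x - 1]), int(bayer[y, x + 1])
    dv, dh = abs(up - down), abs(left - right)
    if dv > thresh or dh > thresh:          # edge pixel: follow the edge
        if dv < dh:
            return (up + down) // 2         # interpolate along vertical edge
        return (left + right) // 2          # interpolate along horizontal edge
    return (up + down + left + right) // 4  # non-edge pixel: bilinear
```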
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-03-01
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimating the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similarly to a stereo-matching system but without the need for image correlation. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity, but also surprisingly high performance.
DNAzyme sensors for detection of metal ions in the environment and imaging them in living cells
McGhee, Claire E.; Loh, Kang Yong
2017-01-01
The on-site and real-time detection of metal ions is important for environmental monitoring and for understanding the impact of metal ions on human health. However, developing sensors selective for a wide range of metal ions that can work in the complex matrices of untreated samples and cells presents significant challenges. To meet these challenges, DNAzymes, an emerging class of metal-ion-dependent enzymes selective for almost any metal ion, have been functionalized with fluorophores, nanoparticles and other imaging agents and incorporated into sensors for the detection of metal ions in environmental samples and for imaging metal ions in living cells. Herein, we highlight recent developments in DNAzyme-based fluorescent, colorimetric, SERS, electrochemical and electrochemiluminescent sensors for metal ions for these applications. PMID:28458112
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as by resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps, which provide summary data for the solution of the detection, tracking and classification problems. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is well suited to fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: the likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
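Fusing per-sensor evidence on a common spatial grid can be sketched by summing log-likelihoods of independent modalities, with a prior map entering the same way; this is a minimal rendition of the likelihood-map idea, not the paper's exact formulation.

```python
import numpy as np

def fuse_likelihood_maps(sensor_maps, prior=None, eps=1e-12):
    """sensor_maps: list of (H, W) per-cell target-presence likelihoods in
    [0, 1], one grid per sensor modality; prior: optional (H, W) prior map.
    Returns a fused map normalized to a maximum of 1."""
    log_map = np.zeros_like(sensor_maps[0], dtype=float)
    for m in sensor_maps:
        log_map += np.log(np.clip(m, eps, 1.0))   # independent modalities
    if prior is not None:
        log_map += np.log(np.clip(prior, eps, 1.0))
    return np.exp(log_map - log_map.max())
```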
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang
1994-01-01
This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single PMMW image frame. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution than PMMW images. Algorithms for reliable recognition of runways and accurate estimation of the spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.
Robust optical sensors for safety critical automotive applications
NASA Astrophysics Data System (ADS)
De Locht, Cliff; De Knibber, Sven; Maddalena, Sam
2008-02-01
Optical sensors for the automotive industry need to be robust, high performing and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. The main strategies to lower optical sensor entry barriers in the automotive market include: sensor calibration and tuning performed by the sensor manufacturer, on-chip sensor test modes to guarantee functional integrity during operation, and suitable package technology, which is key. In conclusion, optical sensor applications are growing in automotive. Optical sensor robustness has matured to the level of safety-critical applications such as Electrical Power Assisted Steering (EPAS) and Drive-by-Wire, served by systems based on optical linear arrays, and Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment, served by systems based on camera imagers.
Hyperspectral imaging simulation of object under sea-sky background
NASA Astrophysics Data System (ADS)
Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui
2016-10-01
Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring, search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object under a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectral model, rough-surface scattering theory and the volume scattering characteristics of water. The measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the sea-surface object and the spectral image. Finally, the object spectrum is obtained by spatial transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters and the sensor parameters, providing a tool for payload demonstration and algorithm development.
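The chain of terms the simulation composes can be summarized for a single band by the usual at-sensor radiance equation; the numbers below are illustrative placeholders, not MODTRAN outputs:

```python
import math

# At-sensor radiance for one spectral band, composed the way the
# simulation chains its terms. All values are illustrative placeholders.
E_sun = 1.2       # solar + sky irradiance on the sea surface (W/m^2/nm)
rho = 0.05        # BRDF-integrated object/sea composite reflectance
tau = 0.85        # sea-to-sensor atmospheric transmittance
L_path = 0.01     # atmospheric backscattered (path) radiance (W/m^2/sr/nm)

L_sensor = E_sun * rho / math.pi * tau + L_path
print(f"{L_sensor:.4f} W/m^2/sr/nm")   # reflected term plus path term
```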
Trigger and Readout System for the Ashra-1 Detector
NASA Astrophysics Data System (ADS)
Aita, Y.; Aoki, T.; Asaoka, Y.; Morimoto, Y.; Motz, H. M.; Sasaki, M.; Abiko, C.; Kanokohata, C.; Ogawa, S.; Shibuya, H.; Takada, T.; Kimura, T.; Learned, J. G.; Matsuno, S.; Kuze, S.; Binder, P. M.; Goldman, J.; Sugiyama, N.; Watanabe, Y.
A highly sophisticated trigger and readout system has been developed for the All-sky Survey High Resolution Air-shower (Ashra) detector. The Ashra-1 detector has a 42-degree-diameter field of view. Detecting Cherenkov and fluorescence light against the large background in such a wide field of view requires a finely segmented, high-speed trigger and readout system. The system is composed of an optical fiber image transmission system, a 64 × 64 channel trigger sensor and an FPGA-based trigger logic processor. The system typically processes the image within 10 to 30 ns and opens the shutter on the fine CMOS sensor. The 64 × 64 coarsely split image is transferred via a precisely aligned 64 × 64 optical fiber bundle to a photon sensor. Current signals from the photon sensor are discriminated by custom-made trigger amplifiers. The FPGA-based processor processes the 64 × 64 hit pattern, and the corresponding partial area of the fine image is acquired. A commissioning Earth-skimming tau neutrino observational search was carried out with this trigger system. In addition to the geometrical advantage of the Ashra observational site, the excellent tau shower axis measurement based on the fine imaging and the night-sky background rejection based on the fine and fast imaging allow a zero-background tau shower search. Adoption of the optical fiber bundle and trigger LSI realized the 4k-channel trigger system cheaply. Detectability of tau showers is also confirmed by simultaneously observed Cherenkov air showers. Reduction of the trigger threshold appears to enhance the effective area, especially in the PeV tau neutrino energy region. A new two-dimensional trigger LSI was introduced and the trigger threshold was lowered. A new calibration system for the trigger system was recently developed and introduced to the Ashra detector.
A Universal Vacant Parking Slot Recognition System Using Sensors Mounted on Off-the-Shelf Vehicles.
Suhr, Jae Kyu; Jung, Ho Gi
2018-04-16
An automatic parking system is an essential part of autonomous driving, and it starts by recognizing vacant parking spaces. This paper proposes a method that can recognize various types of parking slot markings in a variety of lighting conditions including daytime, nighttime, and underground. The proposed method can readily be commercialized since it uses only those sensors already mounted on off-the-shelf vehicles: an around-view monitor (AVM) system, ultrasonic sensors, and in-vehicle motion sensors. This method first detects separating lines by extracting parallel line pairs from AVM images. Parking slot candidates are generated by pairing separating lines based on the geometric constraints of the parking slot. These candidates are confirmed by recognizing their entrance positions using line and corner features and classifying their occupancies using ultrasonic sensors. For more reliable recognition, this method uses the separating lines and parking slots not only found in the current image but also found in previous images by tracking their positions using the in-vehicle motion-sensor-based vehicle odometry. The proposed method was quantitatively evaluated using a dataset obtained during the day, night, and underground, and it outperformed previous methods by showing a 95.24% recall and a 97.64% precision.
A Universal Vacant Parking Slot Recognition System Using Sensors Mounted on Off-the-Shelf Vehicles
2018-01-01
An automatic parking system is an essential part of autonomous driving, and it starts by recognizing vacant parking spaces. This paper proposes a method that can recognize various types of parking slot markings in a variety of lighting conditions including daytime, nighttime, and underground. The proposed method can readily be commercialized since it uses only those sensors already mounted on off-the-shelf vehicles: an around-view monitor (AVM) system, ultrasonic sensors, and in-vehicle motion sensors. This method first detects separating lines by extracting parallel line pairs from AVM images. Parking slot candidates are generated by pairing separating lines based on the geometric constraints of the parking slot. These candidates are confirmed by recognizing their entrance positions using line and corner features and classifying their occupancies using ultrasonic sensors. For more reliable recognition, this method uses the separating lines and parking slots not only found in the current image but also found in previous images by tracking their positions using the in-vehicle motion-sensor-based vehicle odometry. The proposed method was quantitatively evaluated using a dataset obtained during the day, night, and underground, and it outperformed previous methods by showing a 95.24% recall and a 97.64% precision. PMID:29659512
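The pairing step can be sketched as follows; the line representation, thresholds and slot-width range are hypothetical stand-ins for the paper's geometric constraints:

```python
import numpy as np

# Hypothetical detected separating lines in AVM top-view coordinates
# (metres): each line given by a point on it and a unit direction.
lines = [
    {"p": np.array([0.0, 0.0]), "d": np.array([0.0, 1.0])},
    {"p": np.array([2.5, 0.0]), "d": np.array([0.0, 1.0])},
    {"p": np.array([9.0, 0.0]), "d": np.array([1.0, 0.0])},
]

SLOT_WIDTH = (2.0, 3.5)      # plausible perpendicular-slot widths (m)
MAX_ANGLE_DEG = 5.0          # how far from parallel a pair may be

def slot_candidates(lines):
    """Pair near-parallel line detections whose separation matches
    the geometric constraints of a parking slot."""
    cands = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            a, b = lines[i], lines[j]
            cosang = abs(np.dot(a["d"], b["d"]))
            if np.degrees(np.arccos(np.clip(cosang, 0, 1))) > MAX_ANGLE_DEG:
                continue                      # not parallel enough
            # Perpendicular distance from a point of b to line a.
            n = np.array([-a["d"][1], a["d"][0]])
            sep = abs(np.dot(b["p"] - a["p"], n))
            if SLOT_WIDTH[0] <= sep <= SLOT_WIDTH[1]:
                cands.append((i, j, float(sep)))
    return cands

print(slot_candidates(lines))   # [(0, 1, 2.5)] -- lines 0 and 1 form a slot
```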
Choi, Insub; Kim, JunHee; Kim, Donghyun
2016-12-08
Existing vision-based displacement sensors (VDSs) extract displacement data through changes in the movement of a target that is identified within the image using natural or artificial structure markers. A target-less vision-based displacement sensor (hereafter called "TVDS") is proposed. It can extract displacement data without targets, using feature points in the image of the structure instead. The TVDS extracts and tracks the feature points without a target in the image through image convex hull optimization, which adjusts and optimizes the threshold values so that every image frame has the same convex hull, the center of which is the feature point. In addition, the pixel coordinates of the feature point can be converted to physical coordinates through a scaling factor map calculated based on the distance, angle, and focal length between the camera and target. The accuracy of the proposed scaling factor map was verified through an experiment in which the diameter of a circular marker was estimated. A white-noise excitation test was conducted, and the reliability of the displacement data obtained from the TVDS was analyzed by comparing them with the displacement data of the structure measured with a laser displacement sensor (LDS). The dynamic characteristics of the structure, such as the mode shape and natural frequency, were extracted using the obtained displacement data and compared with numerical analysis results. The TVDS yielded highly reliable displacement data and highly accurate dynamic characteristics, such as the natural frequency and mode shape of the structure. As the proposed TVDS can extract displacement data easily even without artificial or natural markers, it has the advantage of extracting displacement data from any portion of the structure in the image.
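For a fronto-parallel pinhole view the scaling-factor idea reduces to distance times pixel pitch over focal length; below is a simplified one-factor sketch with an optional tilt term (camera values are made up, and this is not the paper's full scaling factor map):

```python
import numpy as np

def scaling_factor(distance_m, focal_len_mm, pixel_pitch_um, tilt_deg=0.0):
    """Physical length per pixel (mm/pixel) for a pinhole camera viewing
    a plane at `distance_m`, optionally tilted by `tilt_deg` about the
    image row axis."""
    f = focal_len_mm / 1000.0            # m
    pitch = pixel_pitch_um * 1e-6        # m
    s = distance_m * pitch / f           # m per pixel, fronto-parallel
    return 1000.0 * s / np.cos(np.radians(tilt_deg))  # mm/pixel

# 10 m to the structure, 50 mm lens, 5 um pixels, 10 deg viewing angle:
print(round(scaling_factor(10.0, 50.0, 5.0, 10.0), 3), "mm/pixel")  # ~1.015
```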
Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan
2015-01-01
An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
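A minimal sketch of the degree-1 flavor of this idea, on simulated logarithmic pixels: each pixel's monotonic response is mapped onto the array-mean response with one per-pixel linear fit, after which correction is pure arithmetic. This illustrates the principle, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logarithmic pixels: response y = a + b*ln(x), with per-pixel
# mismatch in offset a and gain b -- the source of fixed pattern noise.
n_pixels, n_levels = 1000, 8
a = 0.5 + 0.05 * rng.standard_normal(n_pixels)
b = 1.0 + 0.02 * rng.standard_normal(n_pixels)
x = np.logspace(0, 4, n_levels)                      # uniform stimuli
y = a[:, None] + b[:, None] * np.log(x)[None, :]     # pixel responses

# Degree-1 polynomial calibration: regress the array-mean response
# onto each pixel's own (nonlinear but monotonic) response.
y_ref = y.mean(axis=0)                               # reference curve
ym = y.mean(axis=1, keepdims=True)
c1 = ((y - ym) * (y_ref - y_ref.mean())).sum(axis=1) \
     / ((y - ym) ** 2).sum(axis=1)
c0 = y_ref.mean() - c1 * ym[:, 0]

# FPN correction is then pure per-pixel arithmetic:
y_corr = c0[:, None] + c1[:, None] * y
print(y.std(axis=0).mean(), "->", y_corr.std(axis=0).mean())  # FPN shrinks
```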
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the CMOS sensor image data are used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode is successfully demonstrated, using Walsh-design CAOS pixel codes of up to 4096 bits length at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, unspoiled bright-light spectrally diverse targets.
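The CDMA encode/decode principle can be sketched with a Sylvester-ordered Walsh-Hadamard code set: each CAOS pixel is time-modulated by one code, the point detector records the sum, and per-pixel irradiances are recovered by correlation. Sizes and signals below are illustrative:

```python
import numpy as np

def walsh(n):
    """n x n Walsh-Hadamard matrix (n a power of two); rows are codes."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Assign one Walsh code per CAOS pixel; the point detector sees the
# sum of all time-modulated pixel irradiances.
n_pix = 64
codes = walsh(n_pix)                      # code length == n_pix here
irradiance = np.random.default_rng(1).uniform(0, 1, n_pix)
detector = codes.T @ irradiance           # one sample per code bit

# Decoding: correlate the detector time series with each pixel's code.
recovered = codes @ detector / n_pix
print(np.allclose(recovered, irradiance))   # True
```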
A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.
Wang, Shuang; Geng, Yunhai; Jin, Rongyu
2015-12-12
In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical systems, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the error in star image point position caused by the above effects is greatly reduced, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved markedly.
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
Towards a framework for agent-based image analysis of remote-sensing data.
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-04-03
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).
A design of endoscopic imaging system for hyper long pipeline based on wheeled pipe robot
NASA Astrophysics Data System (ADS)
Zheng, Dongtian; Tan, Haishu; Zhou, Fuqiang
2017-03-01
An endoscopic imaging system for hyper-long pipelines is designed to acquire the inner-surface image in advance of defect measurement in hyper-long pipelines. The system consists of structured-light sensors, a pipe robot and a control system. The pipe robot has a wheeled structure, with the sensor at the front of the vehicle body. The control system, at the tail of the vehicle body, takes the form of upper and lower computers. The sensor can be translated and scanned in three steps: walking, lifting and scanning, so the inner-surface image can be acquired at a plurality of positions and at different angles. Imaging experiments show that the system's transmission distance is longer, the acquisition angles are more diverse and the results are more comprehensive than those of traditional imaging systems, which lays an important foundation for later inner-surface vision measurement.
Herrera, Pedro Javier; Pajares, Gonzalo; Guijarro, Maria; Ruz, José J.; Cruz, Jesús M.; Montes, Fernando
2009-01-01
This paper describes a novel feature-based stereovision matching process based on a pair of omnidirectional images of forest stands acquired with a stereovision sensor equipped with fish-eye lenses. The stereo analysis problem consists of the following steps: image acquisition, camera modelling, feature extraction, image matching and depth determination. Once the depths of significant points on the trees are obtained, the growing stock volume can be estimated by considering the geometrical camera modelling, which is the final goal. The key steps are feature extraction and image matching, and this paper is devoted solely to these two steps. In a first stage, a segmentation process extracts the trunks, which are the regions used as features, where each feature is identified through a set of attributes or properties useful for matching. In the second step, the features are matched by applying the following four well-known matching constraints: epipolar, similarity, ordering and uniqueness. The combination of the segmentation and matching processes for this specific kind of sensor makes the main contribution of the paper. The method is tested with satisfactory results and compared against the human expert criterion. PMID:22303134
NASA Astrophysics Data System (ADS)
Ratliff, Bradley M.; LeMaster, Daniel A.
2012-06-01
Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.
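The paper's microgrid algorithm is not reproduced here, but the family it belongs to can be illustrated with a constant-statistics-style sketch: given enough scene motion, the high-spatial-frequency part of each pixel's temporal mean is attributed to fixed-pattern offsets:

```python
import numpy as np
from scipy import ndimage

def scene_based_nuc(frames):
    """Constant-statistics style correction: with enough scene motion,
    every pixel sees similar irradiance statistics over time, so the
    per-pixel temporal mean estimates its offset nonuniformity.
    (Illustrative only -- not the paper's microgrid algorithm.)"""
    mean = frames.mean(axis=0)
    # Low-pass of the mean keeps real scene structure; the residual
    # high-frequency part is attributed to fixed pattern noise.
    fpn = mean - ndimage.uniform_filter(mean, size=9)
    return frames - fpn

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (50, 64, 64))          # moving, decorrelated scene
fpn = rng.normal(0, 0.2, (64, 64))               # per-pixel offsets
corrected = scene_based_nuc(scene + fpn)
print(np.std((scene + fpn)[0] - scene[0]), np.std(corrected[0] - scene[0]))
```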
NASA Astrophysics Data System (ADS)
Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.
2014-06-01
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show a detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.
A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing
Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela
2014-01-01
In this paper, we propose a new sensor for the detection and analysis of dusts (seen as powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and it allows obtaining both qualitative and quantitative information on the accumulation of dust. This information aims to identify the geometric and topological features of the elements of the deposit. The curators can use this information in order to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, a preprocessing software to improve the captured image and an analysis algorithm for the feature extraction and the classification of the elements of the dust deposit. We carried out some tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977
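A toy version of the enhancement/segmentation/metrics pipeline, with a made-up threshold and elongation rule standing in for the paper's classifier:

```python
import numpy as np
from scipy import ndimage

def analyse_dust(image, threshold):
    """Segment dark dust deposits on a bright reference surface and
    return simple geometric descriptors per element (a toy version of
    the enhancement -> segmentation -> metrics pipeline)."""
    smooth = ndimage.gaussian_filter(image, sigma=1.0)   # noise reduction
    mask = smooth < threshold                            # dark particles
    labels, n = ndimage.label(mask)
    feats = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        h, w = int(np.ptp(ys)) + 1, int(np.ptp(xs)) + 1
        elong = max(h, w) / max(1, min(h, w))            # crude fibre test
        feats.append({"area": ys.size,
                      "class": "fibre" if elong > 3 else "powder"})
    return feats

# Synthetic frame: bright background, one grain and one fibre-like streak.
img = np.full((64, 64), 200.0)
img[10:13, 10:13] = 50.0          # powder grain
img[30:32, 5:40] = 50.0           # fibre
print(analyse_dust(img, threshold=128))
```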
NASA Technical Reports Server (NTRS)
Storey, James; Roy, David P.; Masek, Jeffrey; Gascon, Ferran; Dwyer, John; Choate, Michael
2016-01-01
The Landsat-8 and Sentinel-2 sensors provide multi-spectral image data with similar spectral and spatial characteristics that together provide improved temporal coverage globally. Both systems are designed to register Level 1 products to a reference image framework; however, the Landsat-8 framework, based upon the Global Land Survey images, contains residual geolocation errors leading to an expected sensor-to-sensor misregistration of 38 m (2σ). These misalignments vary geographically but should be stable for a given area. The Landsat framework will be readjusted for consistency with the Sentinel-2 Global Reference Image, with completion expected in 2018. In the interim, users can measure Landsat-to-Sentinel tie points to quantify the misalignment in their area of interest and, if appropriate, reproject the data to better alignment.
Storey, James C.; Roy, David P.; Masek, Jeffrey; Gascon, Ferran; Dwyer, John L.; Choate, Michael J.
2016-01-01
The Landsat-8 and Sentinel-2 sensors provide multi-spectral image data with similar spectral and spatial characteristics that together provide improved temporal coverage globally. Both systems are designed to register Level 1 products to a reference image framework; however, the Landsat-8 framework, based upon the Global Land Survey images, contains residual geolocation errors leading to an expected sensor-to-sensor misregistration of 38 m (2σ). These misalignments vary geographically but should be stable for a given area. The Landsat framework will be readjusted for consistency with the Sentinel-2 Global Reference Image, with completion expected in 2018. In the interim, users can measure Landsat-to-Sentinel tie points to quantify the misalignment in their area of interest and, if appropriate, reproject the data to better alignment.
Object Acquisition and Tracking for Space-Based Surveillance
1991-11-27
… on multiple image frames, and, accordingly, requires a smaller signal-to-noise ratio. It is sometimes referred to as track before detect, and can … smaller sensor optics. Both the traditional and track-before-detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
NASA Astrophysics Data System (ADS)
Liu, Jing; Chen, Wei; Wang, Zujun; Xue, Yuanyuan; Yao, Zhibin; He, Baoping; Ma, Wuying; Jin, Junshan; Sheng, Jiangkun; Dong, Guantao
2017-06-01
This paper presents an investigation of total ionizing dose (TID) induced image lag sources in pinned photodiode (PPD) CMOS image sensors based on radiation experiments and TCAD simulation. The radiation experiments were carried out at a Cobalt-60 gamma-ray source. The experimental results show that image lag degradation becomes increasingly serious with increasing TID. Combined with the TCAD simulation results, we can confirm that the junction of the PPD and the transfer gate (TG) is an important region forming image lag during irradiation. The simulations demonstrate that TID can generate a potential pocket leading to incomplete charge transfer.
Adaptive wavefront sensor based on the Talbot phenomenon.
Podanchuk, Dmytro V; Goloborodko, Andrey A; Kotov, Myhailo M; Kovalenko, Andrey V; Kurashov, Vitalij N; Dan'ko, Volodymyr P
2016-04-20
A new adaptive method of wavefront sensing is proposed and demonstrated. The method is based on the Talbot self-imaging effect, which is observed in an illuminating light beam with strong second-order aberration. Compensation of defocus and astigmatism is achieved with an appropriate choice of the size of the rectangular unit cell of the diffraction grating, which is performed iteratively. A liquid-crystal spatial light modulator is used for this purpose. Self-imaging of a rectangular grating in an astigmatic light beam is demonstrated experimentally. High-order aberrations are detected with respect to the compensated second-order aberration. Comparative results of wavefront sensing with a Shack-Hartmann sensor and the proposed sensor are presented.
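For reference, the paraxial Talbot self-imaging distance behind a grating of period d at wavelength lambda is z_T = 2 d^2 / lambda; the iterative grating-cell adjustment exploits how this self-image depends on the illuminating wavefront. Numbers below are illustrative:

```python
# Paraxial Talbot self-imaging distance: z_T = 2 d^2 / lambda.
d = 50e-6        # grating period (m), illustrative
lam = 633e-9     # He-Ne wavelength (m), illustrative
z_T = 2 * d**2 / lam
print(f"Talbot distance: {z_T * 1000:.1f} mm")   # ~7.9 mm
```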
Compact, self-contained enhanced-vision system (EVS) sensor simulator
NASA Astrophysics Data System (ADS)
Tiana, Carlo
2007-04-01
We describe the model SIM-100 PC-based simulator, for imaging sensors used, or planned for use, in Enhanced Vision System (EVS) applications. Typically housed in a small-form-factor PC, it can be easily integrated into existing out-the-window visual simulators for fixed-wing or rotorcraft, to add realistic sensor imagery to the simulator cockpit. Multiple bands of infrared (short-wave, midwave, extended-midwave and longwave) as well as active millimeter-wave RADAR systems can all be simulated in real time. Various aspects of physical and electronic image formation and processing in the sensor are accurately (and optionally) simulated, including sensor random and fixed pattern noise, dead pixels, blooming, B-C scope transformation (MMWR). The effects of various obscurants (fog, rain, etc.) on the sensor imagery are faithfully represented and can be selected by an operator remotely and in real-time. The images generated by the system are ideally suited for many applications, ranging from sensor development engineering tradeoffs (Field Of View, resolution, etc.), to pilot familiarization and operational training, and certification support. The realistic appearance of the simulated images goes well beyond that of currently deployed systems, and beyond that required by certification authorities; this level of realism will become necessary as operational experience with EVS systems grows.
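A toy stand-in for the electronic-effects stage of such a simulator, overlaying temporal noise, column fixed-pattern noise and dead pixels on a clean rendered frame (all parameters are illustrative, not SIM-100 values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sensor_effects(clean, dead_frac=0.001):
    """Overlay typical focal-plane artefacts on a clean rendered frame:
    temporal random noise, column fixed pattern noise and dead pixels."""
    h, w = clean.shape
    img = clean + rng.normal(0.0, 2.0, (h, w))          # temporal noise
    img += rng.normal(0.0, 1.5, (1, w))                 # column FPN
    dead = rng.random((h, w)) < dead_frac               # stuck-off pixels
    img[dead] = 0.0
    return np.clip(img, 0, 255)

clean = np.tile(np.linspace(0, 255, 320), (240, 1))    # test ramp scene
frame = simulate_sensor_effects(clean)
print(frame.shape, frame.min(), frame.max())
```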
Study on an agricultural environment monitoring server system using Wireless Sensor Networks.
Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun
2010-01-01
This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. The collected information is converted into a database by the agricultural environment monitoring server, which consists of a sensor manager that handles information collected from the WSN sensors, an image information manager that handles image information collected from CCTVs, and a GPS manager that processes the location information of the system; the results are provided to producers. In addition, a solar cell-based power supply is implemented so that the server system can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.
High-speed uncooled MWIR hostile fire indication sensor
NASA Astrophysics Data System (ADS)
Zhang, L.; Pantuso, F. P.; Jin, G.; Mazurenko, A.; Erdtmann, M.; Radhakrishnan, S.; Salerno, J.
2011-06-01
Hostile fire indication (HFI) systems require high-resolution sensor operation at extremely high speeds to capture hostile fire events, including rocket-propelled grenades, anti-aircraft artillery, heavy machine guns, anti-tank guided missiles and small arms. HFI must also be conducted in a waveband with large available signal and low background clutter, in particular the mid-wavelength infrared (MWIR). The shortcoming of current HFI sensors in the MWIR is that the sensor bandwidth is not sufficient to achieve the required frame rate at the high sensor resolution. Furthermore, current HFI sensors require cryogenic cooling that contributes to size, weight, and power (SWAP) in aircraft-mounted applications where these factors are at a premium. Based on its uncooled photomechanical infrared imaging technology, Agiltron has developed a low-SWAP, high-speed MWIR HFI sensor that breaks the bandwidth bottleneck typical of current infrared sensors. This accomplishment is made possible by using a commercial-off-the-shelf, high-performance visible imager as the readout integrated circuit and physically separating this visible imager from the MWIR-optimized photomechanical sensor chip. With this approach, we have achieved high-resolution operation of our MWIR HFI sensor at 1000 fps, which is unprecedented for an uncooled infrared sensor. We have field tested our MWIR HFI sensor for detecting all hostile fire events mentioned above at several test ranges under a wide range of environmental conditions. The field testing results will be presented.
Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors
Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David
2016-01-01
This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284
Development of plenoptic infrared camera using low dimensional material based photodetectors
NASA Astrophysics Data System (ADS)
Chen, Liangliang
Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rates, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse-induced processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms in light-field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and expressed in a compressive framework, and computational algorithms are applied to reconstruct images beyond 2-D static information. Super-resolution signal processing is then used to enhance and improve the spatial resolution of the images. The whole camera system provides deeply detailed content for infrared spectrum sensing.
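The compressive reconstruction step can be illustrated with a generic sparse solver such as orthogonal matching pursuit; this is a standard textbook routine, not the thesis's specific algorithm:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x,
    the kind of solver used to reconstruct compressively sampled
    light-field / video measurements."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-8))   # exact with high probability
```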
NASA Astrophysics Data System (ADS)
Jang, Yoon Hee; Chung, Kyungwha; Quan, Li Na; Špačková, Barbora; Šípová, Hana; Moon, Seyoung; Cho, Won Joon; Shin, Hae-Young; Jang, Yu Jin; Lee, Ji-Eun; Kochuveedu, Saji Thomas; Yoon, Min Ji; Kim, Jihyeon; Yoon, Seokhyun; Kim, Jin Kon; Kim, Donghyun; Homola, Jiří; Kim, Dong Ha
2013-11-01
Nanopatterned 2-dimensional Au nanocluster arrays with controlled configuration are fabricated onto reconstructed nanoporous poly(styrene-block-vinylpyridine) inverse micelle monolayer films. Near-field coupling of localized surface plasmons is studied and compared for disordered and ordered core-centered Au NC arrays. Differences in evolution of the absorption band and field enhancement upon Au nanoparticle adsorption are shown. The experimental results are found to be in good agreement with theoretical studies based on the finite-difference time-domain method and rigorous coupled-wave analysis. The realized Au nanopatterns are exploited as substrates for surface-enhanced Raman scattering and integrated into Kretschmann-type SPR sensors, based on which unprecedented SPR-coupling-type sensors are demonstrated.
Event-Based Tone Mapping for Asynchronous Time-Based Image Sensor
Simon Chane, Camille; Ieng, Sio-Hoi; Posch, Christoph; Benosman, Ryad B.
2016-01-01
The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of event-based imager, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired, time-encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time-frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time. PMID:27642275
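A much simplified global operator in that spirit, updating its state one event at a time rather than per frame (the decay constant and mapping are invented for illustration, not taken from the paper):

```python
import math

class EventToneMapper:
    """Minimal global tone mapping operator that works event-by-event:
    it tracks a slowly forgetting min/max of the log-luminance stream
    and maps each new event's value into 8-bit display range."""
    def __init__(self, decay=1e-3):
        self.lo, self.hi, self.decay = None, None, decay

    def map(self, luminance):
        v = math.log(luminance)
        if self.lo is None:
            self.lo = self.hi = v
        # Let the bounds drift toward the middle (forget old content),
        # then expand them to cover the new value.
        mid = 0.5 * (self.lo + self.hi)
        self.lo += self.decay * (mid - self.lo)
        self.hi += self.decay * (mid - self.hi)
        self.lo, self.hi = min(self.lo, v), max(self.hi, v)
        span = max(self.hi - self.lo, 1e-9)
        return int(255 * (v - self.lo) / span)

tm = EventToneMapper()
for lum in [1.0, 10.0, 1e4, 0.5, 1e6]:      # >100 dB of scene range
    print(tm.map(lum), end=" ")              # stays within 0..255
```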
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-08-21
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method.
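The prediction side of such a tracker can be sketched as quaternion propagation followed by pinhole projection of a catalog direction; the conventions here (Hamilton product, boresight along +z, focal length in pixels) are assumptions of this sketch, not taken from the paper:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def propagate(q, omega, dt):
    """Predict the attitude quaternion over dt under body rate omega
    (rad/s) -- the time-update used to predict star image motion."""
    dq = np.concatenate([[1.0], 0.5 * np.asarray(omega) * dt])
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)

def project(q, star_dir, f_pix):
    """Rotate a catalog unit vector into the sensor frame and project
    it through a pinhole with focal length f_pix (pixels)."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    v = R @ star_dir
    return float(f_pix * v[0] / v[2]), float(f_pix * v[1] / v[2])

star = np.array([0.01, 0.0, 1.0])
star /= np.linalg.norm(star)                       # catalog direction
q = propagate(np.array([1.0, 0, 0, 0]), omega=[0, 0, 0.01], dt=0.1)
print(project(q, star, f_pix=3000.0))              # predicted centroid (px)
```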
NASA Astrophysics Data System (ADS)
Alghamdi, N. A.; Hankiewicz, J. H.; Anderson, N. R.; Stupic, K. F.; Camley, R. E.; Przybylski, M.; Żukrowski, J.; Celinski, Z.
2018-05-01
We investigate the use of Cu1-xZnxFe2O4 ferrites (0.60
Fluorescence enhancement of photoswitchable metal ion sensors
NASA Astrophysics Data System (ADS)
Sylvia, Georgina; Heng, Sabrina; Abell, Andrew D.
2016-12-01
Spiropyran-based fluorescence sensors are an ideal target for intracellular metal ion sensing, due to their biocompatibility, red emission frequency and photo-controlled reversible analyte binding for continuous signal monitoring. However, increasing the brightness of spiropyran-based sensors would extend their sensing capability for live-cell imaging. In this work we look to enhance the fluorescence of spiropyran-based sensors by incorporating an additional fluorophore into the sensor design. We report a spiropyran bearing a 5-membered monoazacrown with metal ion specificity, modified to incorporate the pyrene fluorophore. The effect of the N-indole pyrene modification on the behavior of the spiropyran molecule is explored through absorbance and fluorescence emission characterization. This first-generation sensor provides insight into the fluorescence enhancement of spiropyran molecules.
Fluorescence Intensity- and Lifetime-Based Glucose Sensing Using Glucose/Galactose-Binding Protein
Pickup, John C.; Khan, Faaizah; Zhi, Zheng-Liang; Coulter, Jonathan; Birch, David J. S.
2013-01-01
We review progress in our laboratories toward developing in vivo glucose sensors for diabetes that are based on fluorescence labeling of glucose/galactose-binding protein. Measurement strategies have included both monitoring glucose-induced changes in fluorescence resonance energy transfer and labeling with the environmentally sensitive fluorophore, badan. Measuring fluorescence lifetime rather than intensity has particular potential advantages for in vivo sensing. A prototype fiber-optic-based glucose sensor using this technology is being tested.

Fluorescence is one of the major techniques for achieving a continuous and noninvasive glucose sensor for diabetes. In this article, a highly sensitive nanostructured sensor is developed to detect extremely small amounts of aqueous glucose by applying fluorescence resonance energy transfer (FRET). A one-pot method is applied to produce dextran-fluorescein isothiocyanate (FITC)-conjugated mesoporous silica nanoparticles (MSNs), which afterward interact with tetramethylrhodamine isothiocyanate (TRITC)-labeled concanavalin A (Con A) to form the FRET nanoparticles (FITC-dextran-Con A-TRITC@MSNs). The nanostructured glucose sensor is then formed via the self-assembly of the FRET nanoparticles on a transparent, flexible, and biocompatible substrate, e.g., poly(dimethylsiloxane). Our results indicate the diameter of the MSNs is 60 ± 5 nm. The difference in the images before and after adding 20 μl of glucose (0.10 mmol/liter) to the FRET sensor can be detected in less than 2 min with a confocal laser scanning microscope. The correlation between the ratio of fluorescence intensity, I(donor)/I(acceptor), of the FRET sensor and the concentration of aqueous glucose in the range of 0.04–4 mmol/liter has been investigated; a linear relationship is found. Furthermore, the durability of the nanostructured FRET sensor was evaluated for 5 days. In addition, the recorded images can be converted to digital images by obtaining the pixels from the resulting matrix using Matlab image processing functions. We have also studied the in vitro cytotoxicity of the device. The nanostructured FRET sensor may provide an alternative method to help patients manage the disease continuously. PMID:23439161
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have changed the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on uncooled detector arrays that have made their application in UGS possible in terms of cost and power consumption. Visible light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
Ultra-sensitive fluorescent imaging-biosensing using biological photonic crystals
NASA Astrophysics Data System (ADS)
Squire, Kenny; Kong, Xianming; Wu, Bo; Rorrer, Gregory; Wang, Alan X.
2018-02-01
Optical biosensing is a growing area of research known for its low limits of detection. Among optical sensing techniques, fluorescence detection is one of the most established and prevalent. Fluorescence imaging is an optical biosensing modality that exploits the sensitivity of fluorescence in an easy-to-use process. Fluorescence imaging allows a user to place a sample on a sensor and use an imager, such as a camera, to collect the results. The image can then be processed to determine the presence of the analyte. Fluorescence imaging is appealing because it can be performed with as little as a light source, a camera and a data processor, making it ideal for non-trained personnel without any expensive equipment. Fluorescence imaging sensors generally employ an immunoassay procedure to selectively trap analytes such as antigens or antibodies. When the analyte is present, the sensor fluoresces, thus transducing the chemical reaction into an optical signal capable of being imaged. Enhancement of this fluorescence leads to an enhancement in the detection capabilities of the sensor. Diatoms are unicellular algae with a biosilica shell called a frustule. The frustule is porous with periodic nanopores, making it a biological photonic crystal. Additionally, the porous nature of the frustule provides a large surface area capable of hosting multiple analyte binding sites. In this paper, we fabricate a diatom-based ultra-sensitive fluorescence imaging biosensor capable of detecting the antibody mouse immunoglobulin down to a concentration of 1 nM. The measured signal shows an enhancement of 6× when compared to sensors fabricated without diatoms.
Hardware-based image processing for high-speed inspection of grains
USDA-ARS?s Scientific Manuscript database
A high-speed, low-cost, image-based sorting device was developed to detect and separate grains with slight color differences and small defects on grains The device directly combines a complementary metal–oxide–semiconductor (CMOS) color image sensor with a field-programmable gate array (FPGA) which...
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability.
Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U
2015-03-06
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle.
An Ultra-Low Power CMOS Image Sensor with On-Chip Energy Harvesting and Power Management Capability
Cevik, Ismail; Huang, Xiwei; Yu, Hao; Yan, Mei; Ay, Suat U.
2015-01-01
An ultra-low power CMOS image sensor with on-chip energy harvesting and power management capability is introduced in this paper. The photodiode pixel array can not only capture images but also harvest solar energy. As such, the CMOS image sensor chip is able to switch between imaging and harvesting modes towards self-power operation. Moreover, an on-chip maximum power point tracking (MPPT)-based power management system (PMS) is designed for the dual-mode image sensor to further improve the energy efficiency. A new isolated P-well energy harvesting and imaging (EHI) pixel with very high fill factor is introduced. Several ultra-low power design techniques such as reset and select boosting techniques have been utilized to maintain a wide pixel dynamic range. The chip was designed and fabricated in a 1.8 V, 1P6M 0.18 µm CMOS process. Total power consumption of the imager is 6.53 µW for a 96 × 96 pixel array with 1 V supply and 5 fps frame rate. Up to 30 μW of power could be generated by the new EHI pixels. The PMS is capable of providing 3× the power required during imaging mode with 50% efficiency allowing energy autonomous operation with a 72.5% duty cycle. PMID:25756863
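A simplified energy balance shows how a duty cycle of this order arises from the abstract's numbers; it ignores PMS overheads and mode-switching costs, so it only approximates the reported 72.5%:

```python
# Simplified energy balance for the dual-mode (imaging/harvesting) pixel
# array: over one cycle, energy harvested while idle must cover energy
# spent while imaging. Numbers from the abstract; the model ignores PMS
# overheads, so it only approximates the reported 72.5%.
P_IMG = 6.53e-6      # imaging-mode consumption (W)
P_HARV = 30e-6       # harvested power in harvesting mode (W)
EFF = 0.5            # PMS conversion efficiency

# d * P_IMG = (1 - d) * P_HARV * EFF  ->  solve for imaging duty cycle d
d = (P_HARV * EFF) / (P_HARV * EFF + P_IMG)
print(f"imaging duty cycle ~ {d:.1%}")   # ~69.7%
```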
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm based on the geometric information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but the software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.
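The authors' algorithm is not given in the abstract; as an example of an FFT-free, geometry-driven time-domain reconstruction, here is a minimal delay-and-sum backprojection on toy data:

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, grid_xy, c, fs):
    """Time-domain backprojection: for each image point, sum each
    sensor's signal at the acoustic time of flight. No FFT involved.
    signals: (n_sensors, n_samples); sensor_xy/grid_xy in metres."""
    img = np.zeros(len(grid_xy))
    for s, pos in zip(signals, sensor_xy):
        d = np.linalg.norm(grid_xy - pos, axis=1)
        idx = np.clip((d / c * fs).astype(int), 0, s.size - 1)
        img += s[idx]
    return img

# Toy data: point source at (0, 0.01) m, ring of 32 sensors, c = 1540 m/s.
c, fs, n = 1540.0, 20e6, 32
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
sensors = 0.02 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
src = np.array([0.0, 0.01])
signals = np.zeros((n, 2048))
for i, p in enumerate(sensors):
    t = np.linalg.norm(p - src) / c
    signals[i, int(t * fs)] = 1.0                 # idealized pressure pulse

grid = np.array([[0.0, 0.01], [0.0, -0.01]])      # on / off the source
print(delay_and_sum(signals, sensors, grid, c, fs))  # large vs small value
```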
NASA Astrophysics Data System (ADS)
Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon
2008-02-01
This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account. Following the market trend and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, soon, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate for describing light propagation inside the sensor because of diffraction effects. We therefore adopt a more fundamental description to take these diffraction effects into account: we use Maxwell-equation-based modeling to compute the propagation of light, solved with a software package built on an FDTD (Finite Difference Time Domain) engine. We present in this article the complete methodology of this modeling: on the one hand, incoherent plane waves are propagated to approximate a diffuse-like source representative of product use; on the other hand, periodic boundary conditions are used to limit the size of the simulated model and both memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
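The essence of an FDTD engine is the leapfrog Yee update of Maxwell's curl equations; a 1-D free-space toy (the paper's solver is 3-D, with pixel geometry and materials) looks like this:

```python
import numpy as np

# Minimal 1-D FDTD (Yee) loop in normalized units, free space, with the
# magic time step (Courant number 1). A toy illustration of the kind of
# Maxwell-equation solver used where ray tracing fails at small pixels.
nz, nt = 400, 900
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
for t in range(nt):
    hy += np.diff(ez)                           # update H from curl E
    ez[1:-1] += np.diff(hy)                     # update E from curl H
    ez[50] += np.exp(-((t - 60) / 20.0) ** 2)   # soft Gaussian source
print(float(np.abs(ez).max()))   # pulse propagating along the grid
```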
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
Microfabricated optically pumped magnetometer arrays for biomedical imaging
NASA Astrophysics Data System (ADS)
Perry, A. R.; Sheng, D.; Krzyzewski, S. P.; Geller, S.; Knappe, S.
2017-02-01
Optically-pumped magnetometers have demonstrated magnetic field measurements as precise as the best superconducting quantum interference device magnetometers. Our group develops miniature alkali atom-based magnetic sensors using microfabrication technology. Our sensors do not require cryogenic cooling, and can be positioned very close to the sample, making these sensors an attractive option for development in the medical community. We will present our latest chip-scale optically-pumped gradiometer developed for array applications to image magnetic fields from the brain noninvasively. These developments should lead to improved spatial resolution, and potentially sensitive measurements in unshielded environments.
Method and apparatus for distinguishing actual sparse events from sparse event false alarms
Spalding, Richard E.; Grotbeck, Carter L.
2000-01-01
Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct the split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual sensor systems to effect a false target detection capability, thus significantly reducing system complexity and cost.
Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites
NASA Astrophysics Data System (ADS)
Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong
2010-10-01
This paper focuses on radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that, instead of a reference image and a search image, the individual bands of a multispectral image were tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites is not possible. To overcome this limitation, a cross-comparison between AWiFS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the two sensors within 30 minutes of each other. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between the sensors.
Learning receptor positions from imperfectly known motions
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1990-01-01
An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. The study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.
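A common concrete instance of the "unique sensor trace" mentioned above is the photo-response non-uniformity (PRNU) fingerprint. The sketch below shows the usual residual-averaging scheme under simplifying assumptions: a Gaussian filter stands in for the wavelet denoiser of the forensics literature, and the function names are my own.

```python
# Sketch of PRNU-style camera identification: average denoising residuals
# of many images from one camera to estimate its sensor noise fingerprint,
# then correlate a query image's residual against that fingerprint.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img, sigma=1.5):
    img = np.asarray(img, dtype=float)
    return img - gaussian_filter(img, sigma)      # high-frequency noise part

def fingerprint(images):
    return np.mean([residual(im) for im in images], axis=0)

def match_score(img, fp):
    # Normalized correlation; high values suggest the same sensor.
    return np.corrcoef(residual(img).ravel(), fp.ravel())[0, 1]
```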
Intelligent Network-Centric Sensors Development Program
2012-07-31
[Abstract garbled in extraction. Recoverable fragments describe image sensor configurations (360-degree cone LWIR, MWIR and SWIR sensors, plus a video configuration), an ontological reasoning process for matching sensor systems to algorithms, and a discussion attributing active-imaging nonuniformity to aberration-induced coherent imaging effects and to the specular nature of active imaging.]
Pc-based car license plate reading
NASA Astrophysics Data System (ADS)
Tanabe, Katsuyoshi; Marubayashi, Eisaku; Kawashima, Harumi; Nakanishi, Tadashi; Shio, Akio
1994-03-01
A PC-based car license plate recognition system has been developed. The system recognizes Chinese characters and Japanese phonetic hiragana characters as well as six digits on Japanese license plates. The system consists of a CCD camera, vehicle sensors, a strobe unit, a monitoring center, and an i486-based PC. The PC includes in its extension slots a vehicle detector board, a strobe emitter board, and an image grabber board. When a passing vehicle is detected by the vehicle sensors, the strobe emits a pulse of light, synchronized with the instant the vehicle image is frozen on the image grabber board. The recognition process is composed of three steps: image thresholding, character region extraction, and matching-based character recognition. The recognition software can handle obscured characters. Experimental results for hundreds of outdoor images showed high recognition rates with relatively short processing times. The results confirmed that the system is applicable to a wide variety of applications such as automatic vehicle identification and travel time measurement.
Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model
Lemeshewsky, G.P.; Schowengerdt, R.A.; ,
2001-01-01
The enhanced thematic mapper (plus) (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV instead of 120-m, and a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher spatial resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60-m, given degraded (from 30-m to 60-m) spatial resolution input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate coarser resolution sensor imagery from higher resolution imagery. For comparison, the second degradation method is simply Gaussian lowpass filtering and subsampling, wherein the Gaussian filter approximates the full width at half maximum amplitude characteristics of the TF-based spatial filter. Two NNs of fixed size (that is, fixed numbers of weights and processing elements) were trained separately with the degraded resolution data, and the sharpening results compared. The comparison evaluates the relative influence of the degradation technique employed and whether or not it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvement for the sensor model-based technique. Further evaluation using a higher resolution reference image and strict application of the sensor model to the data is recommended.
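A minimal sketch of this degrade-train-apply scheme follows, with scikit-learn's MLPRegressor standing in for the paper's feedforward NN; the band arrays, the Gaussian stand-in for the sensor transfer function, and the network size are my assumptions.

```python
# Train at coarse scale: predict 60-m TIR from bands 7, 5, 2 degraded from
# 30 m to 60 m, then apply the net to full-resolution bands for ~30-m TIR.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from sklearn.neural_network import MLPRegressor

def degrade(band, factor=2, sigma=1.0):
    return zoom(gaussian_filter(band, sigma), 1.0 / factor, order=1)

def sharpen_tir(b7, b5, b2, tir60):
    """b7, b5, b2: 30-m bands; tir60: TIR band on the degraded (60-m) grid."""
    coarse = np.stack([degrade(b) for b in (b7, b5, b2)], axis=-1)
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)
    nn.fit(coarse.reshape(-1, 3), tir60.ravel())        # learn coarse mapping
    full = np.stack([b7, b5, b2], axis=-1)
    return nn.predict(full.reshape(-1, 3)).reshape(b7.shape)  # sharpened TIR
```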
Framework of passive millimeter-wave scene simulation based on material classification
NASA Astrophysics Data System (ADS)
Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun
2006-05-01
Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful implements in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. An efficient way to predict the performance of a PMMW sensor and apply it to a system is to test it SoftWare-In-the-Loop (SWIL). PMMW scene simulation is a key component in implementing such a simulator. However, no commercial off-the-shelf tool is available for constructing the PMMW scene simulation, and only a few studies have addressed this technology. We have studied PMMW scene simulation methods to develop a PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensor modeling. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the relevant PMMW phenomenology: atmospheric propagation and emission including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed using the surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are simulated on the output scene. We have finished the design of the simulator framework and are working on the implementation details. As a tentative result, a flight observation was simulated under specific conditions. After completing the implementation, we plan to increase the reliability of the simulation by collecting data with actual PMMW sensors. With a reliable PMMW scene simulator, it will be more efficient to apply PMMW sensors to various applications.
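The radiometric core of such a simulation can be stated compactly: a surface's apparent brightness temperature blends its physical temperature with the reflected sky temperature through its emissivity. A minimal sketch, with illustrative rather than calibrated values:

```python
# Apparent PMMW brightness temperature of a surface: emission plus
# reflection of the (cold) sky, weighted by emissivity.
def brightness_temperature(emissivity, t_physical, t_sky):
    return emissivity * t_physical + (1.0 - emissivity) * t_sky

# Metal (low emissivity) mirrors the cold sky and appears cold:
print(brightness_temperature(0.05, 290.0, 60.0))   # ~71.5 K
# Soil or vegetation (high emissivity) appears near its physical temperature:
print(brightness_temperature(0.90, 290.0, 60.0))   # ~267 K
```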
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on manual selection of ground control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of GF4 PMS spatial resolution.
Single-exposure quantitative phase imaging in color-coded LED microscopy.
Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin
2017-04-03
We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
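If, as the abstract implies, each color pixel responds linearly to all three LED channels, the leakage can be undone with a calibrated 3×3 crosstalk matrix. The sketch below assumes such a linear model; the matrix entries are placeholders for an actual calibration, not values from the paper.

```python
# Color-leakage unmixing: measured RGB = M @ true channel intensities,
# so inverting the calibrated crosstalk matrix M recovers the channels.
import numpy as np

M = np.array([[0.92, 0.06, 0.02],    # red-pixel response to R/G/B LEDs
              [0.05, 0.90, 0.05],    # green-pixel response
              [0.03, 0.08, 0.89]])   # blue-pixel response

def unmix(rgb_image):
    """rgb_image: (H, W, 3) measured values -> (H, W, 3) unmixed channels."""
    h, w, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3).T
    return (np.linalg.inv(M) @ flat).T.reshape(h, w, 3)
```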
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
Miniaturized optical wavelength sensors
NASA Astrophysics Data System (ADS)
Kung, Helen Ling-Ning
Recently, semiconductor processing technology has been applied to the miniaturization of optical wavelength sensors. Compact sensors enable new applications such as integrated diode-laser wavelength monitors and frequency lockers, portable chemical and biological detection, and portable and adaptive hyperspectral imaging arrays. Small sensing systems have trade-offs between resolution, operating range, throughput, multiplexing and complexity. We have developed a new wavelength sensing architecture that balances these parameters for applications involving hyperspectral imaging spectrometer arrays. In this thesis we discuss and demonstrate two new wavelength-sensing architectures whose single-pixel designs can easily be extended into spectrometer arrays. The first class of devices is based on sampling a standing wave: these devices measure the wavelength-dependent period of optical standing waves formed by the interference of forward and reflected waves at a mirror. We fabricated two different devices based on this principle. The first device is a wavelength monitor, which measures the wavelength and power of a monochromatic source. The second device is a spectrometer that can also act as a selective spectral coherence sensor. The spectrometer contains a large-displacement piston-motion MEMS mirror and a thin GaAs photodiode flip-chip bonded to a quartz substrate. The performance of this spectrometer is similar to that of a Michelson in resolution, operating range, throughput and multiplexing, but with the added advantages of fewer components and a one-dimensional architecture. The second class of devices is based on the Talbot self-imaging effect. The Talbot effect occurs when a periodic object is illuminated with a spatially coherent wave: periodically spaced self-images are formed behind the object, with a spacing proportional to the wavelength of the incident light. We discuss and demonstrate how this effect can be used for spectroscopy. In the conclusion we compare these two new miniaturized spectrometer architectures to existing miniaturized spectrometers. We believe that the combination of miniaturized wavelength sensors and smart processing should facilitate the development of real-time, adaptive and portable sensing systems.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-10
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
NASA Astrophysics Data System (ADS)
Saeb Gilani, T.; Villringer, C.; Zhang, E.; Gundlach, H.; Buchmann, J.; Schrader, S.; Laufer, J.
2018-02-01
Tomographic photoacoustic (PA) images acquired using a Fabry-Perot (FP) based scanner offer high resolution and image fidelity but can result in long acquisition times due to the need for raster scanning. To reduce the acquisition times, a parallelised camera-based PA signal detection scheme is developed. The scheme is based on using an sCMOS camera and FP interferometer (FPI) sensors with high homogeneity of optical thickness. PA signals were acquired using the camera-based setup and the signal-to-noise ratio (SNR) was measured. A comparison is made between the SNR of PA signals detected using 1) a photodiode in a conventional raster-scanning detection scheme and 2) an sCMOS camera in the parallelised detection scheme. The results show that the parallelised interrogation scheme has the potential to provide high-speed PA imaging.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
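A minimal sketch of the optical-flow odometry step is below, using OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker; the median-flow translation estimate and the pixels-per-metre scale factor are my simplifying assumptions (in practice the scale follows from the known camera height and intrinsics).

```python
# Frame-to-frame translation from sparse optical flow of a downward camera.
import cv2
import numpy as np

def frame_translation(prev_gray, cur_gray, px_per_metre=500.0):
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return np.zeros(2)                      # no trackable features
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
    good = status.ravel() == 1
    flow = (p1[good] - p0[good]).reshape(-1, 2)
    return np.median(flow, axis=0) / px_per_metre   # metres moved (x, y)
```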
Sensor Webs: Autonomous Rapid Response to Monitor Transient Science Events
NASA Technical Reports Server (NTRS)
Mandl, Dan; Grosvenor, Sandra; Frye, Stu; Sherwood, Robert; Chien, Steve; Davies, Ashley; Cichy, Ben; Ingram, Mary Ann; Langley, John; Miranda, Felix
2005-01-01
To better understand how physical phenomena, such as volcanic eruptions, evolve over time, multiple sensor observations over the duration of the event are required. Using sensor web approaches that integrate original detections by in-situ sensors and global-coverage, lower-resolution, on-orbit assets with automated rapid response observations from high resolution sensors, more observations of significant events can be made with increased temporal, spatial, and spectral resolution. This paper describes experiments using Earth Observing 1 (EO-1) along with other space and ground assets to implement progressive mission autonomy to identify, locate and image with high resolution instruments phenomena such as wildfires, volcanoes, floods and ice breakup. The software that plans, schedules and controls the various satellite assets are used to form ad hoc constellations which enable collaborative autonomous image collections triggered by transient phenomena. This software is both flight and ground based and works in concert to run all of the required assets cohesively and includes software that is model-based, artificial intelligence software.
Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V
2015-02-01
Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
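The image-driven optimization can be illustrated with a simple modal hill climb: perturb one corrector mode at a time and keep whichever coefficient maximizes an image-quality metric on freshly acquired data. This is only a sketch of the idea; the intensity-squared metric, step size, and corrector interface are assumptions, not the paper's fast modal algorithm.

```python
# Wavefront-sensorless modal optimization by coordinate-wise hill climbing.
import numpy as np

def sharpness(img):
    return float(np.sum(np.asarray(img, dtype=float) ** 2))  # image metric

def optimize_modes(acquire, apply_modes, n_modes=10, step=0.1, sweeps=3):
    """acquire() returns an image; apply_modes(c) drives the corrector."""
    coeffs = np.zeros(n_modes)
    for _ in range(sweeps):
        for m in range(n_modes):
            scores = {}
            for delta in (-step, 0.0, step):
                trial = coeffs.copy()
                trial[m] += delta
                apply_modes(trial)
                scores[delta] = sharpness(acquire())
            coeffs[m] += max(scores, key=scores.get)  # keep best perturbation
    return coeffs
```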
NASA Astrophysics Data System (ADS)
Rizk, Charbel G.; Lin, Joseph H.; Kennerly, Stephen W.; Pouliquen, Philippe; Goldberg, Arnold C.; Andreou, Andreas G.
2012-06-01
The advanced imagers team at JHU APL and ECE has been advocating and developing a new class of sensor systems that address key system level performance bottlenecks but are sufficiently flexible to allow optimization of associated cost and size, weight, and power (SWaP) for different applications and missions. A primary component of this approach is the innovative system-on-chip architecture: Flexible Readout and Integration Sensors (FRIS). This paper reports on the development and testing of a prototype based on the FRIS concept. It will include the architecture, a summary of test results to date relevant to the hostile fire detection challenge. For this application, this prototype demonstrates the potential for this concept to yield the smallest SWaP and lowest cost imaging solution with a low false alarm rate. In addition, a specific solution based on the visible band is proposed. Similar performance and SWaP gains are expected for other wavebands such as SWIR, MWIR, and LWIR and/or other applications like persistent surveillance for critical infrastructure and border control in addition to unattended sensors.
Finite element model for MOI applications using A-V formulation
NASA Astrophysics Data System (ADS)
Xuan, L.; Shanker, B.; Udpa, L.; Shih, W.; Fitzpatrick, G.
2001-04-01
Magneto-optic imaging (MOI) is a relatively new sensor application that extends bubble memory technology to NDT and produces easy-to-interpret, real-time analog images. MOI systems use a magneto-optic (MO) sensor to produce analog images of magnetic flux leakage from surface and subsurface defects. The instrument's capability in detecting the relatively weak magnetic fields associated with subsurface defects depends on the sensitivity of the magneto-optic sensor. The availability of a theoretical model that can simulate MOI system performance is extremely important for optimization of the MOI sensor and hardware system. A nodal finite element model based on the magnetic vector potential formulation has been developed for simulating the MOI phenomenon. This model has been used to predict the magnetic fields in a simple test geometry with corrosion dome defects. In the case of test samples with multiple discontinuities, a more robust model using the magnetic vector potential A and the electric scalar potential V is required. In this paper, a finite element model based on the A-V formulation is developed to model complex circumferential cracks under aluminum rivets in dimpled countersinks.
Research-grade CMOS image sensors for demanding space applications
NASA Astrophysics Data System (ADS)
Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre
2004-06-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for increasingly demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk presents the existing and foreseen ways to reach high-level electro-optical performance with CIS. The development of CIS prototypes built using an imaging CMOS process and of devices based on improved designs is presented.
Research-grade CMOS image sensors for demanding space applications
NASA Astrophysics Data System (ADS)
Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre
2017-11-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for increasingly demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk presents the existing and foreseen ways to reach high-level electro-optical performance with CIS. The development of CIS prototypes built using an imaging CMOS process and of devices based on improved designs is presented.
Along-Track Reef Imaging System (ATRIS)
Brock, John; Zawada, Dave
2006-01-01
"Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.
Luminescent sensing and imaging of oxygen: fierce competition to the Clark electrode.
Wolfbeis, Otto S
2015-08-01
Luminescence-based sensing schemes for oxygen have experienced a fast growth and are in the process of replacing the Clark electrode in many fields. Unlike electrodes, sensing is not limited to point measurements via fiber optic microsensors, but includes additional features such as planar sensing, imaging, and intracellular assays using nanosized sensor particles. In this essay, I review and discuss the essentials of (i) common solid-state sensor approaches based on the use of luminescent indicator dyes and host polymers; (ii) fiber optic and planar sensing schemes; (iii) nanoparticle-based intracellular sensing; and (iv) common spectroscopies. Optical sensors are also capable of multiple simultaneous sensing (such as O2 and temperature). Sensors for O2 are produced nowadays in large quantities in industry. Fields of application include sensing of O2 in plant and animal physiology, in clinical chemistry, in marine sciences, in the chemical industry and in process biotechnology. © 2015 The Author. Bioessays published by WILEY Periodicals, Inc.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na
2014-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by the improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
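For readers unfamiliar with the reservoir at the heart of the model, here is a minimal ESN regressor; the reservoir size, scalings, and ridge term are arbitrary illustrative choices, and the GSO hyperparameter optimization is not reproduced.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

class ESN:
    def __init__(self, n_in, n_res=200, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.w = w * (rho / np.max(np.abs(np.linalg.eigvals(w))))  # set spectral radius
        self.wout = None

    def _states(self, u):
        x, states = np.zeros(len(self.w)), []
        for ut in u:                        # drive reservoir with input sequence
            x = np.tanh(self.win @ ut + self.w @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, u, y, ridge=1e-6):        # ridge-regression readout
        s = self._states(u)
        self.wout = np.linalg.solve(s.T @ s + ridge * np.eye(s.shape[1]), s.T @ y)

    def predict(self, u):
        return self._states(u) @ self.wout
```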
a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by an omnibus test statistic. Secondly, difference images between any two acquisition times are obtained by the Rj test statistic. In the last step, a generalized Gaussian mixture model (GGMM) is used to obtain the time-series change detection maps. To verify the effectiveness of the proposed method, we carried out change detection experiments using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. Results show that the proposed method can detect time-series change from different sensors.
Retina-like sensor image coordinates transformation and display
NASA Astrophysics Data System (ADS)
Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu
2015-03-01
For a new kind of retina-like sensor camera, the image acquisition, coordinate transformation and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of a retina-like sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in Visual Studio 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation and interpolation.
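The polar-to-Cartesian display step with sub-pixel (bilinear) weights can be sketched as follows; a uniform polar grid is assumed here, whereas the real sensor's ring spacing is non-uniform, and all sizes are illustrative.

```python
# Resample a (rings x sectors) polar image onto a Cartesian display grid
# with bilinear interpolation; the angular axis wraps around.
import numpy as np

def polar_to_cartesian(polar, out_size=256):
    n_r, n_t = polar.shape
    ys, xs = np.mgrid[0:out_size, 0:out_size].astype(float)
    c = (out_size - 1) / 2.0
    r = np.hypot(xs - c, ys - c) * (n_r - 1) / c
    t = (np.arctan2(ys - c, xs - c) % (2 * np.pi)) * n_t / (2 * np.pi)
    r0, t0 = np.floor(r).astype(int), np.floor(t).astype(int) % n_t
    fr, ft = r - np.floor(r), t - np.floor(t)
    r1 = np.clip(r0 + 1, 0, n_r - 1)
    r0 = np.clip(r0, 0, n_r - 1)
    t1 = (t0 + 1) % n_t                     # angular neighbour wraps around
    out = ((1 - fr) * (1 - ft) * polar[r0, t0] + fr * (1 - ft) * polar[r1, t0]
           + (1 - fr) * ft * polar[r0, t1] + fr * ft * polar[r1, t1])
    out[r > n_r - 1] = 0.0                  # outside the sensor's outer ring
    return out
```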
Distance-Dependent Multimodal Image Registration for Agriculture Tasks
Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad
2015-01-01
Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors and representing it compactly by regressing the distance-dependent coefficients as functions of distance. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so, we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
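A minimal sketch of the DDTM idea under my own naming: calibrate a homography at several known distances, fit each of its 8 free coefficients as a polynomial in distance, and evaluate the fits to get a transform at any intermediate distance.

```python
# Distance-dependent transformation matrix (DDTM) via per-coefficient
# polynomial regression over calibration distances.
import numpy as np

def fit_ddtm(distances, homographies, deg=2):
    """homographies: 3x3 projective matrices normalized so H[2, 2] == 1."""
    coeffs = np.array([H.ravel()[:8] for H in homographies])   # (n, 8)
    return [np.polyfit(distances, coeffs[:, k], deg) for k in range(8)]

def ddtm_at(coeff_polys, distance):
    vals = [np.polyval(p, distance) for p in coeff_polys]
    return np.array(vals + [1.0]).reshape(3, 3)    # rebuild the homography
```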
Non-contact capacitance based image sensing method and system
Novak, J.L.; Wiczer, J.J.
1994-01-25
A system and a method for imaging desired surfaces of a workpiece are described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.
Non-contact capacitance based image sensing method and system
Novak, J.L.; Wiczer, J.J.
1995-01-03
A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-05-02
Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-01-01
Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324
Preliminary investigations of active pixel sensors in Nuclear Medicine imaging
NASA Astrophysics Data System (ADS)
Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.
2009-06-01
Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker, with 525×525 25 μm square pixels, was coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via an FPGA-based DAQ and optical link enabling imaging rates of 10 frames/s. System noise was measured to be >100 e− and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ~80 μm and the system spatial resolution measured with a slit was ~450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiating between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ~25 e−. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small, as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.
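The differentiation between fixed-pattern and statistical (temporal) noise mentioned for OPIC commonly exploits two facts: averaging many frames isolates the fixed pattern, while differencing two frames cancels it. A generic sketch of that split (the frame-stack shape and units are my assumptions):

```python
# Split sensor noise into fixed-pattern and temporal components from a
# stack of dark frames captured at identical settings.
import numpy as np

def noise_split(frames):
    """frames: (n_frames, H, W) array of dark frames."""
    fpn_map = frames.mean(axis=0)          # per-pixel fixed-pattern estimate
    fixed_pattern = fpn_map.std()          # spatial spread = FPN
    temporal = (frames[1] - frames[0]).std() / np.sqrt(2)  # FPN cancels out
    return fixed_pattern, temporal
```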
Lensless high-resolution photoacoustic imaging scanner for in vivo skin imaging
NASA Astrophysics Data System (ADS)
Ida, Taiichiro; Iwazaki, Hideaki; Omuro, Toshiyuki; Kawaguchi, Yasushi; Tsunoi, Yasuyuki; Kawauchi, Satoko; Sato, Shunichi
2018-02-01
We previously launched a high-resolution photoacoustic (PA) imaging scanner based on a unique lensless design for in vivo skin imaging. The design, imaging algorithm and characteristics of the system are described in this paper. Neither an optical lens nor an acoustic lens is used in the system. In the imaging head, four sensor elements are arranged quadrilaterally, and by checking the phase differences of the PA waves detected by these four sensors, a set of PA signals originating only from a chromophore located on the sensor center axis is extracted for constructing an image. A phantom study using a carbon fiber showed a depth-independent horizontal resolution of 84.0 ± 3.5 µm, and the scan direction-dependent variation of PA signals was about ±20%. We then performed imaging of vasculature phantoms: patterns of red ink lines with widths of 100 or 200 µm formed in an acrylic block copolymer. The patterns were visualized with high contrast, showing the capability for imaging arterioles and venules in the skin. Vasculatures in rat burn models and healthy human skin were also clearly visualized in vivo.
3D space positioning and image feature extraction for workpiece
NASA Astrophysics Data System (ADS)
Ye, Bing; Hu, Yi
2008-03-01
An optical system for measuring the 3D parameters of a specific area of a workpiece is presented and discussed in this paper. A number of CCD image sensors are employed to construct the 3D coordinate system for the measured area. The CCD image sensor monitoring the target is used to lock onto the measured workpiece when it enters the field of view. The other sensors, which are placed symmetrically with laser beam scanners, measure the appearance of the workpiece and its characteristic parameters. The paper establishes the target image segmentation and image feature extraction algorithms used to lock onto the target; based on the geometric similarity of object characteristics, rapid locking of the target can be realized. When the line laser beam scans the tested workpiece, a number of images are extracted at equal time intervals and the overlapping images are processed to complete the image reconstruction and obtain the 3D image information. From the 3D coordinate reconstruction model, the 3D characteristic parameters of the tested workpiece are obtained. Experimental results are provided in the paper.
Performance study of double SOI image sensors
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Arai, Y.; Fujita, Y.; Hamasaki, R.; Hara, K.; Ikegami, Y.; Kurachi, I.; Nishimura, R.; Ono, S.; Tauchi, K.; Tsuboyama, T.; Yamada, M.
2018-02-01
Double silicon-on-insulator (DSOI) sensors composed of two thin silicon layers and one thick silicon layer have been developed since 2011. The thick substrate consists of high resistivity silicon with p-n junctions while the thin layers are used as SOI-CMOS circuitry and as shielding to reduce the back-gate effect and crosstalk between the sensor and the circuitry. In 2014, a high-resolution integration-type pixel sensor, INTPIX8, was developed based on the DSOI concept. This device is fabricated using a Czochralski p-type (Cz-p) substrate in contrast to a single SOI (SSOI) device having a single thin silicon layer and a Float Zone p-type (FZ-p) substrate. In the present work, X-ray spectra of both DSOI and SSOI sensors were obtained using an Am-241 radiation source at four gain settings. The gain of the DSOI sensor was found to be approximately three times that of the SSOI device because the coupling capacitance is reduced by the DSOI structure. An X-ray imaging demonstration was also performed and high spatial resolution X-ray images were obtained.
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-04-11
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.
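The per-point correspondence step reduces to a collinearity projection: a laser point expressed in the panoramic camera frame maps to an equirectangular pixel through its spherical angles. A sketch under the assumption of a known scanner-to-camera extrinsic (R, t) and an equirectangular panorama:

```python
# Project a 3D laser point into an equirectangular panoramic image.
import numpy as np

def point_to_panorama(p_scanner, R, t, width=8192, height=4096):
    x, y, z = R @ p_scanner + t                      # point in camera frame
    lon = np.arctan2(x, z)                           # longitude in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm((x, y, z)))   # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width          # pixel column
    v = (0.5 - lat / np.pi) * height                 # pixel row
    return u, v
```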
Cross calibration of the Landsat-7 ETM+ and EO-1 ALI sensor
Chander, G.; Meyer, D.J.; Helder, D.L.
2004-01-01
As part of the Earth Observer 1 (EO-1) Mission, the Advanced Land Imager (ALI) demonstrates a potential technological direction for Landsat Data Continuity Missions. To evaluate ALI's capabilities in this role, a cross-calibration methodology has been developed using image pairs from the Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) and EO-1 (ALI) to verify the radiometric calibration of ALI with respect to the well-calibrated L7 ETM+ sensor. Results have been obtained using two different approaches. The first approach involves calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors. The second approach uses vicarious calibration techniques to compare the predicted top-of-atmosphere radiance derived from ground reference data collected during the overpass to the measured radiance obtained from the sensor. The results indicate that the relative gains of the ALI sensor chip assemblies agree with the ETM+ visible and near-infrared bands to within 2% and the shortwave infrared bands to within 4%.
Imaging sensor constellation for tomographic chemical cloud mapping.
Cosofret, Bogdan R; Konno, Daisei; Faghfouri, Aram; Kindle, Harry S; Gittins, Christopher M; Finson, Michael L; Janov, Tracy E; Levreault, Mark J; Miyashiro, Rex K; Marinelli, William J
2009-04-01
A sensor constellation capable of determining the location and detailed concentration distribution of chemical warfare agent simulant clouds has been developed and demonstrated on government test ranges. The constellation is based on the use of standoff passive multispectral infrared imaging sensors to make column density measurements through the chemical cloud from two or more locations around its periphery. A computed tomography inversion method is employed to produce a 3D concentration profile of the cloud from the 2D line density measurements. We discuss the theoretical basis of the approach and present results of recent field experiments where controlled releases of chemical warfare agent simulants were simultaneously viewed by three chemical imaging sensors. Systematic investigations of the algorithm using synthetic data indicate that for complex functions, 3D reconstruction errors are less than 20% even in the case of a limited three-sensor measurement network. Field data results demonstrate the capability of the constellation to determine 3D concentration profiles that account for ~86% of the total known mass of material released.
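The computed-tomography inversion of line densities can be illustrated with a Kaczmarz/ART loop; the system matrix convention (each row holds the per-cell path lengths of one line of sight) and the relaxation value are generic assumptions, not the constellation's actual algorithm.

```python
# Algebraic reconstruction (Kaczmarz/ART) of a concentration grid x from
# column-density measurements b = A @ x, with a non-negativity constraint.
import numpy as np

def art(A, b, n_iter=50, relax=0.5):
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in np.nonzero(row_norms)[0]:
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.maximum(x, 0.0)        # concentrations cannot be negative
    return x
```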
A Low-Power High-Speed Smart Sensor Design for Space Exploration Missions
NASA Technical Reports Server (NTRS)
Fang, Wai-Chi
1997-01-01
A low-power high-speed smart sensor system based on a large format active pixel sensor (APS) integrated with a programmable neural processor for space exploration missions is presented. The concept of building an advanced smart sensing system is demonstrated by a system-level microchip design that is composed with an APS sensor, a programmable neural processor, and an embedded microprocessor in a SOI CMOS technology. This ultra-fast smart sensor system-on-a-chip design mimics what is inherent in biological vision systems. Moreover, it is programmable and capable of performing ultra-fast machine vision processing in all levels such as image acquisition, image fusion, image analysis, scene interpretation, and control functions. The system provides about one tera-operation-per-second computing power which is a two order-of-magnitude increase over that of state-of-the-art microcomputers. Its high performance is due to massively parallel computing structures, high data throughput rates, fast learning capabilities, and advanced VLSI system-on-a-chip implementation.
Chander, G.; Xiong, X.; Angal, A.; Choi, T.
2009-01-01
The Committee on Earth Observation Satellites (CEOS) Infrared and Visible Optical Sensors (IVOS) subgroup members established a set of CEOS-endorsed globally distributed reference standard test sites for the postlaunch calibration of space-based optical imaging sensors. This paper discusses the top five African pseudo-invariant sites (Libya 4, Mauritania 1/2, Algeria 3, Libya 1, and Algeria 5) that were identified by the IVOS subgroup. This paper focuses on monitoring the long-term radiometric stability of the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors using near-simultaneous and cloud-free image pairs acquired from launch to December 2008 over the five African desert sites. Residual errors and coefficients of determination were also generated to support the quality assessment of the calibration differences between the two sensors. An effort was also made to evaluate the relative stability of these sites for long-term monitoring of the optical sensors. ©2009 IEEE.
Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor
NASA Astrophysics Data System (ADS)
Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso
2018-04-01
Automatic navigation for drones is under active development, across a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object, and an ultrasonic sensor was used to detect obstacle distance. The method used to trace an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor complemented the image processing success rate so that objects were fully detected. The obstacle avoidance system observed the program decisions under several obstacle conditions read by the camera and ultrasonic sensors. PID controllers based on visual feedback were used to control the drone's movement.
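The visual-feedback PID control mentioned above can be sketched as a textbook loop that, for instance, holds a standoff distance from an obstacle reported by the ultrasonic sensor. The setpoint, gains, and sample time below are illustrative assumptions, not values from the paper.

```python
class PID:
    """Textbook PID controller; gains are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: back away until the ultrasonic range reads 1.5 m.
pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=0.02)
pitch_cmd = pid.update(setpoint=1.5, measurement=1.2)  # positive -> move away
```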
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require registration before fusion because two separate cameras are used, and the performance of registration technology has yet to be improved. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam splitter prism, the coaxial light incident through a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the process of signal acquisition and fusion. A simulation experiment, covering the entire process of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model. Additionally, a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Light-Addressable Potentiometric Sensors for Quantitative Spatial Imaging of Chemical Species.
Yoshinobu, Tatsuo; Miyamoto, Ko-Ichiro; Werner, Carl Frederik; Poghossian, Arshak; Wagner, Torsten; Schöning, Michael J
2017-06-12
A light-addressable potentiometric sensor (LAPS) is a semiconductor-based chemical sensor, in which a measurement site on the sensing surface is defined by illumination. This light addressability can be applied to visualize the spatial distribution of pH or the concentration of a specific chemical species, with potential applications in the fields of chemistry, materials science, biology, and medicine. In this review, the features of this chemical imaging sensor technology are compared with those of other technologies. Instrumentation, principles of operation, and various measurement modes of chemical imaging sensor systems are described. The review discusses and summarizes state-of-the-art technologies, especially with regard to the spatial resolution and measurement speed; for example, a high spatial resolution in a submicron range and a readout speed in the range of several tens of thousands of pixels per second have been achieved with the LAPS. The possibility of combining this technology with microfluidic devices and other potential future developments are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Edward Z.; Laufer, Jan; Beard, Paul
2007-02-01
A 3D photoacoustic imaging instrument for characterising small animal models of human disease processes has been developed. The system comprises an OPO excitation source and a backward-mode planar ultrasound imaging head based upon a Fabry-Perot polymer film sensing interferometer (FPI). The mirrors of the latter are transparent between 590–1200 nm but highly reflective between 1500–1600 nm. This enables nanosecond excitation laser pulses in the former wavelength range, where biological tissues are relatively transparent, to be transmitted through the sensor head into the tissue. The resulting photoacoustic signals arrive at the sensor, where they modulate the optical thickness of the FPI and therefore its reflectivity. By scanning a CW focused interrogating laser beam at 1550 nm across the surface of the sensor, the spatio-temporal distribution of the photoacoustic signals can be mapped in 2D, enabling a 3D photoacoustic image to be reconstructed. To demonstrate the application of the system to imaging small animals such as mice, 3D images of the vascular anatomy of the mouse brain and the microvasculature in the skin around the abdomen were obtained non-invasively. It is considered that this system provides a practical alternative to photoacoustic scanners based upon piezoelectric detectors for high-resolution, non-invasive small animal imaging.
Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix
NASA Astrophysics Data System (ADS)
Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian
2015-07-01
We demonstrate lensless quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ~3.7 μm and an axial resolution of ~5 μm, over a large imaging FOV of 24 mm². The resolution and FOV can be further improved straightforwardly by using larger image sensors with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine needs, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.
Determination of technical readiness for an atmospheric carbon imaging spectrometer
NASA Astrophysics Data System (ADS)
Mobilia, Joseph; Kumer, John B.; Palmer, Alice; Sawyer, Kevin; Mao, Yalan; Katz, Noah; Mix, Jack; Nast, Ted; Clark, Charles S.; Vanbezooijen, Roel; Magoncelli, Antonio; Baraze, Ronald A.; Chenette, David L.
2013-09-01
The geoCARB sensor uses a 4-channel push-broom slit-scan infrared imaging grating spectrometer to measure the absorption spectra of sunlight reflected from the ground in narrow wavelength regions. The instrument is designed for flight at geostationary orbit to provide mapping of greenhouse gases over continental scales, several times per day, with a spatial resolution of a few kilometers. The sensor provides multiple daily maps of column-averaged mixing ratios of CO2, CH4, and CO over the regions of interest, which enables flux determination at unprecedented time, space, and accuracy scales. The geoCARB sensor development is based on our experience in the successful implementation of advanced space-deployed optical instruments for remote sensing. A few recent examples include the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), the Space Based Infrared System (SBIRS GEO-1), and the Interface Region Imaging Spectrograph (IRIS), along with sensors under development: the Near Infrared Camera (NIRCam) for the James Webb Space Telescope (JWST), and the Global Lightning Mapper (GLM) and Solar UltraViolet Imager (SUVI) for the GOES-R series. The Tropospheric Infrared Mapping Spectrometer (TIMS), developed in part through the NASA Instrument Incubator Program (IIP), provides an important part of the strong technological foundation for geoCARB. The paper discusses subsystem heritage and technology readiness levels for these subsystems. The system-level flight technology readiness and the methods used to determine this level are presented, along with plans to enhance the level.
Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R
2016-10-06
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.
NASA Astrophysics Data System (ADS)
Fong de Los Santos, Luis E.
This work describes the development of a scanning superconducting quantum interference device (SQUID) microscope system with interchangeable sensor configurations for imaging magnetic fields of room-temperature (RT) samples with sub-millimeter resolution. The low-critical-temperature (Tc) niobium-based monolithic SQUID sensor is mounted in the tip of a sapphire rod and thermally anchored to the cryostat helium reservoir. A 25 μm sapphire window separates the vacuum space from the RT sample. A positioning mechanism allows adjusting the sample-to-sensor spacing from the top of the Dewar. I have achieved a sensor-to-sample spacing of 100 μm, which could be maintained for periods of up to 4 weeks. Different SQUID sensor configurations are necessary to achieve the best combination of spatial resolution and field sensitivity for a given magnetic source. For imaging thin sections of geological samples, I used a custom-designed monolithic low-Tc niobium bare SQUID sensor with an effective diameter of 80 μm, and achieved a field sensitivity of 1.5 pT/√Hz and a magnetic moment sensitivity of 5.4 × 10^-18 Am²/√Hz at a sensor-to-sample spacing of 100 μm in the white noise region for frequencies above 100 Hz. Imaging action currents in cardiac tissue requires higher field sensitivity, which can only be achieved by compromising spatial resolution. I developed a monolithic low-Tc niobium multiloop SQUID sensor, with sensor sizes ranging from 250 μm to 1 mm, achieving sensitivities of 480–180 fT/√Hz in the white noise region for frequencies above 100 Hz, respectively. For all sensor configurations, the spatial resolution was comparable to the effective diameter and limited by the sensor-to-sample spacing. Spatial registration allowed us to compare high-resolution images of magnetic fields associated with action currents and optical recordings of transmembrane potentials to study the bidomain nature of cardiac tissue, or to match petrography to magnetic field maps in thin sections of geological samples.
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging for use in machine vision systems has proven prohibitive when used within control systems that employ low-power single processors, without compromising the scope of vision or the resolution of captured images. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is required to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision issues.
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensorless wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known NEWUOA optimization algorithm and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows significantly improved image quality.
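For a feel of the baseline the model-based methods are compared against, here is a minimal sketch of a conventional coordinate search: perturb one aberration-mode coefficient at a time and keep any change that increases an image-quality metric. The step size, sweep count, and `metric` callable (standing in for the OCT signal strength) are assumptions for illustration.

```python
import numpy as np

def coordinate_search(metric, x0, step=0.1, n_sweeps=5):
    """Greedy coordinate search over aberration-mode coefficients.

    metric : callable mapping a coefficient vector to a scalar to maximize
    x0     : initial coefficient vector (e.g. Zernike mode amplitudes)
    """
    x = np.asarray(x0, dtype=float)
    best = metric(x)
    for _ in range(n_sweeps):
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = x.copy()
                trial[i] += delta
                val = metric(trial)  # each call costs one measurement
                if val > best:
                    best, x = val, trial
    return x, best
```

The measurement count of such a search grows quickly with the number of modes, which is exactly the cost that the quadratic-model (NEWUOA) and Fourier-basis (DONE) surrogates reduce.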
Knowledge-based imaging-sensor fusion system
NASA Technical Reports Server (NTRS)
Westrom, George
1989-01-01
An imaging system which applies knowledge-based technology to supervise and control both the sensor hardware and the computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera: all the data cannot possibly be acquired by a laboratory on Earth; in fact, only a small fraction of the data will be received. Or imagine being responsible for some experiments on Mars with the Mars Rover, where the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system with some human knowledge that could follow instructions and attempt to make the best use of the limited transmission bandwidth? The system concept, the current status of the breadboard system, and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.
NASA Astrophysics Data System (ADS)
Guo, C.; Tong, X.; Liu, S.; Liu, S.; Lu, X.; Chen, P.; Jin, Y.; Xie, H.
2017-07-01
Determining the attitude of a satellite at the time of imaging and then establishing the mathematical relationship between image points and ground points is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high-frequency attitude variation due to measurement noise and satellite jitter, but low-frequency attitude motion can be determined with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude change, but due to gyro drift and integration error, its attitude determination error increases with time. Based on the opposite noise frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyros based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired from the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering to the gyro and low-pass filtering to the star tracker. Second, the CF-based fused attitude data are introduced as the observed values of the UKF system in the measurement update. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The obtained results indicate that the proposed method can suppress gyro drift and the measurement noise of the attitude sensors, improving the accuracy of attitude determination significantly when compared with the simulated on-orbit attitude and with the attitude estimates of a UKF alone under the same simulation parameters.
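The core fusion idea, high-pass the gyro path and low-pass the star-tracker path, can be illustrated with a scalar complementary filter. The paper works with quaternions and feeds the fused attitude into a UKF; this sketch shows only the CF step, with an assumed crossover time constant.

```python
import numpy as np

def complementary_filter(gyro_rate, star_angle, dt, tau=10.0):
    """Blend integrated gyro rates (trusted at high frequency) with
    star-tracker angles (trusted at low frequency).

    gyro_rate  : (n,) angular rates from the gyro [rad/s]
    star_angle : (n,) absolute angles from the star tracker [rad]
    tau        : crossover time constant [s] (illustrative value)
    """
    alpha = tau / (tau + dt)          # weight of the gyro (high-pass) path
    est = np.zeros(len(gyro_rate))
    est[0] = star_angle[0]
    for k in range(1, len(est)):
        propagated = est[k - 1] + gyro_rate[k] * dt      # gyro integration
        est[k] = alpha * propagated + (1 - alpha) * star_angle[k]
    return est
```

Because the gyro term enters only through the propagation step, its drift is continuously pulled back toward the star-tracker solution, while short-term jitter in the star measurements is averaged out.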
Fiber Optic Force Sensors for MRI-Guided Interventions and Rehabilitation: A Review
Iordachita, Iulian I.; Tokuda, Junichi; Hata, Nobuhiko; Liu, Xuan; Seifabadi, Reza; Xu, Sheng; Wood, Bradford; Fischer, Gregory S.
2017-01-01
Magnetic Resonance Imaging (MRI) provides both anatomical imaging with excellent soft tissue contrast and functional MRI (fMRI) of physiological parameters. The last two decades have witnessed increased interest in MRI-guided minimally invasive intervention procedures and in fMRI for rehabilitation and neuroscience research. Accompanying the aspiration to utilize MRI to provide imaging feedback during interventions and brain activity for neuroscience study, there has been an accumulated effort to utilize force sensors compatible with the MRI environment to meet the growing demand of these procedures, with the goal of enhanced interventional safety and accuracy, improved efficacy, and better rehabilitation outcomes. This paper summarizes the fundamental principles, the state-of-the-art development, and the challenges of fiber optic force sensors for MRI-guided interventions and rehabilitation. It provides an overview of MRI-compatible fiber optic force sensors based on different sensing principles, including light intensity modulation, wavelength modulation, and phase modulation. Extensive design prototypes are reviewed to illustrate the detailed implementation of these principles. Advantages and disadvantages of the sensor designs are compared and analyzed. A perspective on the future development of fiber optic sensors is also presented, which may have additional broad clinical applications. Future surgical interventions or rehabilitation will rely on intelligent force sensors to provide situational awareness to augment or complement human perception in these procedures. PMID:28652857
Distributed processing method for arbitrary view generation in camera sensor network
NASA Astrophysics Data System (ADS)
Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki
2003-05-01
A camera sensor network is a new kind of network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested from a central node or user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up processing, we distributed the processing tasks between nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image, with only local communication between nodes. The processing task is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods were proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, with CP the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of the vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
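The PLL idea can be sketched as a discrete PI loop: the measured phase offset between the modulated illumination and the start of each frame nudges the frame period until the sensor locks to the reference. The nominal 1 ms period matches the 1,000 Hz sensor; the gains are illustrative assumptions.

```python
class FramePLL:
    """Discrete PI phase-locked loop on the frame period (illustrative gains)."""

    def __init__(self, nominal_period_us=1000.0, kp=0.02, ki=0.002):
        self.period = nominal_period_us
        self.kp, self.ki = kp, ki
        self.integ = 0.0

    def update(self, phase_error_us):
        """phase_error_us > 0 means the frame starts late relative to the
        illumination reference, so the period is shortened to catch up."""
        self.integ += phase_error_us
        self.period -= self.kp * phase_error_us + self.ki * self.integ
        return self.period

# Each frame: measure the phase error from the demodulated illumination
# signal, then program the returned period into the sensor's frame timing.
pll = FramePLL()
next_period = pll.update(phase_error_us=8.0)
```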
Surveillance and reconnaissance ground system architecture
NASA Astrophysics Data System (ADS)
Devambez, Francois
2001-12-01
Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment, called MGS (Modular Ground Segment), is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolvability, and reduction of life-cycle cost. The general performance characteristics of the MGS are presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations, ranging from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorum, O.H.; Hoover, A.; Jones, J.P.
This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation, and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir
2016-01-01
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570
Measurement of charge transfer potential barrier in pinned photodiode CMOS image sensors
NASA Astrophysics Data System (ADS)
Chen, Cao; Bing, Zhang; Junfeng, Wang; Longsheng, Wu
2016-05-01
The charge transfer potential barrier (CTPB) formed beneath the transfer gate causes a noticeable image lag issue in pinned photodiode (PPD) CMOS image sensors (CIS), and is difficult to measure directly since it is embedded inside the device. From an understanding of the CTPB formation mechanism, we report an alternative method to feasibly measure the CTPB height by performing a linear extrapolation coupled with a horizontal left-shift on the sensor photoresponse curve under steady-state illumination. The principle of the proposed method is studied theoretically in detail. Application of the measurements to a prototype PPD-CIS chip with an array of 160 × 160 pixels is demonstrated. Such a method is intended to shed new light on guidance for the optimization of lag-free, high-speed sensors based on PPD devices. Project supported by the National Defense Pre-Research Foundation of China (No. 51311050301095).
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper, the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high-resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated into the aircraft. Moreover, a custom-developed thermal imaging system composed of a VIS-PAN camera and an LWIR camera is used for aerial recordings in the thermal infrared range. Furthermore, another custom-developed, highly flexible imaging system for high-resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities, and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified, and then stitched into mosaics.
Architecture and applications of a high resolution gated SPAD image sensor
Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo
2014-01-01
We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572
An embedded multi-core parallel model for real-time stereo imaging
NASA Astrophysics Data System (ADS)
He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu
2018-04-01
Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computing load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
Zhang, Geng; Wang, Shuang; Li, Libo; Hu, Xiuqing; Hu, Bingliang
2016-11-01
The lunar spectrum has been used in radiometric calibration and sensor stability monitoring for spaceborne optical sensors. A ground-based large-aperture static image spectrometer (LASIS) can be used to acquire lunar spectral images for lunar radiance model improvement when the moon passes through its viewing field. The moon's orbital motion, however, is not consistent with the desired scanning speed and direction of LASIS. To correctly extract interferograms from the acquired data, a translation correction method based on image correlation is proposed. This method registers the frames to a reference frame to reduce accumulated errors. Furthermore, we propose a circle-matching-based approach to achieve even higher accuracy during observation of the full moon. To demonstrate the effectiveness of our approaches, experiments are run on real lunar observation data. The results show that the proposed approaches outperform state-of-the-art methods.
A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio
2010-01-01
A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described, using the information extracted from the fibers. This procedure differs from other currently known focusing methods due to the non-spatial in-out correspondence between fibers, which produces a natural codification of the image to transmit. Focus measurement is essential prior to carrying out calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two methods based on mean gray level, and two based on variance. In this paper, a few simple focus measures are defined and compared. Some experimental results regarding the focus measure and the accuracy of the developed methods are discussed in order to demonstrate their effectiveness. PMID:22315526
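Mean-gray-level and variance focus measures of the kind named above can be sketched as follows; the exact definitions used for IOFB calibration may differ. A through-focus sweep then selects the lens position that maximizes the chosen measure.

```python
import numpy as np

def focus_mean(img):
    """Mean-gray-level measure (one of the two families named in the abstract)."""
    return float(np.mean(img.astype(np.float64)))

def focus_variance(img):
    """Variance measure: a well-focused fiber-bundle image has a larger
    gray-level spread between fiber cores and the gaps between them."""
    return float(np.var(img.astype(np.float64)))

def focus_normalized_variance(img):
    """Variance normalized by the mean, reducing sensitivity to
    illumination changes during the sweep."""
    img = img.astype(np.float64)
    m = img.mean()
    return float(np.var(img) / m) if m > 0 else 0.0

# Hypothetical usage over a through-focus sweep:
# best_position = max(positions, key=lambda z: focus_variance(capture(z)))
```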
Wavefront sensor-driven variable-geometry pupil for ground-based aperture synthesis imaging
NASA Astrophysics Data System (ADS)
Tyler, David W.
2000-07-01
I describe a variable-geometry pupil (VGP) to increase image resolution for ground-based near-IR and optical imaging. In this scheme, a curvature-type wavefront sensor provides an estimate of the wavefront curvature to the controller of a high-resolution spatial light modulator (SLM) or micro-electromechanical (MEM) mirror positioned at an image of the telescope pupil. This optical element, the VGP, passes or reflects the incident beam only where the wavefront phase is sufficiently smooth, viz., where the curvature is sufficiently low. Using a computer simulation, I show the VGP can sharpen and smooth the long-exposure PSF and increase the OTF SNR for tilt-only and low-order AO systems, allowing higher resolution and more stable deconvolution with dimmer AO guidestars.
Leinders, S M; Westerveld, W J; Pozo, J; van Neer, P L M J; Snyder, B; O'Brien, P; Urbach, H P; de Jong, N; Verweij, M D
2015-09-22
With the increasing use of ultrasonography, especially in medical imaging, novel fabrication techniques together with novel sensor designs are needed to meet the requirements of future applications such as three-dimensional intracardiac and intravascular imaging. These applications require arrays of many small elements to selectively record the sound waves coming from a certain direction. Here we present a proof of concept of an optical micro-machined ultrasound sensor (OMUS) fabricated with a semi-industrial CMOS fabrication line. The sensor is based on integrated photonics, which allows for elements with a small spatial footprint. We demonstrate that the first prototype is already capable of detecting pressures of 0.4 Pa, which matches the performance of state-of-the-art piezoelectric transducers while having a 65 times smaller spatial footprint. The sensor is compatible with MRI due to the absence of electrical wiring. Another important benefit of the use of integrated photonics is the easy interrogation of an array of elements: in future designs, only two optical fibers are needed to interrogate an entire array, which minimizes the number of connections of smart catheters. The demonstrated OMUS has potential applications in medical ultrasound imaging, non-destructive testing, and flow sensing.
Sensor image prediction techniques
NASA Astrophysics Data System (ADS)
Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.
1981-02-01
The preparation of prediction imagery is a complex, costly, and time consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks performed during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of navigator performance when using a particular sensor can be extended to the analysis of mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and were then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.
2008-01-01
Distributed network-based battle management; high-performance computing supporting uniform and nonuniform memory access with single and multithreaded...pallet; airborne EO/IR and radar sensors; VNIR-through-SWIR hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; ...meteorological sensors; hyperspectral sensor systems (PHILLS); mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system; long-wave infrared (LWIR
NASA Astrophysics Data System (ADS)
Wei, Minsong; Xing, Fei; You, Zheng
2017-01-01
The rapid growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed, which transforms a two-dimensional image into two linear profile outputs so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle is introduced to fit the multiplexing image detector method for accuracy improvement of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of FOV regions is also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, is utilized to achieve wireless and self-powered functionality. The prototype sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS can be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor
NASA Astrophysics Data System (ADS)
Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik
An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and of the illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristic are used to obtain the chromaticity. Experimental results show that the proposed method is valid in terms of measuring performance.
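The least-squares step can be written out directly: given N training patches with camera RGB values and reference XYZ tristimulus values, find the 3×3 matrix M minimizing ||XYZ − M·RGB||² over the training set. The sketch below assumes linearized RGB; the variable names are ours, not the paper's.

```python
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """Least-squares 3x3 colorimetric characterization matrix.

    rgb : (N, 3) linearized camera responses for N training patches
    xyz : (N, 3) reference CIE XYZ values for the same patches
    Returns M such that xyz_vector ≈ M @ rgb_vector.
    """
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)  # solves rgb @ M ≈ xyz
    return M.T

# Hypothetical usage: convert a new camera pixel to XYZ, then to
# chromaticity via (x, y) = (X, Y) / (X + Y + Z).
# M = fit_characterization_matrix(training_rgb, training_xyz)
# X, Y, Z = M @ pixel_rgb
```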
Pagoulatos, N; Edwards, W S; Haynor, D R; Kim, Y
1999-12-01
The use of stereotactic systems has been one of the main approaches for image-based guidance of the surgical tool within the brain. The main limitation of stereotactic systems is that they are based on preoperative images that might become outdated and invalid during the course of surgery. Ultrasound (US) is considered the most practical and cost-effective intraoperative imaging modality, but US images inherently have a low signal-to-noise ratio. Integrating intraoperative US with stereotactic systems has recently been attempted. In this paper, we present a new system for interactively registering two-dimensional US and three-dimensional magnetic resonance (MR) images. This registration is based on tracking the US probe with a DC magnetic position sensor. We have performed an extensive analysis of the errors of our system by using a custom-built phantom. The registration error between the MR and the position sensor space was found to have a mean value of 1.78 mm and a standard deviation of 0.18 mm. The registration error between US and MR space was dependent on the distance of the target point from the US probe face. For a 3.5-MHz phased one-dimensional array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zeroth-order or first-order interpolation. The ease of use and the interactive nature of our system (approximately 6.5 frames/s for 344 × 310 images and first-order interpolation on a Pentium II 450 MHz) demonstrate its potential to be used in the operating room.
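Tracking the probe with a position sensor turns US-to-MR registration into a chain of rigid transforms: US pixel → probe frame (probe calibration) → tracker frame (sensor pose) → MR frame (MR-to-tracker registration). A sketch with hypothetical transform names follows; the paper's actual calibration procedure is not reproduced here.

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def us_pixel_to_mr(px, py, mm_per_px,
                   T_probe_from_image, T_tracker_from_probe, T_mr_from_tracker):
    """Map a 2D US pixel into 3D MR coordinates through the transform chain.
    The US image plane is taken as z = 0 in the image frame."""
    p_image = np.array([px * mm_per_px, py * mm_per_px, 0.0, 1.0])
    p_mr = T_mr_from_tracker @ T_tracker_from_probe @ T_probe_from_image @ p_image
    return p_mr[:3]
```

Errors in each link of the chain (calibration, sensor reading, MR registration) accumulate, which is consistent with the target-distance-dependent US-to-MR error reported above.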
A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.
Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin
2015-09-16
In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in real images captured by the TDI-CIS are eliminated effectively with the proposed method.
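The estimation and compensation steps translate almost line for line into code. The sketch below folds the paper's add/subtract conventions into subtracting row- and column-profile deviations from the global mean, which may differ in sign convention from the original.

```python
import numpy as np

def estimate_fpn(flat_frames):
    """Estimate row and column FPN from a stack of flat-field frames.

    flat_frames : (n, rows, cols) images captured under uniform illumination
    Returns (rfpn, cfpn): deviations of the mean row/column profiles
    from the global mean gray value.
    """
    mean_img = flat_frames.mean(axis=0)
    row_profile = mean_img.mean(axis=1)   # row-mean vector
    col_profile = mean_img.mean(axis=0)   # column-mean vector
    g = mean_img.mean()
    return row_profile - g, col_profile - g

def correct_fpn(img, rfpn, cfpn):
    """Compensate every pixel by its row and column FPN offsets."""
    return img.astype(np.float64) - rfpn[:, None] - cfpn[None, :]
```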
New false color mapping for image fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Walraven, Jan
1996-03-01
A pixel-based color-mapping algorithm is presented that produces a fused false color rendering of two gray-level images representing different sensor modalities. The resulting images have a higher information content than each of the original images and retain sensor-specific image information. The unique component of each image modality is enhanced in the resulting fused color image representation. First, the common component of the two original input images is determined. Second, the common component is subtracted from the original images to obtain the unique component of each image. Third, the unique component of each image modality is subtracted from the image of the other modality. This step serves to enhance the representation of sensor-specific details in the final fused result. Finally, a fused color image is produced by displaying the images resulting from the last step through, respectively, the red and green channels of a color display. The method is applied to fuse thermal and visual images. The results show that the color mapping enhances the visibility of certain details and preserves the specificity of the sensor information. The fused images also have a fairly natural appearance. The fusion scheme involves only operations on corresponding pixels. The resolution of a fused image is therefore directly related to the resolution of the input images. Before fusing, the contrast of the images can be enhanced and their noise can be reduced by standard image-processing techniques. The color mapping algorithm is computationally simple. This implies that the investigated approaches can eventually be applied in real time and that the hardware needed is not too complicated or too voluminous (an important consideration when it has to fit in an airplane, for instance).
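Following the abstract's four steps literally gives a very short implementation. The sketch below takes the pixelwise minimum as the common component, a common choice for this kind of mapping that the paper may define differently; 8-bit inputs are assumed.

```python
import numpy as np

def false_color_fuse(vis, ir):
    """Fuse a visual and a thermal gray-level image into a red/green
    false-color rendering, per the steps described in the abstract."""
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    common = np.minimum(vis, ir)               # step 1: common component
    unique_vis = vis - common                  # step 2: sensor-specific parts
    unique_ir = ir - common
    red = np.clip(vis - unique_ir, 0, 255)     # step 3: cross-subtraction
    green = np.clip(ir - unique_vis, 0, 255)
    blue = np.zeros_like(red)
    return np.stack([red, green, blue], axis=-1).astype(np.uint8)  # step 4
```

Because everything operates on corresponding pixels, the fused image keeps the input resolution and the whole mapping stays cheap enough for the real-time use the authors anticipate.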
BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry
NASA Technical Reports Server (NTRS)
Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)
1994-01-01
Until recently, optical remote sensing of ecosystem properties from space has been limited to broadband multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).
3D imaging of translucent media with a plenoptic sensor based on phase space optics
NASA Astrophysics Data System (ADS)
Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun
2015-05-01
Traditional stereo imaging technology does not work for dynamic translucent media, because such media show no obvious characteristic patterns and the use of multiple cameras is not permitted in most cases, whereas phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained by a plenoptic sensor with a single lens. This paper discusses the representation of depth information in phase space data and the corresponding calculation algorithms for different transparencies. A 3D imaging example of a waterfall is given at the end.
Ah Lee, Seung; Ou, Xiaoze; Lee, J Eugene; Yang, Changhuei
2013-06-01
We demonstrate a silo-filter (SF) complementary metal-oxide semiconductor (CMOS) image sensor for a chip-scale fluorescence microscope. The extruded pixel design with metal walls between neighboring pixels guides fluorescence emission through the thick absorptive filter to the photodiode of a pixel. Our prototype device achieves 13 μm resolution over a wide field of view (4.8 mm × 4.4 mm). We demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
NASA Astrophysics Data System (ADS)
Cheong, M. K.; Bahiki, M. R.; Azrad, S.
2016-10-01
The main goal of this study is to demonstrate an approach to collision avoidance for a Quadrotor Unmanned Aerial Vehicle (QUAV) using image sensors with a colour-based tracking method. A pair of high definition (HD) stereo cameras was chosen as the stereo vision sensor to obtain depth data from flat object surfaces. A laser transmitter was utilized to project a high-contrast tracking spot for depth calculation using common triangulation. A stereo vision algorithm was developed to acquire the distance from the tracked point to the QUAV, and the control algorithm was designed to manipulate the QUAV's response based on the calculated depth. The attitude and position controllers were designed using the non-linear model with the help of the OptiTrack motion tracking system. A number of collision avoidance flight tests were carried out to validate the performance of the stereo vision and control algorithms based on image sensors. In the results, the UAV was able to hover with fairly good accuracy in both static and dynamic short-range collision avoidance. Collision avoidance performance was better with obstacles having dull surfaces than with shiny surfaces. The minimum achievable collision avoidance distance was 0.4 m. The approach is suitable for short-range collision avoidance.
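The "common triangulation" step for a rectified stereo pair reduces to depth = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity of the laser spot between the two images. A sketch with illustrative numbers (not the paper's calibration):

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from disparity for a rectified stereo pair."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("tracked spot must have positive disparity")
    return focal_px * baseline_m / disparity

# Illustrative values: f = 700 px, B = 0.12 m, spot at 420 px (left)
# and 350 px (right) -> depth = 700 * 0.12 / 70 = 1.2 m
print(stereo_depth(700.0, 0.12, 420.0, 350.0))
```

Projecting a high-contrast laser spot sidesteps the usual weakness of stereo vision on flat, textureless surfaces: it guarantees a feature whose disparity can be measured reliably.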
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor.
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-12-29
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to thermal crossover, which can happen at any time throughout the day when the infrared image contrast between target and background in a scene becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how thermal crossover influences a conventional thermal sensor, the conditions under which thermal crossover happens, and why mid-infrared (3~5 μm) multispectral technology is effective. Secondly, the effectiveness of this technology is described, along with how the prototype design and multispectral technology help solve the thermal crossover detection problem. Thirdly, several targets were set up outside and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-01-01
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to thermal crossover, which can happen at any time throughout the day when the infrared image contrast between target and background in a scene becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how thermal crossover influences a conventional thermal sensor, the conditions under which thermal crossover happens, and why mid-infrared (3~5 μm) multispectral technology is effective. Secondly, the effectiveness of this technology is described, along with how the prototype design and multispectral technology help solve the thermal crossover detection problem. Thirdly, several targets were set up outside and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications. PMID:28036073
The application of Fresnel zone plate based projection in optofluidic microscopy.
Wu, Jigang; Cui, Xiquan; Lee, Lap Man; Yang, Changhuei
2008-09-29
Optofluidic microscopy (OFM) is a novel technique for low-cost, high-resolution on-chip microscopy imaging. In this paper we report the use of Fresnel zone plate (FZP) based projection in OFM as a cost-effective and compact means for projecting the transmission through an OFM's aperture array onto a sensor grid. We demonstrate this approach by employing an FZP (diameter = 255 μm, focal length = 800 μm) that has been patterned onto a glass slide to project the transmission from an array of apertures (diameter = 1 μm, separation = 10 μm) onto a CMOS sensor. We are able to resolve the contributions from 44 apertures on the sensor under illumination from a HeNe laser (wavelength = 633 nm). The imaging quality of the FZP determines the effective field-of-view (related to the number of resolvable transmissions from apertures) but not the image resolution of such an OFM system, a key distinction from conventional microscope systems. We demonstrate the capability of the integrated system by flowing the protist Euglena gracilis across the aperture array microfluidically and performing OFM imaging of the samples.
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
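One way to merge the three back-to-back exposures into a single HDR estimate is to pick, per pixel, the longest exposure that is not saturated and normalize its photon count by the exposure time; the chip's actual merge logic is not specified in the abstract, so the saturation threshold and array shapes below are assumptions.

```python
import numpy as np

def merge_exposures(counts, exposure_times, sat_level):
    """Merge multi-exposure photon counts into count rates.

    counts         : (3, H, W) photon counts for short/mid/long exposures
    exposure_times : (3,) exposure durations, in ascending order
    sat_level      : count value treated as saturated
    """
    counts = np.asarray(counts, dtype=np.float64)
    times = np.asarray(exposure_times, dtype=np.float64)
    rates = counts / times[:, None, None]       # counts per unit time
    valid = counts < sat_level
    # Index of the longest unsaturated exposure per pixel (argmax returns
    # the first maximum of the cumulative valid count).
    idx = valid.cumsum(axis=0).argmax(axis=0)
    return np.take_along_axis(rates, idx[None], axis=0)[0]
```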
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved with closed-circuit television (CCTV) and surveillance cameras by combining the extended near-infrared (NIR) response (800–1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems used for long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light-level imaging. By combining this CIS with field-programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-11-26
This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of the defects of the fusion rules, it is almost impossible to completely avoid the loss of useful information in the fused images thus obtained. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and the SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is obtained. The map is used to guide how to achieve the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.
End-To-End performance test of the LINC-NIRVANA Wavefront-Sensor system.
NASA Astrophysics Data System (ADS)
Berwein, Juergen; Bertram, Thomas; Conrad, Al; Briegel, Florian; Kittmann, Frank; Zhang, Xiangyu; Mohr, Lars
2011-09-01
LINC-NIRVANA is an imaging Fizeau interferometer for use at near-infrared wavelengths, being built for the Large Binocular Telescope. Multi-conjugate adaptive optics (MCAO) increases the sky coverage and the field of view over which diffraction-limited images can be obtained. For its MCAO implementation, LINC-NIRVANA utilizes four wavefront sensors in total; each of the two beams is corrected by both a ground-layer wavefront sensor (GWS) and a high-layer wavefront sensor (HWS). The GWS controls the adaptive secondary deformable mirror (DM) via a DSP-based slope computing unit, whereas the HWS controls an internal DM via computations provided by an off-the-shelf multi-core Linux system. Using wavefront sensor data collected from a prior lab experiment, we have shown via simulation that the Linux-based system is sufficient to operate at 1 kHz, with jitter well below the needs of the final system. Based on that setup, we tested the end-to-end performance and latency through all parts of the system, which includes the camera, the wavefront controller, and the deformable mirror. We will present our loop control structure and the results of those performance tests.
Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2008-01-01
Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on some measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC algorithm-based extended-scene Shack-Hartmann sensors.
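The FFT-based core of such shift estimation can be illustrated with standard phase correlation. This is not the ACC algorithm itself, which iterates image shifting to reach 0.01-pixel accuracy; the sketch below recovers only the integer-pixel shift between two image cells.

```python
import numpy as np

def estimate_shift(ref, tgt):
    """Integer-pixel shift of `tgt` relative to `ref` via FFT phase correlation."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(tgt))
    F /= np.abs(F) + 1e-12                  # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```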
Space-based infrared scanning sensor LOS determination and calibration using star observation
NASA Astrophysics Data System (ADS)
Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang
2015-10-01
This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) to a target for target location. LOS determination and calibration is a key precondition for accurate location and tracking of targets in a space-based IR system, and the LOS calibration of a scanning sensor is one of the main difficulties. Subsequent changes in sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model based on the estimation of bias angles using star observations is proposed. The process model of the bias angles and the observation model of the stars are established, an extended Kalman filter (EKF) is used to estimate the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method has high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
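The EKF recursion underlying the bias estimation follows the standard predict/update form. The sketch below is a generic EKF step, not the paper's specific process and star-observation models, which would supply the functions f, h and their Jacobians.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : state estimate (e.g., bias angles) and covariance
    z    : measurement (e.g., star positions in the focal plane)
    f, h : nonlinear process and observation functions
    F, H : their Jacobians evaluated at the current estimate
    Q, R : process and measurement noise covariances
    """
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```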
Real-time image mosaicing for medical applications.
Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth
2007-01-01
In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
Xu, Han-qiu; Zhang, Tie-jun
2011-07-01
The present paper investigates the quantitative relationship between the NDVI and SAVI vegetation indices of the Landsat and ASTER sensors based on three tandem image pairs. The study examines how well ASTER vegetation observations replicate ETM+ vegetation observations and, more importantly, the difference in the vegetation observations between the two sensors. The DN values of the three image pairs were first converted to at-sensor reflectance to reduce radiometric differences between the two sensors' images. The NDVI and SAVI vegetation indices of the two sensors were then calculated using the converted reflectance. The quantitative relationship was revealed through regression analysis on the scatter plots of the vegetation index values of the two sensors. The models for the conversion between the two sensors' vegetation indices were also obtained from the regression. The results show that a difference does exist between the two sensors' vegetation indices even though they have a very strong positive linear relationship. The study found that the red and near-infrared measurements differ between the two sensors, with ASTER generally producing higher reflectance in the red band and lower reflectance in the near-infrared band than the ETM+ sensor. This results in the ASTER sensor producing lower spectral vegetation index measurements, for the same target, than ETM+. The relative spectral response function differences in the red and near-infrared bands between the two sensors are believed to be the main factor contributing to their differences in vegetation index measurements, because the red and near-infrared relative spectral response features of the ASTER sensor overlap the vegetation "red edge" spectral region. The obtained conversion models have high accuracy, with an RMSE less than 0.04 for both sensors' inter-conversion between corresponding vegetation indices.
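Both indices and the cross-sensor regression are straightforward to reproduce. The sketch below uses synthetic reflectance values in place of the paired ASTER/ETM+ measurements; the SAVI soil factor L = 0.5 is the common default, assumed rather than taken from the paper.

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

# synthetic stand-ins for coincident at-sensor reflectance pairs
rng = np.random.default_rng(0)
red_a = rng.uniform(0.05, 0.20, 1000)                  # "ASTER" red band
nir_a = rng.uniform(0.20, 0.50, 1000)                  # "ASTER" NIR band
vi_a = ndvi(nir_a, red_a)
vi_e = 0.95 * vi_a + 0.02 + rng.normal(0, 0.01, 1000)  # "ETM+" proxy

# regression-based conversion model between the two sensors' indices
slope, intercept = np.polyfit(vi_a, vi_e, 1)
rmse = np.sqrt(np.mean((slope * vi_a + intercept - vi_e) ** 2))
print(f"ETM+ NDVI ~ {slope:.3f} * ASTER NDVI + {intercept:.3f}, RMSE {rmse:.3f}")
```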
Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le
2018-02-12
Traditional unsupervised change detection methods based on the pixel level can only detect changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR data is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by Rj test statistics. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm, which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out a change detection experiment using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, China. The results show that the proposed method can not only detect time-series changes from different sensors, but also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in the surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
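A modified Hausdorff distance between two feature sets can be sketched compactly. The version below is the standard Dubuisson-Jain form, which replaces the max over nearest-neighbour distances with a mean; the paper's modification may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between point sets.

    A, B : (n, 2) and (m, 2) arrays of feature coordinates
    (e.g., detected crater centers in the two images).
    """
    D = cdist(A, B)                       # pairwise Euclidean distances
    d_ab = D.min(axis=1).mean()           # mean nearest-neighbour distance A -> B
    d_ba = D.min(axis=0).mean()           # ... and B -> A
    return max(d_ab, d_ba)
```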
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system
NASA Technical Reports Server (NTRS)
Slater, P. N.; Palmer, J. M. (Principal Investigator)
1984-01-01
The reduction of the data measured on July 8, 1984 at White Sands, New Mexico is summarized. The radiance incident at the entrance pupil of the LANDSAT 5 sensors has been computed for bands 1 to 4. When these values are compared to the digital counts of the TM image, the ground-based calibration for this sensor will be obtained. The image was received from Goddard SFC and is presently being analyzed.
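The comparison rests on the standard linear DN-to-radiance model. The sketch below uses placeholder calibration limits, not the actual TM band coefficients, to show the conversion that precedes the gain/offset comparison.

```python
def dn_to_radiance(dn, lmin, lmax, qcal_min=0, qcal_max=255):
    """Linear DN -> at-sensor spectral radiance (W m^-2 sr^-1 um^-1).

    lmin/lmax are the published band calibration limits; the values used
    below are placeholders, not the actual TM band coefficients.
    """
    return (lmax - lmin) / (qcal_max - qcal_min) * (dn - qcal_min) + lmin

# compare against the ground-measured entrance-pupil radiance for one DN
rad = dn_to_radiance(128, lmin=-1.5, lmax=152.1)
print(rad)
```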
2011-07-01
[Fragmentary search-result excerpt: discusses battlefield sensing with imaging radar (e.g., synthetic aperture radar (SAR)) and EO/IR sensors, including multi- and hyperspectral imaging; signal processing of data from nonimaging sensors; non-image-based techniques such as category theory, hierarchical systems, and gradient index flow for enhanced recognition; and the networking of the many imaging and nonimaging sensors on the battlefield for transmission of data.]
NASA Astrophysics Data System (ADS)
Nelson, Matthew P.; Tazik, Shawna K.; Bangalore, Arjun S.; Treado, Patrick J.; Klem, Ethan; Temple, Dorota
2017-05-01
Hyperspectral imaging (HSI) systems can provide detection and identification of a variety of targets in the presence of complex backgrounds. However, current generation sensors are typically large, costly to field, do not usually operate in real time, and have limited sensitivity and specificity. Despite these shortcomings, HSI-based intelligence has proven to be a valuable tool, thus resulting in increased demand for this type of technology. By moving the next generation of HSI technology into a more adaptive configuration, and a smaller and more cost-effective form factor, HSI technologies can help maintain a competitive advantage for the U.S. armed forces as well as local, state and federal law enforcement agencies. Operating near the physical limits of HSI system capability is often necessary and very challenging, but is often enabled by rigorous modeling of detection performance. Specific performance envelopes we consistently strive to improve include operating under low signal-to-background conditions, at ever higher frame rates, and under less-than-ideal motion control scenarios. An adaptable, low-cost, low-footprint, standoff sensor architecture we have been maturing includes the use of conformal liquid crystal tunable filters (LCTFs). These Conformal Filters (CFs) are electro-optically tunable, multivariate HSI spectrometers that, when combined with Dual Polarization (DP) optics, produce optimized spectral passbands on demand, which can readily be reconfigured to discriminate targets from complex backgrounds in real time. With DARPA support, ChemImage Sensor Systems (CISS™), in collaboration with Research Triangle Institute (RTI) International, is developing a novel, real-time, adaptable, compressive-sensing short-wave infrared (SWIR) hyperspectral imaging technology called the Reconfigurable Conformal Imaging Sensor (RCIS) based on DP-CF technology. RCIS will address many shortcomings of current generation systems and offer improvements in operational agility and detection performance, while addressing sensor weight, form factor and cost needs. This paper discusses recent test and performance modeling results of an RCIS breadboard apparatus.
Robotic Vehicle Communications Interoperability
1988-08-01
[Table residue: lists of robotic vehicle control functions (engine start, fire suppression, fording control, fuel control and tank selection, gear selection, hazard warning) and of optic sensor options (sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, forward/stereo/rear video camera selector) across vehicle variants.]
Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.
2016-01-01
Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054
Gorodkiewicz, Ewa; Breczko, Joanna; Sankiewicz, Anna
2012-04-24
A Surface Plasmon Resonance Imaging (SPRI) sensor based on bromelain, chymopapain or ficin has been developed for specific cystatin determination. Cystatin was captured from solution by immobilized bromelain, chymopapain or ficin through the formation of an enzyme-inhibitor complex on the biosensor surface. The influence of the bromelain, chymopapain or ficin concentration, as well as of the pH of the interaction, on the SPRI signal was investigated and optimized. The sensor's dynamic response range is 0-0.6 μg/ml and its detection limit is 0.1 μg/ml. To demonstrate the sensor's potential, cystatin was determined in blood plasma, urine and saliva, showing good agreement with data reported in the literature.
NASA Technical Reports Server (NTRS)
Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel
2016-01-01
Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and possibilities for future work are discussed.
NASA Astrophysics Data System (ADS)
Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel
2016-09-01
Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and possibilities for future work are discussed.
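The Ross-Li capability mentioned above refers to the kernel-driven BRDF model used by the MODIS product: reflectance is a linear combination of an isotropic term and volumetric and geometric scattering kernels. The sketch below implements the RossThick volumetric kernel and the linear combination; the LiSparse geometric kernel is left as an input, and the formulation follows the common MODIS convention rather than DIRSIG's internal one.

```python
import numpy as np

def ross_thick(theta_s, theta_v, phi):
    """RossThick volumetric scattering kernel (all angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))      # scattering phase angle
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def brdf_ross_li(f_iso, f_vol, f_geo, k_vol, k_geo):
    """Kernel-driven reflectance: isotropic + volumetric + geometric terms."""
    return f_iso + f_vol * k_vol + f_geo * k_geo

# example: nadir view, 30 deg sun, hypothetical kernel weights
k_vol = ross_thick(np.radians(30.0), 0.0, 0.0)
print(brdf_ross_li(0.3, 0.1, 0.05, k_vol, k_geo=-1.2))
```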
Origami silicon optoelectronics for hemispherical electronic eye systems.
Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang
2017-11-24
Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high spatial resolution into such a format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of a truncated icosahedron and fabricated on flexible sheets, then folded into either a concave or a convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. The results demonstrated in this work, combined with the miniature size and simplicity of the design, establish a practical technology for integration with conventional electronic devices.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by means of the estimated SPAD value of the canola, based on canopy reflectance sensed using the three channels of the multi-spectral camera. The core of this investigation is the calibration method relating the multi-spectral readings to the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained in this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro air vehicles (MAVs). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide-field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp-based optic flow algorithm, modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in the analog and digital domains as well as an adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, is retained in a pixel-level analog processing unit. The remaining blocks, including feature detection and time-stamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity using look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single-slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external time-stamp processing.
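The time-stamp decoding step reduces, in software terms, to a time-of-travel calculation: a feature latched at one pixel and later at its neighbour yields velocity as pixel pitch over the time difference. The sketch below is a software analogue of that chip-level decode, with hypothetical units.

```python
def optic_flow_from_timestamps(t1_us, t2_us, pixel_pitch_um):
    """1-D time-of-travel optic flow: a feature crossing two neighbouring
    pixels moves at pixel_pitch / (t2 - t1). Returns speed in um/s,
    or None when no valid match was latched between the pixels."""
    dt = (t2_us - t1_us) * 1e-6
    if dt <= 0:
        return None
    return pixel_pitch_um / dt

print(optic_flow_from_timestamps(1000, 1500, 10.0))   # 20000 um/s
```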
Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations
Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao
2017-01-01
A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on the sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinearity function and the position and orientation relationships amongst the different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
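Projecting a scanned point into a panoramic image amounts to a collinearity relation in spherical coordinates. The sketch below is a simplified stand-in: it assumes the point is already expressed in the panoramic camera frame and an ideal equirectangular image model, ignoring the mounting offsets and lens calibration the paper's method accounts for.

```python
import numpy as np

def point_to_panorama_pixel(p_cam, width, height):
    """Project a 3-D point (already in the panoramic camera frame) onto an
    ideal equirectangular panorama of size width x height."""
    x, y, z = p_cam
    lon = np.arctan2(y, x)                        # azimuth, [-pi, pi]
    lat = np.arcsin(z / np.linalg.norm(p_cam))    # elevation, [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v

print(point_to_panorama_pixel((1.0, 1.0, 0.5), 8192, 4096))
```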
Medipix2 based CdTe microprobe for dental imaging
NASA Astrophysics Data System (ADS)
Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.
2011-12-01
Medical imaging devices and techniques are required to provide high-resolution, low-dose images of samples or patients. Hybrid semiconductor single-photon-counting devices, together with suitable sensor materials and advanced techniques of image reconstruction, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single-photon-counting capability of the Medipix2 device allows a virtually unlimited dynamic range of the images and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% is the sensitive area. A detector of this compact size can be used directly inside the patient's mouth.
Fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; dos Santos, Paulo A. M.; Ferreira, Aldo P.; Maggi, Luis E.; de Carvalho, Carlos R., Jr.; Ribeiro, R. M.
1998-08-01
As widely known, fiber optics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions and low cost, large bandwidth and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as the light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real-time optical biosensor uses the evanescent field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte (bacteria) and the evanescent field of an optical fiber passing through it. The FO-based high voltage and current sensor is a measuring system designed for monitoring voltage and current in high-voltage transmission lines. The linearity of the system is better than 2% in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyses two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one fiber for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Research progress in fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; Ferreira, Aldo P.; Maggi, Luis E.; De Carvalho, C. C.; Ribeiro, R. M.
1999-02-01
As widely known, fiber optics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions and low cost, large bandwidth and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as the light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real-time optical biosensor uses the evanescent field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte and the evanescent field of an optical fiber passing through it. The FO-based high voltage and current sensor is a measuring system designed for monitoring voltage and current in high-voltage transmission lines. The linearity of the system is better than 2% in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyzes two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one fiber for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Satellite-based Tropical Cyclone Monitoring Capabilities
NASA Astrophysics Data System (ADS)
Hawkins, J.; Richardson, K.; Surratt, M.; Yang, S.; Lee, T. F.; Sampson, C. R.; Solbrig, J.; Kuciauskas, A. P.; Miller, S. D.; Kent, J.
2012-12-01
Satellite remote sensing capabilities to monitor tropical cyclone (TC) location, structure, and intensity have evolved by utilizing a combination of operational and research and development (R&D) sensors. The microwave imagers from the operational Defense Meteorological Satellite Program [Special Sensor Microwave/Imager (SSM/I) and the Special Sensor Microwave Imager Sounder (SSMIS)] form the "base" for structure observations due to their ability to view through upper-level clouds, their modest swath sizes and their ability to capture most storm structure features. The NASA TRMM microwave imager and precipitation radar continue their 15+ year-long missions serving the TC warning and research communities. The cessation of NASA's QuikSCAT satellite after more than a decade of service is sorely missed, but India's OceanSat-2 scatterometer is now providing crucial ocean surface wind vectors in addition to the Navy's WindSat ocean surface wind vector retrievals. Another Advanced Scatterometer (ASCAT) onboard EUMETSAT's MetOp-2 satellite is slated for launch soon. Passive microwave imagery has received a much-needed boost with the launch of the French/Indian Megha-Tropiques imager in September 2011, greatly supplementing the very successful NASA TRMM pathfinder with a larger swath and more frequent temporal sampling. While initial data issues have delayed data utilization, current news indicates these data will be available in 2013. Future NASA Global Precipitation Mission (GPM) sensors starting in 2014 will provide enhanced capabilities. Also, the inclusion of the new microwave sounder data from the NPP ATMS (October 2011) will assist in mapping TC convective structures. The National Polar-orbiting Partnership (NPP) program's VIIRS sensor includes a day-night band (DNB) with the capability to view TC cloud structure at night when sufficient lunar illumination exists. Examples highlighting this new capability will be discussed in concert with additional data fusion efforts.
New sensor technologies in quality evaluation of Chinese materia medica: 2010-2015.
Miao, Xiaosu; Cui, Qingyu; Wu, Honghui; Qiao, Yanjiang; Zheng, Yanfei; Wu, Zhisheng
2017-03-01
New sensor technologies play an important role in quality evaluation of Chinese materia medica (CMM) and include near-infrared spectroscopy, chemical imaging, electronic nose and electronic tongue. This review on quality evaluation of CMM and the application of the new sensors in this assessment is based on studies from 2010 to 2015, with prospects and opportunities for future research.
A robust vision-based sensor fusion approach for real-time pose estimation.
Assa, Akbar; Janabi-Sharifi, Farrokh
2014-02-01
Object pose estimation is of great importance to many applications, such as augmented reality, localization and mapping, motion capture, and visual servoing. Although many approaches based on a monocular camera have been proposed, only a few works have concentrated on applying multicamera sensor fusion techniques to pose estimation. Higher accuracy and enhanced robustness toward sensor defects or failures are some of the advantages of these schemes. This paper presents a new Kalman-based sensor fusion approach for pose estimation that offers higher accuracy and precision, and is robust to camera motion and image occlusion, compared to its predecessors. Extensive experiments are conducted to validate the superiority of this fusion method over currently employed vision-based pose estimation algorithms.
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple yet effective tool for the identification and quantification of various analytes. Colorimetric arrays combine data from many colorimetric sensors, and the multidimensional nature of the resulting data necessitates chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for the classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
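The visualization step can be reproduced with off-the-shelf PCA. The sketch below runs on synthetic stand-in responses rather than the study's measured colour data; the loadings of the first component indicate which of the eight sensors contribute most to the separation.

```python
import numpy as np
from sklearn.decomposition import PCA

# rows = samples, columns = colour responses of the 8 sensors
# (synthetic stand-in data; a real array would use measured colour deltas)
rng = np.random.default_rng(1)
X = rng.normal(size=(96, 8))

pca = PCA(n_components=2)
scores = pca.fit_transform(X)            # coordinates for the cluster plot
print(pca.explained_variance_ratio_)     # variance carried by each component
print(abs(pca.components_[0]))           # loadings: which sensors dominate PC1
```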
Bioinspired polarization navigation sensor for autonomous munitions systems
NASA Astrophysics Data System (ADS)
Giakos, G. C.; Quang, T.; Farrahi, T.; Deshpande, A.; Narayan, C.; Shrestha, S.; Li, Y.; Agarwal, M.
2013-05-01
Small unmanned aerial vehicles (SUAVs), micro air vehicles (MAVs), Automated Target Recognition (ATR), and munitions guidance require extreme operational agility and robustness, which can be partially provided by efficient bioinspired imaging sensor designs capable of delivering enhanced guidance, navigation and control (GNC) capabilities. Bioinspired imaging technology can prove useful either for long-distance surveillance of targets in a cluttered environment, or at close distances limited by space surroundings and obstructions. The purpose of this study is to explore the phenomenology of image formation by different insect eye architectures, which would directly benefit the areas of defense and security, in the following four distinct areas: a) fabrication of the bioinspired sensor, b) optical architecture, c) topology, and d) artificial intelligence. The outcome of this study indicates that bioinspired imaging can impact the areas of defense and security significantly through dedicated designs fitting different combat scenarios and applications.
Feed rate measuring method and system
Novak, J.L.; Wiczer, J.J.
1995-12-05
A system and method are provided for establishing the feed rate of a workpiece along a feed path with respect to a machine device. First and second sensors each having first and second sensing electrodes which are electrically isolated from the workpiece are positioned above, and in proximity to the desired surfaces of the workpiece along a feed path. An electric field is developed between the first and second sensing electrodes of each sensor and capacitance signals are developed which are indicative of the contour of the workpiece. First and second image signals representative of the contour of the workpiece along the feed path are developed by an image processor. The time delay between corresponding portions of the first and second image signals are then used to determine the feed rate based upon the separation of the first and second sensors and the amount of time between corresponding portions of the first and second image signals. 18 figs.
Feed rate measuring method and system
Novak, James L.; Wiczer, James J.
1995-01-01
A system and method are provided for establishing the feed rate of a workpiece along a feed path with respect to a machine device. First and second sensors each having first and second sensing electrodes which are electrically isolated from the workpiece are positioned above, and in proximity to the desired surfaces of the workpiece along a feed path. An electric field is developed between the first and second sensing electrodes of each sensor and capacitance signals are developed which are indicative of the contour of the workpiece. First and second image signals representative of the contour of the workpiece along the feed path are developed by an image processor. The time delay between corresponding portions of the first and second image signals are then used to determine the feed rate based upon the separation of the first and second sensors and the amount of time between corresponding portions of the first and second image signals.
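The measurement principle reduces to finding the time delay that best aligns the two contour signals and dividing the known sensor separation by it. A minimal sketch, assuming uniformly sampled signals and illustrative units:

```python
import numpy as np

def feed_rate(sig1, sig2, sensor_sep_mm, dt_s):
    """Feed rate from the delay between two contour signals sampled dt_s apart.
    sig2 is the downstream sensor; the lag maximizing the cross-correlation
    gives the transit time over the known sensor separation."""
    sig1 = sig1 - sig1.mean()
    sig2 = sig2 - sig2.mean()
    corr = np.correlate(sig2, sig1, mode="full")
    lag = np.argmax(corr) - (len(sig1) - 1)       # delay in samples
    if lag <= 0:
        return None                               # no forward motion detected
    return sensor_sep_mm / (lag * dt_s)           # mm/s

# synthetic test: second signal is the first delayed by 50 samples
s1 = np.random.default_rng(2).normal(size=1000)
s2 = np.roll(s1, 50)
print(feed_rate(s1, s2, sensor_sep_mm=20.0, dt_s=0.001))   # ~400 mm/s
```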
A Web-based home welfare and care services support system using a pen type image sensor.
Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Sato, Haruhiko; Hahn, Allen W; Caldwell, W Morton
2003-01-01
A long-term care insurance law for elderly persons was put in force two years ago in Japan. Home Helpers, who are employed by hospitals, care companies or the welfare office, provide home welfare and care services for the elderly, such as cooking, bathing, washing, cleaning and shopping. We developed a web-based home welfare and care services support system using wireless Internet mobile phones and Internet client computers, which employs a pen-type image sensor. The pen-type image sensor is used by elderly people as the entry device for their care requests. The client computer sends the requests to the server computer in the Home Helper central office, and the server computer then automatically transfers them to the Home Helper's mobile phone. This newly developed home welfare and care services support system is easily operated by elderly persons and enables Home Helpers to save a significant amount of time and extra travel.
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object whose shape is changed by an external force. The force applied to the target can be estimated from the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
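A minimal sketch of such a model, assuming PyTorch and illustrative layer sizes (the abstract does not specify the architecture): per-frame convolutional features feed a recurrent layer whose last state regresses a scalar force.

```python
import torch
import torch.nn as nn

class ForceFromFrames(nn.Module):
    """Sketch: CNN features per frame -> LSTM -> scalar force.
    All layer sizes are illustrative, not those of the paper's model."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny per-frame encoder
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)         # fully-connected regressor

    def forward(self, frames):                   # frames: (B, T, 1, H, W)
        B, T = frames.shape[:2]
        f = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.lstm(f)                    # temporal dynamics
        return self.head(out[:, -1])             # force at the last time step

pred = ForceFromFrames()(torch.randn(2, 10, 1, 64, 64))   # -> shape (2, 1)
print(pred.shape)
```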
Onboard TDI stage estimation and calibration using SNR analysis
NASA Astrophysics Data System (ADS)
Haghshenas, Javad
2017-09-01
The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is performed based on a noise analysis of TDI sensors for very high GSDs and low-light-level missions. It is well demonstrated that the CCD TDI mode of operation provides increased photosensitivity relative to a linear CCD array, without the sacrifice of spatial resolution. However, for satellite imaging, in order to utilize the advantages which the TDI mode of operation offers, attention should be given to the parameters which affect the image quality of TDI sensors, such as jitter, vibration and noise. A predefined number of TDI stages may not satisfy the image quality requirements of the satellite camera. Furthermore, in order to use the whole dynamic range of the sensor, the imager must be capable of setting the TDI stages for every shot based on the affecting parameters. This paper deals with the optimal estimation and setting of the stages based on trade-offs among MTF, noise and SNR. On-board SNR estimation is simulated using atmosphere analysis based on the MODTRAN algorithm in PcModWin software. Based on the noise models, we propose a formulation to estimate the number of TDI stages such that the system SNR requirement is satisfied. The MTF requirement must be satisfied in the same manner. A proper combination of both parameters will guarantee use of the full dynamic range along with high SNR and image quality.
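Under an ideal-summation assumption (signal grows linearly with stage count, shot noise with its square root), the smallest stage count meeting an SNR requirement follows directly. The electron and noise figures in the sketch below are illustrative, not taken from the paper.

```python
import numpy as np

def choose_tdi_stages(signal_e, read_noise_e, snr_required, max_stages=96):
    """Smallest TDI stage count meeting an SNR requirement, assuming ideal
    stage summation: signal grows as N, shot noise as sqrt(N), with one
    read at the end. signal_e is electrons per pixel per single stage."""
    for n in range(1, max_stages + 1):
        snr = n * signal_e / np.sqrt(n * signal_e + read_noise_e**2)
        if snr >= snr_required:
            return n
    return max_stages      # requirement unreachable; fall back to maximum

print(choose_tdi_stages(signal_e=200.0, read_noise_e=30.0, snr_required=50.0))
```

In practice this lower bound would be checked against the MTF penalty of large stage counts (attitude jitter smears across stages), mirroring the SNR/MTF trade-off the paper describes.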
Micromachined Chip Scale Thermal Sensor for Thermal Imaging.
Shekhawat, Gajendra S; Ramachandran, Srinivasan; Jiryaei Sharahi, Hossein; Sarkar, Souravi; Hujsak, Karl; Li, Yuan; Hagglund, Karl; Kim, Seonghwan; Aden, Gary; Chand, Ami; Dravid, Vinayak P
2018-02-27
The lateral resolution of scanning thermal microscopy (SThM) has hitherto never approached that of mainstream atomic force microscopy, mainly due to poor performance of the thermal sensor. Herein, we report a nanomechanical system-based thermal sensor (thermocouple) that enables high lateral resolution that is often required in nanoscale thermal characterization in a wide range of applications. This thermocouple-based probe technology delivers excellent lateral resolution (∼20 nm), extended high-temperature measurements >700 °C without cantilever bending, and thermal sensitivity (∼0.04 °C). The origin of significantly improved figures-of-merit lies in the probe design that consists of a hollow silicon tip integrated with a vertically oriented thermocouple sensor at the apex (low thermal mass) which interacts with the sample through a metallic nanowire (50 nm diameter), thereby achieving high lateral resolution. The efficacy of this approach to SThM is demonstrated by imaging embedded metallic nanostructures in silica core-shell, metal nanostructures coated with polymer films, and metal-polymer interconnect structures. The nanoscale pitch and extremely small thermal mass of the probe promise significant improvements over existing methods and wide range of applications in several fields including semiconductor industry, biomedical imaging, and data storage.
NASA Astrophysics Data System (ADS)
Kornilin, Dmitriy V.; Kudryavtsev, Ilya A.; McMillan, Alison J.; Osanlou, Ardeshir; Ratcliffe, Ian
2017-06-01
Modern hydraulic systems should be monitored on a regular basis. One of the most effective ways to address this task is to use in-line automatic particle counters (APCs) built into the system. The measurement of particle concentration in hydraulic liquid by an APC is crucial because an increasing number of particles usually indicates functional problems. Existing APCs have significant limitations: they cannot precisely measure the relatively low particle concentrations found in aerospace systems, or they are unable to measure the higher concentrations found in industrial ones. Both issues can be addressed by using a CMOS image sensor instead of the single photodiode used in most APCs. The CMOS image sensor helps overcome errors in volume measurement caused by the non-uniform particle speed inside the tube. The correction is based on determining the particle position and the parabolic velocity distribution profile. The proposed algorithms also reduce errors related to particle coincidences in the measurement volume. Simulation results show that accuracy increased by up to 90 percent and resolution improved tenfold compared with the single-photodiode sensor.
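The velocity correction rests on the laminar Poiseuille profile: flow speed varies parabolically with radial position in the tube, so a particle's position recovered from the image determines its local speed and hence the volume it represents. The sketch below encodes just that profile, with illustrative units.

```python
def local_velocity(r_mm, tube_radius_mm, v_mean_mm_s):
    """Laminar (Poiseuille) profile: v(r) = 2 * v_mean * (1 - (r/R)^2).
    A particle imaged at radial position r moves at this local speed,
    so its contribution to the sampled volume is weighted accordingly."""
    return 2.0 * v_mean_mm_s * (1.0 - (r_mm / tube_radius_mm) ** 2)

# a particle on the axis moves twice as fast as the mean flow,
# one near the wall far slower
print(local_velocity(0.0, 1.0, 50.0))   # 100.0 mm/s
print(local_velocity(0.9, 1.0, 50.0))   # 19.0 mm/s
```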
Heuristic approach to image registration
NASA Astrophysics Data System (ADS)
Gertner, Izidor; Maslov, Igor V.
2000-08-01
Image registration, i.e., the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register gray-scale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of potentially most successful transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge once it comes close to the optimum. Its termination point can be specified either as a predefined number of generations or as the achievement of a certain acceptable convergence level. To refine the search, potentially optimal subspaces are then searched using Tabu Search and Simulated Annealing, which are more delicate and efficient for local search.
Imaging Beyond What Man Can See
NASA Technical Reports Server (NTRS)
May, George; Mitchell, Brian
2004-01-01
Three lightweight, portable hyperspectral sensor systems have been built that capture energy from 200 to 1700 nanometers (ultraviolet to shortwave infrared). The sensors incorporate a line-scanning technique that requires no relative movement between the target and the sensor. This unique capability, combined with portability, opens up new uses of hyperspectral imaging for laboratory and field environments. Each system has a GUI-based software package that allows the user to communicate with the imaging device for setting spatial resolution, spectral bands and other parameters. NASA's Space Partnership Development has sponsored these innovative developments and their application to human problems on Earth and in space. Hyperspectral datasets have been captured and analyzed in numerous areas including precision agriculture, food safety, biomedical imaging, and forensics. Discussion of research results will include real-time detection of food contaminants, mold and toxin research on corn, identifying counterfeit documents, non-invasive wound monitoring and aircraft applications. Future research will include development of a thermal infrared hyperspectral sensor that will support natural resource applications on Earth and thermal analyses during long-duration space flight. This paper incorporates a variety of disciplines and imaging technologies that have been linked together to allow the expansion of remote sensing across both traditional and non-traditional boundaries.
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
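An RPC sensor model maps ground coordinates to image coordinates as a ratio of two 20-term cubic polynomials in normalized latitude, longitude and height; a relative bias between a pair's RPCs shows up as an offset in these projected coordinates. The sketch below evaluates one such ratio. The term ordering follows one common RPC00B-style convention and is an assumption here; real RPCs must be paired with their vendor's exact ordering.

```python
import numpy as np

def rpc_project(lat, lon, h, num, den, offsets, scales):
    """Evaluate one RPC ratio: normalized image coordinate =
    P_num(X, Y, Z) / P_den(X, Y, Z), each a 20-term cubic polynomial.
    num/den are length-20 coefficient arrays; term ordering is assumed."""
    X = (lon - offsets["lon"]) / scales["lon"]   # normalized ground coords
    Y = (lat - offsets["lat"]) / scales["lat"]
    Z = (h - offsets["h"]) / scales["h"]
    t = np.array([1, X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
                  X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y, Y**3,
                  Y*Z*Z, X*X*Z, Y*Y*Z, Z**3])
    return np.dot(num, t) / np.dot(den, t)
```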
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. An analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
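The speed advantage of a two-step single-slope conversion can be seen from an idealized cycle count: a conventional N-bit SS ADC needs a ramp of 2^N clock cycles, while splitting into M coarse and N-M fine bits needs only 2^M + 2^(N-M). The sketch below makes that count; real designs add settling and calibration overhead, which is one reason the fabricated ADC reports a 6x rather than the ideal speed-up.

```python
def ss_adc_cycles(bits):
    """Conventional single-slope ADC: the ramp must cover 2^N levels."""
    return 2 ** bits

def two_step_ss_adc_cycles(bits, coarse_bits):
    """Two-step SS: a coarse ramp over 2^M reference segments, then a fine
    ramp over the remaining 2^(N-M) levels (idealized count only)."""
    return 2 ** coarse_bits + 2 ** (bits - coarse_bits)

print(ss_adc_cycles(12))                 # 4096 cycles
print(two_step_ss_adc_cycles(12, 6))     # 128 cycles -> large ideal speed-up
```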
Multi Reflection of Lamb Wave Emission in an Acoustic Waveguide Sensor
Schmitt, Martin; Olfert, Sergei; Rautenberg, Jens; Lindner, Gerhard; Henning, Bernd; Reindl, Leonhard Michael
2013-01-01
Recently, an acoustic waveguide sensor based on multiple mode conversion of surface acoustic waves at the solid-liquid interfaces has been introduced for the concentration measurement of binary and ternary mixtures, liquid level sensing, investigation of spatial inhomogeneities and bubble detection. In this contribution the sound wave propagation within this acoustic waveguide sensor is visualized by Schlieren imaging for continuous and burst operation for the first time. In the acoustic waveguide, the antisymmetric zero-order Lamb wave mode is excited by a single-phase transducer of 1 MHz on thin glass plates of 1 mm thickness. On contact with the investigated liquid, Lamb waves propagating on the first plate emit pressure waves into the adjacent liquid, which excite Lamb waves on the second plate, which in turn cause pressure waves traveling inside the liquid back to the first plate, and so on. The Schlieren images prove this multi-reflection within the acoustic waveguide, which confirms former considerations and calculations based on the receiver signal. With this knowledge, the sensor concepts based on the acoustic waveguide sensor can be better interpreted. PMID:23447010
Multi reflection of Lamb wave emission in an acoustic waveguide sensor.
Schmitt, Martin; Olfert, Sergei; Rautenberg, Jens; Lindner, Gerhard; Henning, Bernd; Reindl, Leonhard Michael
2013-02-27
Recently, an acoustic waveguide sensor based on multiple mode conversion of surface acoustic waves at the solid-liquid interfaces has been introduced for the concentration measurement of binary and ternary mixtures, liquid level sensing, investigation of spatial inhomogeneities and bubble detection. In this contribution the sound wave propagation within this acoustic waveguide sensor is visualized by Schlieren imaging for continuous and burst operation for the first time. In the acoustic waveguide, the antisymmetric zero-order Lamb wave mode is excited by a single-phase transducer of 1 MHz on thin glass plates of 1 mm thickness. On contact with the investigated liquid, Lamb waves propagating on the first plate emit pressure waves into the adjacent liquid, which excite Lamb waves on the second plate, which in turn cause pressure waves traveling inside the liquid back to the first plate, and so on. The Schlieren images prove this multi-reflection within the acoustic waveguide, which confirms former considerations and calculations based on the receiver signal. With this knowledge, the sensor concepts based on the acoustic waveguide sensor can be better interpreted.
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information for tasks including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
Horie, Yu; Han, Seunghoon; Lee, Jeong-Yub; Kim, Jaekwan; Kim, Yongsung; Arbabi, Amir; Shin, Changgyun; Shi, Lilong; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Lee, Hong-Seok; Hwang, Sungwoo; Faraon, Andrei
2017-05-10
We report transmissive color filters based on subwavelength dielectric gratings that can replace conventional dye-based color filters used in backside-illuminated CMOS image sensor (BSI CIS) technologies. The filters are patterned in an 80 nm-thick polysilicon film on a 115 nm-thick SiO2 spacer layer. They are optimized for operating at the primary RGB colors, exhibit peak transmittance of 60-80%, and have an almost angle-insensitive response over a ±20° range. This technology enables shrinking of the pixel sizes down to about a micrometer.
Adaptive target binarization method based on a dual-camera system
NASA Astrophysics Data System (ADS)
Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing
2018-01-01
An adaptive target binarization method based on a dual-camera system containing two dynamic vision sensors is proposed. First, a denoising preprocessing step removes the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events using an event-mosaicking method. Third, the region of the target is confirmed by an event-to-event matching method. Finally, a postprocessing step of morphological opening and closing removes the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots, and compared with other well-known binarization methods. The experimental results, based on visual and misclassification error criteria, show that the proposed method performs well and is more robust in the binarization of degraded images.
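The postprocessing stage described above is a standard morphological cleanup. A minimal Python sketch, assuming the matched event map has already been rendered into a binary image (the array contents and kernel size here are hypothetical):

    import cv2
    import numpy as np

    # Hypothetical binary target map from event-to-event matching
    # (255 = target pixel, 0 = background).
    target = (np.random.rand(240, 320) > 0.9).astype(np.uint8) * 255

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    # Opening removes isolated mismatch artifacts; closing fills small holes.
    opened = cv2.morphologyEx(target, cv2.MORPH_OPEN, kernel)
    cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)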
Comparison of NDVI fields obtained from different remote sensors
NASA Astrophysics Data System (ADS)
Escribano Rodriguez, Juan; Alonso, Carmelo; Tarquis, Ana Maria; Benito, Rosa Maria; Hernandez Díaz-Ambrona, Carlos
2013-04-01
Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. In addition, the distribution and phenology of vegetation is largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and the Normalized Difference Vegetation Index, NDVI = (NIR - RED) / (NIR + RED), has been widely used. Given that the characteristics of the RED and NIR spectral bands vary distinctly from sensor to sensor, NDVI values based on data from different instruments are not directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolution on vegetation indices like the NDVI and their interpretation as a drought index. During 2012, three locations (at Salamanca, Granada and Córdoba) were selected, and periodic pasture monitoring and botanical composition surveys were carried out. Daily precipitation, temperature and monthly soil water content were measured, as well as fresh and dry pasture weight. At the same time, remote sensing images of the chosen places were captured by DEIMOS-1 and MODIS. DEIMOS-1 is based on the Microsat-100 concept from Surrey. It is conceived for obtaining Earth images with a resolution good enough to study terrestrial vegetation cover (20 x 20 m), although with a very wide field of view (600 km) in order to obtain those images with high temporal resolution at a reduced cost. By contrast, MODIS images have a much lower spatial resolution (500 x 500 m). The aim of this study is to establish a comparison between the NDVI values of two different sensors at different spatial resolutions. Acknowledgements: this work was partially supported by ENESA under project P10 0220C-823. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. MTM2009-14621 and i-MATH No. CSD2006-00032 is greatly appreciated.
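The index itself is a one-line computation once the RED and NIR bands are co-registered; a minimal sketch, assuming the band arrays and any resampling are prepared elsewhere:

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + eps)

    # Comparing sensors requires a common grid first, e.g. aggregating
    # 20 m DEIMOS-1 pixels into each 500 m MODIS footprint before
    # differencing the two NDVI fields.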
NASA Astrophysics Data System (ADS)
Chander, Gyanesh; Helder, Dennis L.; Malla, Rimy; Micijevic, Esad; Mettler, Cory J.
2007-09-01
The Landsat archive provides more than 35 years of uninterrupted multispectral remotely sensed data of Earth observations. Since 1972, Landsat missions have carried different types of sensors, from the Return Beam Vidicon (RBV) camera to the Enhanced Thematic Mapper Plus (ETM+). However, the Thematic Mapper (TM) sensors on Landsat 4 (L4) and Landsat 5 (L5), launched in 1982 and 1984 respectively, are the backbone of an extensive archive. Effective April 2, 2007, the radiometric calibration of L5 TM data processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) was updated to use an improved lifetime gain model, based on the instrument's detector response to pseudo-invariant desert site data and cross-calibration with the L7 ETM+. However, no modifications were ever made to the radiometric calibration procedure of the Landsat 4 (L4) TM data. The L4 TM radiometric calibration procedure has continued to use the Internal Calibrator (IC) based calibration algorithms and the post calibration dynamic ranges, as previously defined. To evaluate the "current" absolute accuracy of these two sensors, image pairs from the L5 TM and L4 TM sensors were compared. The number of coincident image pairs in the USGS EROS archive is limited, so the scene selection for the cross-calibration studies proved to be a challenge. Additionally, because of the lack of near-simultaneous images available over well-characterized and traditionally used calibration sites, alternate sites that have high reflectance, large dynamic range, high spatial uniformity, high sun elevation, and minimal cloud cover were investigated. The alternate sites were identified in Yuma, Iraq, Egypt, Libya, and Algeria. The cross-calibration approach involved comparing image statistics derived from large common areas observed eight days apart by the two sensors. This paper summarizes the average percent differences in reflectance estimates obtained between the two sensors. The work presented in this paper is a first step in understanding the current performance of L4 TM absolute calibration and potentially serves as a platform to revise and improve the radiometric calibration procedures implemented for the processing of L4 TM data.
Chander, G.; Helder, D.L.; Malla, R.; Micijevic, E.; Mettler, C.J.
2007-01-01
The Landsat archive provides more than 35 years of uninterrupted multispectral remotely sensed data of Earth observations. Since 1972, Landsat missions have carried different types of sensors, from the Return Beam Vidicon (RBV) camera to the Enhanced Thematic Mapper Plus (ETM+). However, the Thematic Mapper (TM) sensors on Landsat 4 (L4) and Landsat 5 (L5), launched in 1982 and 1984 respectively, are the backbone of an extensive archive. Effective April 2, 2007, the radiometric calibration of L5 TM data processed and distributed by the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) was updated to use an improved lifetime gain model, based on the instrument's detector response to pseudo-invariant desert site data and cross-calibration with the L7 ETM+. However, no modifications were ever made to the radiometric calibration procedure of the Landsat 4 (L4) TM data. The L4 TM radiometric calibration procedure has continued to use the Internal Calibrator (IC) based calibration algorithms and the post calibration dynamic ranges, as previously defined. To evaluate the "current" absolute accuracy of these two sensors, image pairs from the L5 TM and L4 TM sensors were compared. The number of coincident image pairs in the USGS EROS archive is limited, so the scene selection for the cross-calibration studies proved to be a challenge. Additionally, because of the lack of near-simultaneous images available over well-characterized and traditionally used calibration sites, alternate sites that have high reflectance, large dynamic range, high spatial uniformity, high sun elevation, and minimal cloud cover were investigated. The alternate sites were identified in Yuma, Iraq, Egypt, Libya, and Algeria. The cross-calibration approach involved comparing image statistics derived from large common areas observed eight days apart by the two sensors. This paper summarizes the average percent differences in reflectance estimates obtained between the two sensors. The work presented in this paper is a first step in understanding the current performance of L4 TM absolute calibration and potentially serves as a platform to revise and improve the radiometric calibration procedures implemented for the processing of L4 TM data.
Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors
Pagliari, Diana; Pinto, Livio
2015-01-01
In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows the acquisition of RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved to be an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize each imaging sensor. Experimental results show that the quality of the delivered model improves when the proposed calibration procedure is applied; the procedure is applicable to both point clouds and the mesh models created with the Microsoft Fusion Libraries. PMID:26528979
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.
2003-01-01
A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.
Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors.
Pagliari, Diana; Pinto, Livio
2015-10-30
In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows the acquisition of RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved to be an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize each imaging sensor. Experimental results show that the quality of the delivered model improves when the proposed calibration procedure is applied; the procedure is applicable to both point clouds and the mesh models created with the Microsoft Fusion Libraries.
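A minimal sketch of the kind of distance-dependent depth error model the paper estimates, with hypothetical calibration numbers (these are illustrative values, not the paper's results):

    import numpy as np

    # Hypothetical calibration data: target distance (m) vs. mean depth error (m).
    dist = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2])
    err = np.array([0.002, 0.004, 0.007, 0.011, 0.016, 0.022, 0.029])

    # Fit a low-order polynomial error model and use it as a depth correction.
    coeffs = np.polyfit(dist, err, deg=2)
    def corrected_depth(d):
        return d - np.polyval(coeffs, d)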
Parmaksızoğlu, Selami; Alçı, Mustafa
2011-01-01
Cellular Neural Networks (CNNs) have recently been widely used in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware-based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of a desired CNN behavior such as edge detection can be achieved by correctly setting the cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing the cloning template is one of the most important research topics in this field. In this study, the edge detection process, used as a preliminary step in segmentation, identification and coding applications, is carried out using CNN structures. In order to design the cloning template of the goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm inspired by the foraging behavior of honeybees is used, and the performance of ABC for this application is examined over multiple runs. The CNN template generated by the ABC algorithm is tested on artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods and with other CNN-based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than the other methods.
Parmaksızoğlu, Selami; Alçı, Mustafa
2011-01-01
Cellular Neural Networks (CNNs) have recently been widely used in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware-based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of a desired CNN behavior such as edge detection can be achieved by correctly setting the cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing the cloning template is one of the most important research topics in this field. In this study, the edge detection process, used as a preliminary step in segmentation, identification and coding applications, is carried out using CNN structures. In order to design the cloning template of the goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm inspired by the foraging behavior of honeybees is used, and the performance of ABC for this application is examined over multiple runs. The CNN template generated by the ABC algorithm is tested on artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods and with other CNN-based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than the other methods. PMID:22163903
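A compressed sketch of an ABC-style template search. The CNN is reduced here to a convolution-plus-threshold stand-in for the true nonlinear state dynamics, and all parameters (population size, search range, fitness definition) are illustrative assumptions, not the paper's settings:

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)

    def cnn_edge_output(img, template):
        # Crude stand-in for the CNN steady state: convolve the image with
        # the 3x3 template and threshold (the real model integrates a
        # nonlinear state equation with feedback).
        return (convolve2d(img, template.reshape(3, 3), mode="same") > 0).astype(float)

    def fitness(template, img, ref_edges):
        return 1.0 / (1.0 + np.abs(cnn_edge_output(img, template) - ref_edges).mean())

    def abc_template_search(img, ref_edges, n=20, iters=200, limit=10):
        foods = rng.uniform(-4, 4, (n, 9))          # candidate cloning templates
        fits = np.array([fitness(f, img, ref_edges) for f in foods])
        trials = np.zeros(n, dtype=int)
        for _ in range(iters):
            # Employed and onlooker bees: perturb one template coefficient
            # relative to a random neighbour; keep greedy improvements.
            onlookers = rng.choice(n, n, p=fits / fits.sum())
            for i in np.concatenate([np.arange(n), onlookers]):
                k, j = rng.integers(n), rng.integers(9)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                f = fitness(cand, img, ref_edges)
                if f > fits[i]:
                    foods[i], fits[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
            # Scout bees: abandon food sources that stopped improving.
            worn = trials > limit
            foods[worn] = rng.uniform(-4, 4, (worn.sum(), 9))
            trials[worn] = 0
            fits[worn] = [fitness(f, img, ref_edges) for f in foods[worn]]
        return foods[np.argmax(fits)]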
Evaluating sensor linearity of chosen infrared sensors
NASA Astrophysics Data System (ADS)
Walczykowski, P.; Orych, A.; Jenerowicz, A.; Karcz, P.
2014-11-01
The paper describes a series of experiments conducted as part of the IRAMSWater project, the aim of which is to establish methodologies for detecting and identifying pollutants in water bodies using aerial imagery data. The main idea is based on the hypothesis that it is possible to identify certain types of physical, biological and chemical pollutants based on their spectral reflectance characteristics. Knowledge of these spectral curves is then used to determine very narrow spectral bands in which the greatest reflectance variations occur between these pollutants. A frame camera is then equipped with a band-pass filter, which allows only the selected bandwidth to be registered. In order to obtain reliable reflectance data straight from the images, the team at the Military University of Technology had developed a methodology for determining the necessary acquisition parameters for the sensor (integration time and f-stop, depending on the distance from the scene and its illumination). This methodology, however, is based on the assumption that the imaging sensors have a linear response. This paper shows the results of experiments used to evaluate this linearity.
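A linearity check of this kind reduces to regressing mean signal against exposure and inspecting the residuals; a sketch with hypothetical numbers:

    import numpy as np

    # Hypothetical measurements: integration time (ms) vs. mean image DN
    # for a fixed, stable scene illumination.
    t_int = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    dn = np.array([102.0, 205.0, 411.0, 820.0, 1652.0, 3275.0])

    slope, offset = np.polyfit(t_int, dn, 1)
    pred = slope * t_int + offset
    r2 = 1 - ((dn - pred) ** 2).sum() / ((dn - dn.mean()) ** 2).sum()
    peak_nonlinearity = np.abs(dn - pred).max() / dn.max()  # fraction of full scale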
Kašalynas, Irmantas; Venckevičius, Rimvydas; Minkevičius, Linas; Sešek, Aleksander; Wahaia, Faustino; Tamošiūnas, Vincas; Voisiat, Bogdan; Seliuta, Dalius; Valušis, Gintaras; Švigelj, Andrej; Trontelj, Janez
2016-01-01
A terahertz (THz) imaging system based on narrow-band microbolometer sensors (NBMS) and a novel diffractive lens was developed for spectroscopic microscopy applications. The frequency response characteristics of the THz antenna-coupled NBMS were determined employing Fourier transform spectroscopy. The NBMS was found to be a very sensitive frequency-selective sensor, which was used to develop a compact all-electronic system for multispectral THz measurements. This system was successfully applied to principal component analysis of optically opaque packed samples. A thin diffractive lens with a numerical aperture of 0.62 was proposed for the reduction of system dimensions. The THz imaging system enhanced with the novel optics was used to image non-neoplastic and neoplastic human colon tissues for the first time, with close to wavelength-limited spatial resolution at 584 GHz. The results demonstrate the new potential of compact room-temperature THz imaging systems in the fields of spectroscopic materials analysis and medical diagnostics. PMID:27023551
Scintillating Quantum Dots for Imaging X-Rays (SQDIX) for Aircraft Inspection
NASA Technical Reports Server (NTRS)
Burke, E. R.; DeHaven, S. L.; Williams, P. A.
2015-01-01
Scintillation is the process currently employed by conventional X-ray detectors to create X-ray images. Scintillating quantum dots (StQDs), or nano-crystals, are novel, nanometer-scale materials that, upon excitation by X-rays, re-emit the absorbed energy as visible light. StQDs theoretically have higher output efficiency than conventional scintillating materials and are more environmentally friendly. This paper presents the characterization of several critical elements of StQD use, performed along a path toward widespread X-ray imaging with this technology. Initial work on the scintillating quantum dots for imaging X-rays (SQDIX) system has shown great promise for creating state-of-the-art sensors using StQDs as a sensor material. In addition, this work demonstrates a high degree of promise for StQDs in microstructured fiber optics. Using the microstructured fiber as a light guide could greatly increase the capture efficiency of a StQD-based imaging sensor.
The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; MacKay, Craig
2014-08-01
Many adaptive optics (AO) systems in use today require the use of bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann Wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear Curvature Wavefront Sensor (nlCWFS) offering improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on the nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high speed reconstruction algorithm required for the nlCWFS technique. This includes strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads to obtain a prior for a rapid convergence of the wavefront reconstruction. Finally we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon count strategies.
Sensor-oriented feature usability evaluation in fingerprint segmentation
NASA Astrophysics Data System (ADS)
Li, Ying; Yin, Yilong; Yang, Gongping
2013-06-01
Existing fingerprint segmentation methods usually process fingerprint images captured by different sensors with the same feature or feature set. We propose to improve fingerprint segmentation results in view of an important fact: images from different sensors have different characteristics for segmentation. Feature usability evaluation means evaluating the usability of features in order to find a personalized feature or feature set for each sensor and thereby improve segmentation performance. The need for feature usability evaluation in fingerprint segmentation is raised and analyzed as a new issue. To address this issue, we present a decision-tree-based feature-usability evaluation method, which utilizes the C4.5 decision tree algorithm to evaluate and pick the most suitable feature or feature set for fingerprint segmentation from a typical candidate feature set. We apply the novel method to the FVC2002 database of fingerprint images, which were acquired by four different sensors and technologies. Experimental results show that the accuracy of segmentation is improved, and the time consumed by feature extraction is dramatically reduced with the selected feature(s).
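scikit-learn's entropy-based tree is a reasonable stand-in for C4.5 in a sketch of the evaluation idea; the features and labels below are hypothetical per-block measurements, not the paper's data:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    feature_names = ["mean", "variance", "coherence"]
    X = rng.random((500, 3))                  # per-block features for one sensor
    y = (X[:, 1] > 0.5).astype(int)           # toy labels: 1 = fingerprint region

    # Entropy splits approximate C4.5's information-gain criterion.
    tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X, y)
    for name, score in zip(feature_names, tree.feature_importances_):
        print(f"{name}: usability {score:.2f}")   # keep the top-ranked feature(s)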
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor (CMOS) sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of the uniform light from an integrating sphere. The linear range of the radiometric response, the non-linearity characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses to the same illuminance. Vignetting artifact patterns have been measured. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the scene luminance more faithfully. This compensates for the limitations of stitching images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be overcome by using multiple cameras covering a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
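The vignetting and per-channel response differences reported above are exactly what a dark/flat calibration pair corrects; a minimal per-channel sketch, assuming the calibration images are available:

    import numpy as np

    def flat_field_correct(raw, flat, dark):
        """Correct vignetting and response non-uniformity for one channel.

        raw:  image to correct; flat: image of a uniform source (e.g. an
        integrating sphere); dark: image with no illumination. All float.
        """
        gain = flat - dark
        gain = gain / gain.mean()
        return (raw - dark) / np.clip(gain, 1e-6, None)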
Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir
2013-03-01
Inexpensive millimeter-wavelength (MMW) optical digital imaging raises the challenge of evaluating imaging performance and image quality, because the electromagnetic wavelengths and pixel sensor sizes are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performance of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by noise, and that the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, provided that noise reduction is achieved. A demonstration of MMW image improvement through oversampling is presented.
Statistical lamb wave localization based on extreme value theory
NASA Astrophysics Data System (ADS)
Harley, Joel B.
2018-04-01
Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet it is often unclear whether this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant false alarm rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
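A minimal delay-and-sum sketch of the kind of image the statistical framework builds on; the geometry, sampling rate and wave speed are placeholder assumptions:

    import numpy as np

    def delay_and_sum(signals, fs, pairs, grid, velocity):
        """Minimal delay-and-sum damage image for guided waves.

        signals: (n_pairs, n_samples) baseline-subtracted responses
        pairs:   (n_pairs, 2, 2) transmitter/receiver xy coordinates
        grid:    (n_pix, 2) candidate damage locations
        """
        img = np.zeros(len(grid))
        for sig, (tx, rx) in zip(signals, pairs):
            d = np.linalg.norm(grid - tx, axis=1) + np.linalg.norm(grid - rx, axis=1)
            idx = np.clip(np.round(d / velocity * fs).astype(int), 0, sig.size - 1)
            img += np.abs(sig[idx])   # an envelope (e.g. Hilbert) is typical here
        return img                    # extreme value theory would then threshold this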
Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors.
Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo
2017-01-22
The radiometric capability of on-orbit sensors should be updated in a timely manner due to changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated only once a year based on a vicarious calibration procedure, which has limited the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed by the trend of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 (HJ-1) satellite, as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 (FY-3) satellite, are evaluated in the reflective bands based on this procedure. The results are reasonable and thus provide a reliable reference for the sensors' application, which will promote the use of Chinese satellite data.
BCB Bonding Technology of Back-Side Illuminated CMOS Device
NASA Astrophysics Data System (ADS)
Wu, Y.; Jiang, G. Q.; Jia, S. X.; Shi, Y. M.
2018-03-01
Back-side illuminated CMOS (BSI) sensors are key devices in spaceborne hyperspectral imaging technology. Compared with traditional devices, the path of the incident light is simplified and the spectral response is flattened in BSI sensors, which meets the requirements of quantitative hyperspectral imaging applications. Wafer bonding is the basic technology and a key process in the fabrication of BSI sensors. A 6-inch bond between a CMOS wafer and a glass wafer was fabricated, based on the low bonding temperature and high stability of BCB. The influence of different BCB thicknesses on bonding strength was studied. Wafer bonding with high strength, high stability and no bubbles was achieved by adjusting the bonding conditions.
A 4MP high-dynamic-range, low-noise CMOS image sensor
NASA Astrophysics Data System (ADS)
Ma, Cheng; Liu, Yang; Li, Jing; Zhou, Quan; Chang, Yuchun; Wang, Xinyang
2015-03-01
In this paper we present a 4-megapixel high-dynamic-range, low-dark-noise and low-dark-current CMOS image sensor, which is ideal for high-end scientific and surveillance applications. The pixel design is based on a 4-T PPD structure. During the readout of the pixel array, signals are first amplified and then fed to a low-power column-parallel ADC array, which was presented in [1]. Measurement results show that the sensor achieves a dynamic range of 96 dB and a dark noise of 1.47 e- at 24 fps. The dark current is 0.15 e-/pixel/s at -20 °C.
Panoramic thermal imaging: challenges and tradeoffs
NASA Astrophysics Data System (ADS)
Aburmad, Shimon
2014-06-01
Over the past decade, we have witnessed a growing demand for electro-optical systems that can provide continuous 360° coverage. Applications such as perimeter security, autonomous vehicles, and military warning systems are a few of the most common applications for panoramic imaging. There are several different technological approaches for achieving panoramic imaging. Solutions based on rotating elements do not provide continuous coverage, as there is a time lag between updates. Continuous panoramic solutions either use "stitched" images from multiple adjacent sensors, or sophisticated optical designs which warp a panoramic view onto a single sensor. When dealing with panoramic imaging in the visible spectrum, high-volume production and the advancement of semiconductor technology have enabled the use of CMOS/CCD image sensors with a huge number of pixels, small pixel dimensions, and low cost. However, in the infrared spectrum, the growth of detector pixel counts, pixel size reduction, and cost reduction are taking place at a slower rate due to the complexity of the technology and limitations imposed by the laws of physics. In this work, we will explore the challenges involved in achieving 360° panoramic thermal imaging, and will analyze aspects such as spatial resolution, FOV, data complexity, FPA utilization, system complexity, coverage and cost of the different solutions. We will provide illustrations, calculations, and tradeoffs between three solutions evaluated by Opgal: a unique 360° lens design using an LWIR XGA detector, stitching of three adjacent LWIR sensors equipped with low-distortion 120° lenses, and a fisheye lens with an HFOV of 180° and an XGA sensor.
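The spatial resolution tradeoff between these options reduces to a per-pixel angular budget; a back-of-envelope sketch, assuming 1024 horizontal pixels per XGA detector:

    import math

    def ifov_mrad(fov_deg, n_pixels):
        """Per-pixel instantaneous field of view, in milliradians."""
        return math.radians(fov_deg) / n_pixels * 1e3

    # Assuming 1024 horizontal pixels per XGA detector:
    single_360_lens = ifov_mrad(360, 1024)   # one detector, 360 deg lens: ~6.1 mrad
    stitched_3x120 = ifov_mrad(120, 1024)    # three detectors, 120 deg lenses: ~2.0 mrad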
A Forest Fire Sensor Web Concept with UAVSAR
NASA Astrophysics Data System (ADS)
Lou, Y.; Chien, S.; Clark, D.; Doubleday, J.; Muellerschoen, R.; Zheng, Y.
2008-12-01
We developed a forest fire sensor web concept with a UAVSAR-based smart sensor and onboard automated response capability that will allow us to monitor fire progression based on coarse initial information provided by an external source. This autonomous disturbance detection and monitoring system combines the unique capabilities of imaging radar with high-throughput onboard processing technology and onboard automated response capability based on specific science algorithms. In this forest fire sensor web scenario, a fire is initially located by MODIS/RapidFire or a ground-based fire observer. This information is transmitted to the UAVSAR onboard automated response system (CASPER). CASPER generates a flight plan to cover the alerted fire area and executes the flight plan. The onboard processor generates the fuel load map from raw radar data, which, used with wind and elevation information, predicts the likely fire progression. CASPER then autonomously alters the flight plan to track the fire progression, providing this information to the firefighting team on the ground. We can also relay the precise fire location to other remote sensing assets with autonomous response capability, such as Earth Observing-1 (EO-1)'s hyperspectral imager, to acquire the fire data.
System Characterization Results for the QuickBird Sensor
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Ross, Kenton; Blonski, Slawomir
2007-01-01
An overall system characterization was performed on several DigitalGlobe QuickBird image products by the NASA Applied Research & Technology Project Office (formerly the Applied Sciences Directorate) at the John C. Stennis Space Center. This system characterization incorporated geopositional accuracy assessments, a spatial resolution assessment, and a radiometric calibration assessment. Geopositional assessments of standard georeferenced multispectral products were obtained using an array of accurately surveyed geodetic targets evenly spaced throughout a scene. Geopositional accuracy was calculated in terms of circular error. Spatial resolution of QuickBird panchromatic imagery was characterized based on edge response measurements using edge targets and the tilted-edge technique. Relative edge response was estimated as the geometric mean of normalized edge response differences measured in two directions of image pixels at points distanced from the edge by -0.5 and 0.5 ground sample distance. A reflectance-based vicarious calibration approach, based on ground measurements and radiative transfer calculations, was used to estimate at-sensor radiance. These values were compared with those measured by the sensor to determine the sensor's radiometric accuracy. All imagery analyzed was acquired between fall 2005 and spring 2006. These characterization results were compared with previous years' results to identify any temporal drifts or trends.
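The relative edge response metric described above can be written directly from its definition; a sketch where esf_x and esf_y are assumed to be measured edge-spread functions parameterized in units of ground sample distance:

    import numpy as np

    def relative_edge_response(esf_x, esf_y):
        """Geometric mean of edge-response differences at +/-0.5 GSD.

        esf_x, esf_y: callables mapping distance from the edge (in ground
        sample distances) to normalized edge response, for two directions.
        """
        dx = esf_x(0.5) - esf_x(-0.5)
        dy = esf_y(0.5) - esf_y(-0.5)
        return np.sqrt(dx * dy)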
Phase-sensitive two-dimensional neutron shearing interferometer and Hartmann sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kevin
2015-12-08
A neutron imaging system detects both the phase shift and absorption of neutrons passing through an object. The neutron imaging system is based on either of two different neutron wavefront sensor techniques: 2-D shearing interferometry and Hartmann wavefront sensing. Both approaches measure an entire two-dimensional neutron complex field, including its amplitude and phase. Each measures the full-field, two-dimensional phase gradients and, concomitantly, the two-dimensional amplitude mapping, requiring only a single measurement.
NASA Astrophysics Data System (ADS)
Harney, Robert C.
1997-03-01
A novel methodology offering the potential for resolving two of the significant problems of implementing multisensor target recognition systems, i.e., the rational selection of a specific sensor suite and optimal allocation of requirements among sensors, is presented. Based on a sequence of conjectures (and their supporting arguments) concerning the relationship of extractable information content to recognition performance of a sensor system, a set of heuristics (essentially a reformulation of Johnson's criteria applicable to all sensor and data types) is developed. An approach to quantifying the information content of sensor data is described. Coupling this approach with the widely accepted Johnson's criteria for target recognition capabilities results in a quantitative method for comparing the target recognition ability of diverse sensors (imagers, nonimagers, active, passive, electromagnetic, acoustic, etc.). Extension to describing the performance of multiple sensors is straightforward. The application of the technique to sensor selection and requirements allocation is discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...
Kumar, Anand T N; Rice, William L; López, Jessica C; Gupta, Suresh; Goergen, Craig J; Bogdanov, Alexei A
2016-04-22
Enzymatic activity sensing in fluorescence lifetime (FLT) mode with "self-quenched" macromolecular near-infrared (NIR) sensors is a highly promising strategy for in vivo imaging of proteolysis. However, the mechanisms of FLT changes in such substrate-based NIR sensors have not yet been studied. We synthesized two types of sensors by linking the near-infrared fluorophore IRDye 800CW to macromolecular graft copolymers of methoxy polyethylene glycol and polylysine (MPEG-gPLL) with varying degrees of MPEGylation, and studied their fragmentation induced by trypsin, elastase, plasmin and cathepsins (B, S, L, K). We determined that the efficiency of such NIR sensors in FLT mode depends on sensor composition. While MPEG-gPLL with a high degree of MPEGylation showed a rapid (τ1/2 = 0.1-0.2 min) FLT increase (Δτ = 0.25 ns) upon model proteinase-mediated hydrolysis in vivo, lower MPEGylation density resulted in no such FLT increase. The temperature dependence of fluorescence de-quenching of the NIR sensors pointed to a mixed dynamic/static quenching mode of the MPEG-gPLL-linked fluorophores. We further demonstrated that although the bulk of sensor-linked fluorophores were de-quenched due to the elimination of static quenching, proteolysis-mediated deletion of a fraction of short (8-10 kDa) negatively charged fragments of the highly MPEGylated NIR sensor is the most likely event leading to the rapid FLT increase in quenched NIR sensors. Therefore, the optimization of "built-in" dynamic quenching elements of macromolecular NIR sensors is a potential avenue for improving their response in FLT mode.
Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system
NASA Astrophysics Data System (ADS)
Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen
2018-02-01
In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW is shown to be highly integrable into customized applications, such as an adaptive optics system (AOS), for real-time wavefront measurement. Further, a Camera Link frame grabber with an embedded FPGA is adopted to enhance the sensor's response speed, taking advantage of the interface's high data transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image data transmission synchronize with the wavefront reconstruction. In addition, we designed a mechanism to control the deformable mirror in the same FPGA and verified the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic rate, limited by the camera frame rate, while leaving 40% of the logic slices for additional flexible designs.
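The per-subaperture computation that such parallel blocks implement is spot centroiding; a serial reference sketch in Python (the subaperture grid and window size are assumptions):

    import numpy as np

    def shws_slopes(frame, corners, win=16):
        """Centroid each subaperture spot; offsets give local wavefront slopes.

        frame: 2-D intensity image; corners: (row, col) of each subaperture.
        """
        ys, xs = np.mgrid[0:win, 0:win]
        slopes = []
        for r, c in corners:
            sub = frame[r:r + win, c:c + win].astype(float)
            sub = np.clip(sub - 0.5 * sub.mean(), 0, None)  # crude background cut
            m = sub.sum() + 1e-9
            cy = (sub * ys).sum() / m - (win - 1) / 2
            cx = (sub * xs).sum() / m - (win - 1) / 2
            slopes.append((cy, cx))     # proportional to local wavefront tilt
        return np.array(slopes)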
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors.
Ge, Xiaoliang; Theuwissen, Albert J P
2018-02-27
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the readout path. Such a readout topology, however, operates in a non-stationary large-signal regime, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models.
Temporal Noise Analysis of Charge-Domain Sampling Readout Circuits for CMOS Image Sensors
Theuwissen, Albert J. P.
2018-01-01
This paper presents a temporal noise analysis of charge-domain sampling readout circuits for Complementary Metal-Oxide Semiconductor (CMOS) image sensors. In order to address the trade-off between low input-referred noise and high dynamic range, a Gm-cell-based pixel together with a charge-domain correlated double sampling (CDS) technique has been proposed to provide a way to efficiently embed a tunable conversion gain along the readout path. Such a readout topology, however, operates in a non-stationary large-signal regime, and the statistical properties of its temporal noise are a function of time. Conventional noise analysis methods for CMOS image sensors are based on steady-state signal models and therefore cannot be readily applied to Gm-cell-based pixels. In this paper, we develop analysis models for both thermal noise and flicker noise in Gm-cell-based pixels by employing the time-domain linear analysis approach and non-stationary noise analysis theory, which help to quantitatively evaluate the temporal noise characteristics of Gm-cell-based pixels. Both models were numerically computed in MATLAB using design parameters of a prototype chip, and compared with both simulation and experimental results. The good agreement between the theoretical and measurement results verifies the effectiveness of the proposed noise analysis models. PMID:29495496
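The non-stationary character of the integrated thermal noise can be seen in a toy Monte Carlo: white current noise integrated on a capacitor yields an output variance that grows linearly with integration time, unlike a steady-state noise figure. All component values below are illustrative assumptions, not the prototype's parameters:

    import numpy as np

    k, T = 1.38e-23, 300.0
    gm, C, gamma = 1e-4, 50e-15, 2.0 / 3.0    # hypothetical Gm-cell values
    fs, t_int = 1e9, 2e-6                     # simulation rate, integration time
    n = int(t_int * fs)

    si = 4 * k * T * gamma * gm               # one-sided current-noise PSD, A^2/Hz
    i_n = np.random.randn(2000, n) * np.sqrt(si * fs / 2)   # white-noise samples
    v_out = np.cumsum(i_n, axis=1) / (fs * C)               # integrate onto C
    var_vs_time = v_out.var(axis=0)           # grows ~linearly: non-stationary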
Multispectral Filter Arrays: Recent Advances and Practical Implementation
Lapray, Pierre-Jean; Wang, Xingbo; Thomas, Jean-Baptiste; Gouton, Pierre
2014-01-01
Thanks to technical progress in interference filter design based on different technologies, we can finally successfully implement the concept of multispectral filter array-based sensors. This article provides the relevant state of the art for multispectral imaging systems and presents the characteristics of the elements of our multispectral sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared parts of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation. PMID:25407904
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on a single-pixel correlation image sensor is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement in the reconstructions.
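One common formulation of the 'four bucket principle' recovers phase, and hence depth, from four demodulation samples; a sketch, where the modulation frequency and bucket values are assumed inputs:

    import numpy as np

    def four_bucket_depth(a0, a90, a180, a270, f_mod):
        """Depth from correlation samples at 0/90/180/270 degree phases."""
        c = 3e8
        phase = np.arctan2(a270 - a90, a0 - a180) % (2 * np.pi)
        return c * phase / (4 * np.pi * f_mod)   # unambiguous up to c/(2*f_mod)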
Sensors integration for smartphone navigation: performances and future challenges
NASA Astrophysics Data System (ADS)
Aicardi, I.; Dabove, P.; Lingua, A.; Piras, M.
2014-08-01
Nowadays, modern smartphones include several sensors that are usually adopted in geomatics applications, such as a digital camera, GNSS (Global Navigation Satellite System) receivers, an inertial platform, RFID and Wi-Fi systems. In this paper the authors test the performance of the internal sensors (Inertial Measurement Unit, IMU) of three modern smartphones (Samsung Galaxy S4, Samsung Galaxy S5 and iPhone 4) against an external mass-market IMU platform in order to verify their accuracy levels in terms of positioning. Moreover, the Image Based Navigation (IBN) approach is also investigated: this approach can be very useful in harsh urban environments or for indoor positioning, as an alternative to GNSS positioning. IBN allows a sub-metric accuracy to be obtained, but a special database of georeferenced images (Image DataBase, IDB) is needed; moreover, it is necessary to use dedicated algorithms to resize the images collected by the smartphone in order to share them with the server where the IDB is stored. It is also necessary to characterize the smartphone camera lens in terms of focal length and lens distortion. The authors have developed an innovative method with respect to those available today, which has been tested in a covered area, adopting a special support on which all sensors under test were installed. Geomatics instruments were used to define the reference trajectory, in order to compare it with the path obtained with the IBN solution. First results show horizontal and vertical accuracies better than 60 cm with respect to the reference trajectories. The IBN method, sensors, tests and results are described in the paper.
Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.
Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin
2017-09-01
In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and a bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases of SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPN of different TDI-CISs while maintaining image details, without any auxiliary equipment.
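A single-frame, whole-image version of moment matching shows the core operation; the paper's improvement adds the SCF/BF pre-filtering and a moving window on top of this:

    import numpy as np

    def column_moment_match(img):
        """Match each column's mean/std to the global moments (CFPN removal).

        The same operation along the other axis would target row FPN.
        """
        img = img.astype(float)
        g_mean, g_std = img.mean(), img.std()
        c_mean = img.mean(axis=0, keepdims=True)
        c_std = img.std(axis=0, keepdims=True) + 1e-9
        return (img - c_mean) / c_std * g_std + g_mean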
Assessment of COTS IR image simulation tools for ATR development
NASA Astrophysics Data System (ADS)
Seidel, Heiko; Stahl, Christoph; Bjerkeli, Frode; Skaaren-Fystro, Paal
2005-05-01
Following the tendency of increased use of imaging sensors in military aircraft, future fighter pilots will need onboard artificial intelligence e.g. ATR for aiding them in image interpretation and target designation. The European Aeronautic Defence and Space Company (EADS) in Germany has developed an advanced method for automatic target recognition (ATR) which is based on adaptive neural networks. This ATR method can assist the crew of military aircraft like the Eurofighter in sensor image monitoring and thereby reduce the workload in the cockpit and increase the mission efficiency. The EADS ATR approach can be adapted for imagery of visual, infrared and SAR sensors because of the training-based classifiers of the ATR method. For the optimal adaptation of these classifiers they have to be trained with appropriate and sufficient image data. The training images must show the target objects from different aspect angles, ranges, environmental conditions, etc. Incomplete training sets lead to a degradation of classifier performance. Additionally, ground truth information i.e. scenario conditions like class type and position of targets is necessary for the optimal adaptation of the ATR method. In Summer 2003, EADS started a cooperation with Kongsberg Defence & Aerospace (KDA) from Norway. The EADS/KDA approach is to provide additional image data sets for training-based ATR through IR image simulation. The joint study aims to investigate the benefits of enhancing incomplete training sets for classifier adaptation by simulated synthetic imagery. EADS/KDA identified the requirements of a commercial-off-the-shelf IR simulation tool capable of delivering appropriate synthetic imagery for ATR development. A market study of available IR simulation tools and suppliers was performed. After that the most promising tool was benchmarked according to several criteria e.g. thermal emission model, sensor model, targets model, non-radiometric image features etc., resulting in a recommendation. The synthetic image data that are used for the investigation are generated using the recommended tool. Within the scope of this study, ATR performance on IR imagery using classifiers trained on real, synthetic and mixed image sets was evaluated. The performance of the adapted classifiers is assessed using recorded IR imagery with known ground-truth and recommendations are given for the use of COTS IR image simulation tools for ATR development.
Wide-field microscopy using microcamera arrays
NASA Astrophysics Data System (ADS)
Marks, Daniel L.; Youn, Seo Ho; Son, Hui S.; Kim, Jungsang; Brady, David J.
2013-02-01
A microcamera is a relay lens paired with an image sensor. Microcameras are grouped into arrays to relay overlapping views of a single large surface to the sensors, forming a continuous synthetic image. The imaged surface may be curved or irregular, as each camera may independently and dynamically be focused to a different depth. Microcamera arrays are akin to microprocessors in supercomputers in that both join individual processors via an optoelectronic routing fabric to increase capacity and performance. A microcamera may image ten or more megapixels and be grouped into an array of several hundred, as has already been demonstrated by the DARPA AWARE Wide-Field program with multiscale gigapixel photography. We adapt gigapixel microcamera array architectures to wide-field microscopy of irregularly shaped surfaces, imaging areas of over 1000 square millimeters at resolutions of 3 microns or better in a single snapshot. The system includes a novel relay design, a sensor electronics package, and an FPGA-based networking fabric. Biomedical applications include screening for skin lesions, wide-field and resolution-agile microsurgical imaging, and microscopic cytometry of millions of cells performed in situ.
SWIR hyperspectral imaging detector for surface residues
NASA Astrophysics Data System (ADS)
Nelson, Matthew P.; Mangold, Paul; Gomer, Nathaniel; Klueva, Oksana; Treado, Patrick
2013-05-01
ChemImage has developed a SWIR hyperspectral imaging (HSI) sensor which uses hyperspectral imaging for wide-area surveillance and standoff detection of surface residues. Existing detection technologies often require close proximity for sensing or detecting, endangering operators and costly equipment. Furthermore, most existing sensors do not support autonomous, real-time, mobile-platform-based detection of threats. The SWIR HSI sensor provides real-time standoff detection of surface residues, with wide-area surveillance and HSI capability enabled by liquid crystal tunable filter technology. Easy-to-use detection software with a simple, intuitive user interface produces automated alarms and a real-time display of threat presence and type. The system has the potential to be used for the detection of a variety of threats, including chemicals and illicit drug substances, and allows for easy updates in the field for detection of new hazardous materials. SWIR HSI technology could be used by law enforcement for standoff screening of suspicious locations and vehicles in pursuit of illegal labs, or by combat engineers to support route-clearance applications, ultimately to save the lives of soldiers and civilians. In this paper, results from a SWIR HSI sensor will be discussed, including detection of various materials in bulk form as well as residue amounts on vehicles, people and other surfaces.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2015-02-09
CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor-based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH evaluation was acquired by visual readout. Further, the CMOS image sensor captures the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH range of a solution. A chromaticity diagram and Euclidean distance represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors in situ. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner. Copyright © 2014 Elsevier B.V. All rights reserved.
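The hue-distance classification step can be sketched in a few lines; the reference hue table below is hypothetical, standing in for values calibrated from the hydrogel film:

    import colorsys
    import numpy as np

    # Hypothetical hues (0..1) calibrated from film photos at known pH.
    ref_hue = {1: 0.98, 4: 0.08, 7: 0.33, 10: 0.55, 14: 0.72}

    def estimate_ph(rgb):
        """Nearest calibrated pH by circular hue distance from a mean RGB."""
        h, _, _ = colorsys.rgb_to_hsv(*(np.asarray(rgb, float) / 255.0))
        dist = lambda hr: min(abs(h - hr), 1.0 - abs(h - hr))
        return min(ref_hue, key=lambda ph: dist(ref_hue[ph]))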
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
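Photon event centroiding of the kind mentioned above reduces to labeling above-threshold events and computing intensity-weighted centers; a minimal sketch:

    import numpy as np
    from scipy.ndimage import label, center_of_mass

    def photon_event_centroids(frame, threshold):
        """Sub-pixel centroids of isolated single-photon events in one frame."""
        labels, n = label(frame > threshold)
        return center_of_mass(frame, labels, index=range(1, n + 1))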
Sparsity-based multi-height phase recovery in holographic microscopy
NASA Astrophysics Data System (ADS)
Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan
2016-11-01
High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of the specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples, with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field of view of ~20 mm2 using 2 in-line holograms acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving, e.g., multiple illumination angles or wavelengths, to increase the throughput and speed of coherent imaging.
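A condensed sketch of multi-height phase retrieval with a wavelet-sparsity step, assuming measured hologram amplitudes, an angular-spectrum propagator and PyWavelets; the structure and parameters are illustrative, not the authors' exact algorithm:

    import numpy as np
    import pywt

    def propagate(field, wl, dz, dx):
        """Angular-spectrum propagation of a complex field by distance dz."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, dx)
        fy = np.fft.fftfreq(ny, dx)
        fx2, fy2 = np.meshgrid(fx ** 2, fy ** 2)
        arg = np.maximum(1 - wl ** 2 * (fx2 + fy2), 0)
        h = np.exp(2j * np.pi * dz / wl * np.sqrt(arg))
        return np.fft.ifft2(np.fft.fft2(field) * h)

    def recover(amps, zs, wl, dx, iters=30, thr=0.05):
        """Multi-height Gerchberg-Saxton loop with wavelet soft-thresholding."""
        obj = amps[0].astype(complex)                  # object-plane estimate
        for _ in range(iters):
            for amp, z in zip(amps, zs):
                f = propagate(obj, wl, z, dx)          # to sensor plane
                f = amp * np.exp(1j * np.angle(f))     # keep phase, fix amplitude
                obj = propagate(f, wl, -z, dx)         # back to object plane
            for part in (obj.real, obj.imag):          # sparsity step
                cs = pywt.wavedec2(part, "db4", level=3)
                cs = [cs[0]] + [tuple(pywt.threshold(c, thr, "soft") for c in lv)
                                for lv in cs[1:]]
                rec = pywt.waverec2(cs, "db4")
                part[...] = rec[:part.shape[0], :part.shape[1]]
        return obj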
NASA Astrophysics Data System (ADS)
Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.
2012-04-01
In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens coupled to electronic devices such as charge-coupled devices (CCDs), thin-film transistors (TFTs) and complementary metal oxide semiconductor (CMOS) flat-panel imagers have been introduced for general radiography, dental, mammography and non-destructive testing (NDT) applications. Recently, large-area CMOS active-pixel sensors (APS) in combination with scintillation films have been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and columnar-structured CsI:Tl scintillation screens with various thicknesses were fabricated and used as materials to convert X-rays into visible light. These scintillating screens were directly coupled to a CMOS flat-panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high-spatial-resolution acquisition. We used a W/Al mammographic X-ray source at a 30 kVp energy condition. The imaging characteristics of the X-ray detector were measured and analyzed in terms of linearity with incident X-ray dose, modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum efficiency (DQE).
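The MTF measurement can be illustrated with the standard edge method (a generic sketch, not the authors' code): differentiate an edge profile into a line spread function and take its normalized Fourier magnitude.

```python
# Presampled MTF estimate from a 1-D edge scan across the detector.
import numpy as np

def mtf_from_edge(edge_profile: np.ndarray, pixel_pitch_mm: float):
    """Return (spatial frequency in cycles/mm, MTF) from an edge profile."""
    lsf = np.gradient(edge_profile)                  # edge -> line spread
    lsf = lsf * np.hanning(lsf.size)                 # suppress noise tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                    # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# e.g., for the 48 um pitch quoted above: mtf_from_edge(profile, 0.048)
```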
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlear implants, where time-frequency analysis is applied to control the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are discussed. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology, the rise of low-power, low-cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), has created a unique opportunity to achieve the goal of deploying large-scale, low-cost, intelligent, ultra-low-power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low-power vision networking has been proven and its applications are countless: from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low-cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. ZigBee) are required. To this end, Avaak has designed and implemented an ultra-low-power networking protocol designed to carry large volumes of data through the network. The low-power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to communications, identification is very desirable; hence, location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation via wide-band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications - some of which are undergoing initial field tests.
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topology of a digital elevation model (surface slope, sky view factor) and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
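A generic sketch of multilinear look-up-table interpolation of the kind described; the actual MODTRAN LUT axes, units and caching scheme are not reproduced here, and SciPy's RegularGridInterpolator (multilinear by default) stands in for the custom algorithm.

```python
# Multilinear interpolation in a radiometry look-up table (illustrative axes).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed LUT axes: visibility [km], water vapor column, view zenith [deg].
vis = np.array([5., 10., 23., 50.])
wv = np.array([0.5, 1.0, 2.0, 4.0])
vza = np.array([0., 10., 20., 30.])
radiance_lut = np.random.rand(vis.size, wv.size, vza.size)  # placeholder data

interp = RegularGridInterpolator((vis, wv, vza), radiance_lut)
at_sensor_radiance = interp([[15.0, 1.5, 12.0]])  # multilinear lookup
```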
Simple Colorimetric Sensor for Trinitrotoluene Testing
NASA Astrophysics Data System (ADS)
Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.
2017-02-01
A simple-to-operate colorimetric sensor for trinitrotoluene (TNT) determination using a commercial scanner for image capture was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent to produce a color change within 96-well plates, which was then observed and recorded using a commercial scanner. The intensity of the color change increased with increasing TNT concentration, and the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations were found to range from non-detectable to 0.26 ± 0.04 mg mL-1. The obtained recovery values (93-95%) were acceptable for the soil samples tested.
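The calibration arithmetic can be sketched directly; the intensity values and blank statistics below are placeholders, and the 3-sigma LOD convention is an assumption rather than the authors' stated definition.

```python
# Linear calibration of well intensity vs. TNT concentration, plus LOD.
import numpy as np

conc = np.array([0.20, 0.40, 0.60, 0.80, 1.00])     # TNT, mg/mL (from paper)
intensity = np.array([32., 58., 91., 120., 149.])   # illustrative well means

slope, intercept = np.polyfit(conc, intensity, 1)   # linear calibration
r = np.corrcoef(conc, intensity)[0, 1]              # correlation coefficient

# A common LOD convention: 3 x standard deviation of the blank / slope.
blank_sd = 2.5                                      # assumed blank noise
lod = 3.0 * blank_sd / slope

def predict(signal: float) -> float:
    """Invert the calibration to estimate concentration from intensity."""
    return (signal - intercept) / slope
```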
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
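The Yitzhaky-Peli-style selection loop described above can be sketched compactly; detect() and estimate_ground_truth() are placeholder interfaces, and the "closest to ideal" criterion is one common way to pick the optimal ROC point.

```python
# Sweep parameter combinations, score each against estimated ground truth,
# and keep the combination whose ROC point is nearest the ideal (0, 1).
import itertools
import numpy as np

def roc_point(detected: np.ndarray, truth: np.ndarray):
    """True/false positive rates of a binary feature map vs. estimated truth."""
    tpr = (detected & truth).sum() / max(truth.sum(), 1)
    fpr = (detected & ~truth).sum() / max((~truth).sum(), 1)
    return fpr, tpr

def select_parameters(image, detect, estimate_ground_truth, grid):
    """grid: dict of parameter name -> list of candidate values."""
    combos = list(itertools.product(*grid.values()))
    maps = [detect(image, **dict(zip(grid, c))) for c in combos]
    truth = estimate_ground_truth(maps)        # e.g., pixel-wise majority vote
    best, best_d = None, np.inf
    for combo, m in zip(combos, maps):
        fpr, tpr = roc_point(m, truth)
        d = np.hypot(fpr, 1.0 - tpr)           # distance to ideal ROC corner
        if d < best_d:
            best, best_d = combo, d
    return dict(zip(grid, best))
```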
NASA Astrophysics Data System (ADS)
Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja
2012-03-01
Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. Solving this issue could provide criminal investigators with the specific time a fingerprint trace was left on a surface, and would therefore enable them to link potential suspects to the time a crime took place, as well as to reconstruct the sequence of events or eliminate irrelevant fingerprints to satisfy privacy constraints. Transferring imaging techniques from different application areas, such as 3D image acquisition, surface measurement and chemical analysis, to the domain of lifting latent biometric fingerprint traces is an upcoming trend in forensics. Such non-destructive sensor devices might help to solve the challenge of determining the age of a latent fingerprint trace, since they provide the opportunity to create time series and process them using pattern recognition techniques and statistical methods on digitized 2D, 3D and chemical data, rather than classical, contact-based capturing techniques, which alter the fingerprint trace and therefore make continuous scans impossible. In prior work, we have suggested using a feature called binary pixel, which is a novel approach in the field of fingerprint age determination. The feature uses a Chromatic White Light (CWL) image sensor to continuously scan a fingerprint trace over time, and retrieves a characteristic logarithmic aging tendency for 2D-intensity as well as 3D-topographic images from the sensor. In this paper, we propose to combine these two characteristic aging features with other 2D and 3D features from the domains of surface measurement, microscopy, photography and spectroscopy, to achieve an increase in the accuracy and reliability of a potential future age determination scheme. Discussing the feasibility of such a variety of sensor devices and possible aging features, we propose a general fusion approach, which might combine promising features into a joint age determination scheme in the future. We furthermore demonstrate the feasibility of the introduced approach by fusing, as an example, the binary pixel features based on the 2D-intensity and 3D-topographic images of the mentioned CWL sensor. We conclude that a formula-based age determination approach requires very precise image data, which cannot be achieved at the moment, whereas a machine-learning-based classification approach seems to be feasible if an adequate number of features can be provided.
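Since the reported aging tendency is logarithmic, a feature-versus-time fit is linear in ln t; a sketch with placeholder data (the scan times and feature values are illustrative, not the paper's measurements):

```python
# Fit f(t) = a * ln(t) + b to a binary-pixel time series and invert it.
import numpy as np

t_hours = np.array([1., 2., 4., 8., 16., 32.])                 # assumed scans
binary_pixel = np.array([0.92, 0.85, 0.79, 0.72, 0.66, 0.60])  # placeholder

a, b = np.polyfit(np.log(t_hours), binary_pixel, 1)  # linear in ln(t)

def age_from_feature(value: float) -> float:
    """Invert the fitted curve to estimate trace age from a feature value."""
    return float(np.exp((value - b) / a))
```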
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter, coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor, and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame, and for computing the range to the detail using the height, depression angle and/or offset. The computer displays the range to the area and the detail thereon covered by the image frame.
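A simplified sketch of the triangulation geometry implied above (the symbols and flat-area assumption are mine, not the patent's claims): a raised detail displaces the imaged laser line along the scan direction in proportion to its height divided by the tangent of the depression angle, so the height follows from the measured pixel shift.

```python
# Laser-line triangulation: detail height from the line's image displacement.
import math

def range_to_detail(height: float, depression_deg: float,
                    pixel_shift: float, mm_per_pixel: float) -> float:
    """Range from sensor to the top of a detail on a flat reference area.

    A detail of height h shifts the laser line by h / tan(depression angle)
    along the scan direction; invert that to get h from the observed shift.
    """
    shift_mm = pixel_shift * mm_per_pixel
    detail_height = shift_mm * math.tan(math.radians(depression_deg))
    return height - detail_height    # remaining range from sensor to detail
```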
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data, available from the image sensor, is first interpolated using a Color Filter Array Interpolation (CFAI) method. Another option is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multi-stage median, multi-stage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing-power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is performed before the CFAI is also proposed.
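A sketch of the pre-CFAI ordering, with a plain median kernel standing in for the filters studied: each Bayer color plane is filtered separately so that samples of different colors are never mixed.

```python
# Pre-CFAI noise reduction: filter the four RGGB sub-planes independently.
import numpy as np
from scipy.ndimage import median_filter

def denoise_bayer_rggb(raw: np.ndarray, size: int = 3) -> np.ndarray:
    """Median-filter the four RGGB sub-planes of a raw Bayer image."""
    out = raw.copy()
    for dy in (0, 1):
        for dx in (0, 1):
            plane = raw[dy::2, dx::2]                # one color sub-plane
            out[dy::2, dx::2] = median_filter(plane, size=size)
    return out
```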
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process used to reconstruct full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones, tablets). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
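A toy version of gradient-based edge classification for the green channel is sketched below; the paper's polynomial predictors and refinement stage are not reproduced, and the input is assumed to be a float-valued RGGB mosaic (floats avoid unsigned wrap-around in the differences).

```python
# Edge-directed green interpolation at a red site of an RGGB Bayer mosaic.
import numpy as np

def green_at_red(raw: np.ndarray, y: int, x: int) -> float:
    """Interpolate G at (y, x) along whichever edge direction is flatter."""
    dh = abs(raw[y, x - 1] - raw[y, x + 1])          # horizontal gradient
    dv = abs(raw[y - 1, x] - raw[y + 1, x])          # vertical gradient
    if dh < dv:                                      # horizontal is flatter
        return 0.5 * (raw[y, x - 1] + raw[y, x + 1])
    if dv < dh:                                      # vertical is flatter
        return 0.5 * (raw[y - 1, x] + raw[y + 1, x])
    return 0.25 * (raw[y, x - 1] + raw[y, x + 1]     # no dominant edge
                   + raw[y - 1, x] + raw[y + 1, x])
```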
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-08-12
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping
Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca
2015-01-01
Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds, at very early phenological stages, are similar spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high-spatial-resolution UAV-images at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights. PMID:26274960
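A sketch of the resampling idea, under the assumption that ground sample distance scales linearly with altitude, so block-averaging a 30 m image by a factor of 2 approximates a 60 m acquisition; the paper's exact resampling kernel is not specified above.

```python
# Simulate a coarser-altitude acquisition by non-overlapping block averaging.
import numpy as np

def resample(image: np.ndarray, factor: int) -> np.ndarray:
    """Average factor x factor blocks of a (H, W, C) image."""
    h = (image.shape[0] // factor) * factor          # crop to a multiple
    w = (image.shape[1] // factor) * factor
    img = image[:h, :w].astype(float)
    return img.reshape(h // factor, factor,
                       w // factor, factor, -1).mean(axis=(1, 3))
```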
EIT-based fabric pressure sensing.
Yao, A; Yang, C L; Seo, J K; Soleimani, M
2013-01-01
This paper presents EIT-based fabric sensors that aim to provide pressure mapping using current-carrying and voltage-sensing electrodes attached to the boundary of a fabric patch. A pressure-induced shape change over the sensor area produces a change in the conductivity distribution, which is conveyed to a change in the boundary current-voltage data. These boundary data are obtained through electrode measurements in the EIT system. The corresponding inverse problem is to reconstruct the pressure and deformation map from the relationship between the applied current and the measured voltage on the fabric boundary. Taking advantage of EIT's ability to provide dynamic images of conductivity changes due to pressure-induced shape change, the pressure map can be estimated. In this paper, the EIT-based fabric sensor is presented for circular and rectangular sensor geometries. A stretch-sensitive fabric was used in a circular sensor with 16 electrodes, and a pressure-sensitive fabric was used in a rectangular sensor with 32 electrodes. A preliminary human test was carried out with the rectangular sensor for foot pressure mapping, showing promising results.
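A minimal linearized reconstruction step, assuming a sensitivity (Jacobian) matrix J is available from a forward model: one Tikhonov-regularized solve maps boundary voltage changes to a conductivity-change image. This is a generic EIT sketch, not the paper's specific solver.

```python
# One-step linearized EIT reconstruction with Tikhonov regularization.
import numpy as np

def reconstruct(J: np.ndarray, dv: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Solve min ||J ds - dv||^2 + lam ||ds||^2 for conductivity change ds.

    J:  (measurements x pixels) sensitivity matrix from a forward model.
    dv: boundary voltage changes relative to a reference (unloaded) frame.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dv)
```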
Trends in Lightning Electrical Energy Derived from the Lightning Imaging Sensor
NASA Astrophysics Data System (ADS)
Bitzer, P. M.; Koshak, W. J.
2016-12-01
We present results detailing an emerging application of space-based measurements of lightning: the electrical energy. This is a little-used attribute of lightning data that can have applications for severe weather, lightning physics, and wildfires. In particular, we use data from the Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) to find the temporal and spatial variations in the detected spectral energy density. This is used to estimate the total lightning electrical energy, following established methodologies. Results showing the trend in time of the electrical energy, as well as its distribution around the globe, will be highlighted. While flashes have typically been used in most studies, the basic scientifically relevant unit measured by LIS is the optical group data product, which generally corresponds to a return stroke or an IC pulse. We explore how the electrical energy varies per LIS group, providing an extension of and comparison with previous investigations. The result is an initial climatology of this new and important application of space-based optical measurements of lightning, which can provide a baseline for future applications using the Geostationary Lightning Mapper (GLM), the European Lightning Imager (LI), and the International Space Station Lightning Imaging Sensor (ISS/LIS) instruments.
Electrodes for Semiconductor Gas Sensors
Lee, Sung Pil
2017-01-01
The electrodes of semiconductor gas sensors are important in characterizing sensors based on their sensitivity, selectivity, reversibility, response time, and long-term stability. The types and materials of electrodes used for semiconductor gas sensors are analyzed. In addition, the effect of interfacial zones and surface states of electrode–semiconductor interfaces on their characteristics is studied. This study describes that the gas interaction mechanism of the electrode–semiconductor interfaces should take into account the interfacial zone, surface states, image force, and tunneling effect. PMID:28346349
High Sensitivity Stress Sensor Based on Hybrid Materials
NASA Technical Reports Server (NTRS)
Cao, Xian-An (Inventor)
2014-01-01
A sensing device is used to detect the spatial distributions of stresses applied by physical contact with the surface of the sensor or induced by pressure, temperature gradients, and surface absorption. The sensor comprises a hybrid active layer that includes luminophores doped in a polymeric or organic host, altogether embedded in a matrix. Under an electrical bias, the sensor simultaneously converts stresses into electrical and optical signals. Among many applications, the device may be used for tactile sensing and biometric imaging.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small-SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
Sensor noise camera identification: countering counter-forensics
NASA Astrophysics Data System (ADS)
Goljan, Miroslav; Fridrich, Jessica; Chen, Mo
2010-01-01
In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim, but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
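Standard PRNU practice, which this work builds on, can be sketched as follows; denoise() is a placeholder for a wavelet or similar denoiser, and the estimator shown is the common maximum-likelihood-style form rather than this paper's specific procedure.

```python
# Sensor-fingerprint (PRNU) estimation and correlation-based detection.
import numpy as np

def noise_residual(img: np.ndarray, denoise) -> np.ndarray:
    """Residual W = I - denoise(I) carries the multiplicative PRNU signal."""
    return img.astype(float) - denoise(img)

def estimate_fingerprint(images, denoise) -> np.ndarray:
    """ML-style PRNU estimate: sum(W_i * I_i) / sum(I_i^2) over the set."""
    num = sum(noise_residual(i, denoise) * i for i in images)
    den = sum(i.astype(float) ** 2 for i in images)
    return num / (den + 1e-9)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized correlation used to test for fingerprint presence."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Presence test: correlate img * fingerprint against img's noise residual.
```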
CMOS Image Sensors for High Speed Applications.
El-Desouki, Munir; Deen, M Jamal; Fang, Qiyin; Liu, Louis; Tse, Frances; Armstrong, David
2009-01-01
Recent advances in deep submicron CMOS technologies and improved pixel designs have enabled CMOS-based imagers to surpass charge-coupled devices (CCD) imaging technology for mainstream applications. The parallel outputs that CMOS imagers can offer, in addition to complete camera-on-a-chip solutions due to being fabricated in standard CMOS technologies, result in compelling advantages in speed and system throughput. Since there is a practical limit on the minimum pixel size (4∼5 μm) due to limitations in the optics, CMOS technology scaling can allow for an increased number of transistors to be integrated into the pixel to improve both detection and signal processing. Such smart pixels truly show the potential of CMOS technology for imaging applications allowing CMOS imagers to achieve the image quality and global shuttering performance necessary to meet the demands of ultrahigh-speed applications. In this paper, a review of CMOS-based high-speed imager design is presented and the various implementations that target ultrahigh-speed imaging are described. This work also discusses the design, layout and simulation results of an ultrahigh acquisition rate CMOS active-pixel sensor imager that can take 8 frames at a rate of more than a billion frames per second (fps).
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been proved that Sensor Pattern Noise (SPN) can serve as an imaging device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 blue-sky images. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue-sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images for estimating the reference SPN. In order to deal with the aforementioned problems, a novel reference estimator is proposed in this work. Experimental results show that our proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves the integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning systems and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian approaches to evidence-based expert systems. Implementations of data integration range from pure software designs to mixtures of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
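As a minimal worked example of the statistical fusion mentioned above: independent Gaussian sensor estimates of one quantity combine by inverse-variance weighting, which is both the Bayesian posterior mean and the least-squares answer under those assumptions.

```python
# Inverse-variance fusion of independent estimates of the same quantity.
import numpy as np

def fuse(estimates: np.ndarray, variances: np.ndarray):
    """Return the fused estimate and its (reduced) variance."""
    w = 1.0 / variances                  # weight = inverse variance
    fused = np.sum(w * estimates) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# e.g., fuse(np.array([10.2, 9.7, 10.5]), np.array([0.4, 0.1, 0.9]))
# favors the second sensor, whose estimate is the most certain.
```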
a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images
NASA Astrophysics Data System (ADS)
Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei
2018-04-01
Topographic correction of surface reflectance in rugged terrain areas is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite imagery such as Landsat 8 OLI. However, as more and more image data become available from a variety of sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions that are needed in a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images without accurate calibration parameters. Based on this model we can obtain topographically corrected surface reflectance from DN data, and we tested and verified the model with image data from the Chinese HJ and GF satellites. The results show that the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction for HJ data. The reflectance difference between slopes facing toward and away from the sun was also reduced after correction.
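The paper's exact formulation is not given above; as a reference point, here is a sketch of the classic C-correction, a widely used semi-empirical topographic correction, assuming the local illumination cosine cos_i has been computed from a DEM and the solar geometry.

```python
# C-correction: rho_corr = rho * (cos(sun zenith) + c) / (cos_i + c),
# with c = b/m from the scene-wide regression rho = m * cos_i + b.
import numpy as np

def c_correction(refl: np.ndarray, cos_i: np.ndarray,
                 cos_sun_zenith: float) -> np.ndarray:
    """Apply the semi-empirical C-correction to one band's reflectance."""
    m, b = np.polyfit(cos_i.ravel(), refl.ravel(), 1)  # empirical regression
    c = b / m
    return refl * (cos_sun_zenith + c) / (cos_i + c)
```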
Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network
NASA Astrophysics Data System (ADS)
Ong, Jia Jan; Ang, L.-M.; Seng, K. P.
This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restrains the implementation of real-time video streaming from remote visual nodes over wireless communication. Three levels of DWT filters are implemented to process the image captured from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station, which reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network based on the Ember EM250 chip.
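A lifting-scheme level is compact enough to sketch. The following one-dimensional LeGall 5/3 lifting step (a standard lifting example, with periodic boundaries as a simplification; the nodes' fixed-point FPGA version would differ) shows the split/predict/update structure; applying it recursively to the low band gives further levels, and transmitting only the low band yields the approximate image described above.

```python
# One level of the LeGall 5/3 DWT via lifting: split, predict, update.
import numpy as np

def dwt53_lifting(x: np.ndarray):
    """Return (low band, high band) for an even-length 1-D signal."""
    x = x.astype(float)
    even, odd = x[0::2].copy(), x[1::2].copy()       # split
    # Predict: detail = odd sample minus the average of neighboring evens.
    odd -= 0.5 * (even + np.roll(even, -1))
    # Update: approximation = even sample plus a quarter of nearby details.
    even += 0.25 * (odd + np.roll(odd, 1))
    return even, odd          # low band (transmit first), high band
```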
Performance Evaluation of a Biometric System Based on Acoustic Images
Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.
2011-01-01
An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708
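The FMR/FNMR evaluation can be sketched as a threshold sweep over genuine (same-person) and impostor (different-person) comparison scores; here a lower MSE distance means a better match, and the function names are illustrative rather than the authors' code.

```python
# FMR/FNMR curves from genuine and impostor matching distances.
import numpy as np

def fmr_fnmr(genuine: np.ndarray, impostor: np.ndarray,
             thresholds: np.ndarray):
    """Accept when distance <= threshold; sweep thresholds for ROC data."""
    fmr = np.array([(impostor <= t).mean() for t in thresholds])   # false matches
    fnmr = np.array([(genuine > t).mean() for t in thresholds])    # false non-matches
    return fmr, fnmr
```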
Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors.
Leathers, Robert; Downes, Trijntje; Priest, Richard
2005-06-27
We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.
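As one example of the scene-based family (an illustration, not necessarily among the paper's specific methods): each pushbroom detector column's gain and offset can be equalized to the array-wide statistics, assuming scene content averages out along-track over many lines.

```python
# Scene-based column gain/offset equalization for a pushbroom sensor.
import numpy as np

def column_nuc(cube: np.ndarray) -> np.ndarray:
    """cube: (lines, columns) counts; returns gain/offset-corrected image."""
    col_mean = cube.mean(axis=0)
    col_std = cube.std(axis=0)
    gain = col_std.mean() / np.maximum(col_std, 1e-9)   # match column spread
    offset = col_mean.mean() - gain * col_mean          # match column mean
    return cube * gain + offset
```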
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Huang, Huo-Jin
1989-01-01
Data from the Special Sensor Microwave/Imager (SSM/I) on the DMSP satellite are used to study atmospheric moisture and cloud structure. Column-integrated water vapor and total liquid water retrievals are obtained using an algorithm based on a radiative model for brightness temperature (Wentz, 1983). The results from analyzing microwave and IR measurements are combined with independent global gridpoint analyses to study the distribution and structure of atmospheric moisture over oceanic regions.
Quantum Random Number Generation Using a Quanta Image Sensor
Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.
2016-01-01
A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
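To illustrate the principle (not the paper's exact extraction), photon arrivals at a jot can be modeled as Poisson and thresholded to bits; the resulting exposure-dependent bias is then removed with von Neumann debiasing.

```python
# Random bits from simulated single-photon (jot) counts with debiasing.
import numpy as np

def debias_von_neumann(bits: np.ndarray) -> np.ndarray:
    """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11 pairs."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

# Simulated jot readout: Poisson photon arrivals thresholded to binary.
rng = np.random.default_rng()
jots = (rng.poisson(lam=0.4, size=100_000) > 0).astype(np.uint8)
random_bits = debias_von_neumann(jots)   # unbiased under i.i.d. assumptions
```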
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
[Report front matter; the acronym list is not reproduced.] The imaging system combines the Airborne Imaging Spectrometer for Applications (AISA) Eagle and Hawk sensors on a single sensor bracket mount (SpecTIR 2011). The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm, and the AISA Hawk is a SWIR sensor.