Fusion: ultra-high-speed and IR image sensors
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.
2015-08-01
Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an airbag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside-illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the greater freedom in wiring on the front side 3). The BSI structure has another advantage: it presents fewer difficulties in attaching an additional layer, such as a scintillator, to the backside. This paper proposes the development of an ultra-high-speed IR image sensor that combines advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with discussion of the issues in the integration.
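The in-situ storage principle described above, in which every pixel latches each frame into its own small bank of memory elements during capture and the stored signals are read out afterwards, can be sketched as a toy ring-buffer model (class and parameter names are illustrative assumptions, not the actual sensor circuit):

```python
import numpy as np

class InSituStoragePixelArray:
    """Toy model of an in-situ storage image sensor: each pixel owns a small
    ring buffer of memory elements that is overwritten continuously until a
    trigger stops recording, after which the stored frames are read out."""

    def __init__(self, height, width, depth):
        self.depth = depth                      # memory elements per pixel
        self.store = np.zeros((depth, height, width))
        self.head = 0                           # next slot to overwrite

    def capture(self, frame):
        # All pixels latch the current signal into the same slot simultaneously.
        self.store[self.head] = frame
        self.head = (self.head + 1) % self.depth

    def read_out(self):
        # After the trigger, unroll the ring so frames come out oldest-first.
        order = [(self.head + i) % self.depth for i in range(self.depth)]
        return self.store[order]
```

With a depth of 4, capturing six frames leaves only the last four available at readout, oldest first, mirroring how such sensors retain a short window of frames around a triggering event.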
Toward one Giga frames per second--evolution of in situ storage image sensors.
Etoh, Takeharu G; Son, Dao V T; Yamada, Tetsuo; Charbon, Edoardo
2013-04-08
The ISIS is an ultra-fast image sensor with in-pixel storage. The evolution of the ISIS in the past and in the near future is reviewed and forecast. To cover the storage area with a light shield, the conventional frontside-illuminated ISIS has a limited fill factor. To achieve higher sensitivity, a BSI ISIS was developed. To avoid direct intrusion of light and migration of signal electrons to the storage area on the frontside, a cross-sectional sensor structure with thick pnpn layers was developed and named the "Tetratified structure". By folding and looping the in-pixel storage CCDs, an image signal accumulation sensor, the ISAS, is proposed. The ISAS has a new function, in-pixel signal accumulation, in addition to ultra-high-speed imaging. To achieve a much higher frame rate, a multi-collection-gate (MCG) BSI image sensor architecture is proposed. The photoreceptive area forms a honeycomb-like shape. The performance of a hexagonal CCD-type MCG BSI sensor is examined by simulations. The highest frame rate is theoretically more than 1 Gfps. For the near future, a stacked hybrid CCD/CMOS MCG image sensor seems most promising. The associated problems are discussed. A fine TSV process is the key technology to realize the structure.
Flexible phosphor sensors: a digital supplement or option to rigid sensors.
Glazer, Howard S
2014-01-01
An increasing number of dental practices are upgrading from film radiography to digital radiography, for reasons that include faster image processing, easier image access, better patient education, enhanced data storage, and improved office productivity. Most practices that have converted to digital technology use rigid, or direct, sensors. Another digital option is flexible phosphor sensors, also called indirect sensors or phosphor storage plates (PSPs). Flexible phosphor sensors can be advantageous for use with certain patients who may be averse to direct sensors, and they can deliver a larger image area. Additionally, sensor cost for replacement PSPs is considerably lower than for hard sensors. As such, flexible phosphor sensors appear to be a viable supplement or option to direct sensors.
Radiographic endodontic working length estimation: comparison of three digital image receptors.
Athar, Anas; Angelopoulos, Christos; Katz, Jerald O; Williams, Karen B; Spencer, Paulette
2008-10-01
This in vitro study was conducted to evaluate the accuracy of the Schick wireless image receptor compared with 2 other types of digital image receptors for measuring the radiographic landmarks pertinent to endodontic treatment. Fourteen human cadaver mandibles with retained molars were selected. A fine endodontic file (#10) was introduced into the canal at random distances from the apex and at the apex of the tooth; images were made with 3 different #2-size image receptors: DenOptix storage phosphor plates, Gendex CCD sensor (wired), and Schick CDR sensor (wireless). Six raters viewed the images for identification of the radiographic apex of the tooth and the tip of a fine (#10) endodontic file. Inter-rater reliability was also assessed. Repeated-measures analysis of variance revealed a significant main effect for the type of image receptor. Raters' error in identifying structures of interest was significantly higher for DenOptix storage phosphor plates, whereas the least error was noted with the Schick CDR sensor. A significant interaction effect was observed for rater and type of image receptor used, but this effect contributed only 6% (P < .01; η² = 0.06) toward the outcome of the results. The Schick CDR wireless sensor may be preferable to other solid-state sensors because there is no cable connecting the sensor to the computer. Further testing of this sensor for other diagnostic tasks is recommended, as well as evaluation of patient acceptance.
NASA Technical Reports Server (NTRS)
Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)
1989-01-01
A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.
Digital Photography and Its Impact on Instruction.
ERIC Educational Resources Information Center
Lantz, Chris
Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires enough integration time for accurate depth acquisition, but saturation will occur under high background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent the pixel from saturating. The subtraction results are then accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
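The divide-and-subtract idea above can be illustrated with a toy two-tap ToF pixel in which background light fills both taps equally, so subtracting the common-mode charge after every sub-integration keeps the wells out of saturation while the tap difference, which carries the depth information, accumulates. This is a sketch of the general technique only; the tap model and the common-mode subtraction rule are assumptions, not the paper's circuit:

```python
def tof_accumulate(signal_a, signal_b, background, n_sub, well_capacity):
    """Accumulate two-tap ToF charge over n_sub sub-integrations, subtracting
    the background-dominated common mode after each one to avoid saturation."""
    acc_a = acc_b = 0.0
    for _ in range(n_sub):
        a = signal_a + background   # charge collected in one sub-integration
        b = signal_b + background
        if max(a, b) > well_capacity:
            raise OverflowError("a single sub-integration saturates the well")
        common = min(a, b)          # background-dominated common mode
        acc_a += a - common
        acc_b += b - common
    return acc_a, acc_b
```

For example, with per-sub-integration signals of 3 and 1, background 10 and well capacity 20, eight sub-integrations accumulate the full signal difference even though eight undivided integrations (8 × 13 = 104) would have saturated the pixel long before readout.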
NASA Astrophysics Data System (ADS)
Rossi, Marco; Pierron, Fabrice; Forquin, Pascal
2014-02-01
Ultra-high speed (UHS) cameras allow us to acquire images typically at up to about 1 million frames s-1 at a full spatial resolution of the order of 1 Mpixel. Different technologies are available nowadays to achieve this performance; an interesting one is the so-called in situ storage image sensor architecture, where the image storage is incorporated into the sensor chip. Such an architecture is all solid state and does not contain movable devices such as occur, for instance, in rotating-mirror UHS cameras. One of the disadvantages of this system is the low fill factor (around 76% in the vertical direction and 14% in the horizontal direction), since most of the space in the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field strain measurements. The aim of this paper is to develop an experimental procedure to thoroughly characterize the performance of such cameras in full-field deformation measurement and to identify the best operating conditions, which minimize the measurement errors. A series of tests was performed on a Shimadzu HPV-1 UHS camera, first using uniform scenes and then grids under rigid movements. The grid method was used as the full-field optical measurement technique here. From these tests, it has been possible to appropriately identify the camera behaviour and utilize this information to improve actual measurements.
Study on parallel and distributed management of RS data based on spatial database
NASA Astrophysics Data System (ADS)
Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin
2009-10-01
With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is thus placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for its storage and management, and much information is lost or not included at storage. Faced with these two problems, this paper puts forward a framework for a parallel and distributed RS image data management and storage system. The system aims at an RS data information system based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data across different resolutions, areas, bands and periods is achieved. In data storage, RS data is not divided into binary large objects stored in a current relational database system; instead, it is reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server of several common computers. Under the framework, the background process is divided into two parts: the common Web process and the parallel process.
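The "Pyramid, Block, Layer, Epoch" solid index amounts to addressing every tile by four coordinates: pyramid (resolution) level, block position, spectral layer, and acquisition epoch. A minimal sketch of such a keyed store follows; the key layout and class names are assumptions for illustration, not the paper's implementation:

```python
def tile_key(pyramid_level, block_row, block_col, layer, epoch):
    """Flatten the four solid-index coordinates into one lookup key."""
    return f"P{pyramid_level:02d}/B{block_row:05d}_{block_col:05d}/L{layer}/E{epoch}"

class SolidIndexStore:
    """In-memory stand-in for the logical image database built on the index."""

    def __init__(self):
        self._tiles = {}

    def put(self, level, row, col, layer, epoch, tile):
        self._tiles[tile_key(level, row, col, layer, epoch)] = tile

    def get(self, level, row, col, layer, epoch):
        return self._tiles.get(tile_key(level, row, col, layer, epoch))
```

Because every resolution, area, band and period maps to a distinct key, multi-sensor tiles can be organized and retrieved uniformly instead of being stored as opaque binary large objects.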
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors can proceed without a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach, aided by imaging sensors, that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss of 5 min and within 1500 m.
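The linear road-segment index mentioned above can be sketched as follows: reference images are keyed by their chainage (distance in metres along a road segment), so candidates near a rough position estimate are found with a one-dimensional binary search instead of a full 2-D spatial query. Class and method names here are illustrative assumptions:

```python
import bisect

class GridIndex:
    """Linear index over a geo-referenced image database for one road segment."""

    def __init__(self, entries):
        # entries: iterable of (chainage_m, image_id) pairs
        self.entries = sorted(entries)
        self.chainages = [c for c, _ in self.entries]

    def candidates(self, chainage_m, window_m=50.0):
        """Return image ids within +/- window_m of the estimated chainage."""
        lo = bisect.bisect_left(self.chainages, chainage_m - window_m)
        hi = bisect.bisect_right(self.chainages, chainage_m + window_m)
        return [img for _, img in self.entries[lo:hi]]
```

The robust image-matching step would then run only against this short candidate list, which is what makes retrieval fast enough for real-time navigation.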
A generic FPGA-based detector readout and real-time image processing board
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant
2016-07-01
For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time on-chip image processor system which can be used for various image processing tasks. The FPGA board serves as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense), instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)
2013-01-01
Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.
Charge shielding in the In-situ Storage Image Sensor for a vertex detector at the ILC
NASA Astrophysics Data System (ADS)
Zhang, Z.; Stefanov, K. D.; Bailey, D.; Banda, Y.; Buttar, C.; Cheplakov, A.; Cussans, D.; Damerell, C.; Devetak, E.; Fopma, J.; Foster, B.; Gao, R.; Gillman, A.; Goldstein, J.; Greenshaw, T.; Grimes, M.; Halsall, R.; Harder, K.; Hawes, B.; Hayrapetyan, K.; Heath, H.; Hillert, S.; Jackson, D.; Pinto Jayawardena, T.; Jeffery, B.; John, J.; Johnson, E.; Kundu, N.; Laing, A.; Lastovicka, T.; Lau, W.; Li, Y.; Lintern, A.; Lynch, C.; Mandry, S.; Martin, V.; Murray, P.; Nichols, A.; Nomerotski, A.; Page, R.; Parkes, C.; Perry, C.; O'Shea, V.; Sopczak, A.; Tabassam, H.; Thomas, S.; Tikkanen, T.; Velthuis, J.; Walsh, R.; Woolliscroft, T.; Worm, S.
2009-08-01
The Linear Collider Flavour Identification (LCFI) collaboration has successfully developed the first prototype of a novel particle detector, the In-situ Storage Image Sensor (ISIS). This device ideally suits the challenging requirements for the vertex detector at the future International Linear Collider (ILC), combining the charge-storing capabilities of Charge-Coupled Devices (CCD) with the readout commonly used in CMOS imagers. The ISIS avoids the need for high-speed readout and offers low-power operation combined with low noise, high immunity to electromagnetic interference and increased radiation hardness compared to typical CCDs. The ISIS is one of the most promising detector technologies for vertexing at the ILC. In this paper we describe measurements of the charge-shielding properties of the p-well, which is used to protect the storage register from parasitic charge collection and is at the core of the device's operation. We show that the p-well can suppress the parasitic charge collection by almost two orders of magnitude, satisfying the requirements for the application.
NASA Technical Reports Server (NTRS)
Ando, K. J.
1971-01-01
Description of the performance of the silicon diode array vidicon - an imaging sensor which possesses wide spectral response, high quantum efficiency, and linear response. These characteristics, in addition to its inherent ruggedness, simplicity, and long-term stability and operating life, make this device potentially of great usefulness for ground-based and spaceborne planetary and stellar imaging applications. However, integration and charge storage for periods greater than approximately five seconds are not possible at room temperature because of diode saturation from dark current buildup. Since dark current can be reduced by cooling, measurements were made in the range from -65 to 25 C. Results are presented on the extension of integration, storage, and slow scan capabilities achievable by cooling. Integration times in excess of 20 minutes were achieved at the lowest temperatures. The measured results are compared with results obtained with other types of sensors, and the advantages of the silicon diode array vidicon for imaging applications are discussed.
NASA Astrophysics Data System (ADS)
Li, Zhuo; Seo, Min-Woong; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2016-04-01
This paper presents the design and implementation of a time-resolved CMOS image sensor with a high-speed lateral electric field modulation (LEFM) gating structure for time domain fluorescence lifetime measurement. Time-windowed signal charge can be transferred from a pinned photodiode (PPD) to a pinned storage diode (PSD) by turning on a pair of transfer gates, which are situated beside the channel. Unwanted signal charge can be drained from the PPD to the drain by turning on another pair of gates. The pixel array contains 512 (V) × 310 (H) pixels with 5.6 × 5.6 µm2 pixel size. The imager chip was fabricated using 0.11 µm CMOS image sensor process technology. The prototype sensor has a time response of 150 ps at 374 nm. The fill factor of the pixels is 5.6%. The usefulness of the prototype sensor is demonstrated for fluorescence lifetime imaging through simulation and measurement results.
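Time-windowed charge collection of the kind this sensor provides is what enables gated fluorescence lifetime estimation. As one plausible use, the standard two-gate rapid lifetime determination recovers a mono-exponential lifetime from the charge in two equal-width gates separated by a known delay; this is the textbook estimator, not necessarily the authors' exact analysis:

```python
import math

def two_gate_lifetime(n1, n2, gate_delay):
    """Two-gate rapid lifetime determination for a mono-exponential decay:
    with equal-width gates and the second opening gate_delay after the first,
    tau = gate_delay / ln(N1 / N2), where N1 and N2 are the gated signals."""
    if n2 <= 0 or n1 <= n2:
        raise ValueError("need N1 > N2 > 0 for a decaying signal")
    return gate_delay / math.log(n1 / n2)
```

For a true lifetime of 2 ns and a 1 ns gate delay, the second gate collects exp(-0.5) times the first gate's signal, and the estimator returns exactly 2 ns in the noise-free case.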
Multi-Sensory Features for Personnel Detection at Border Crossings
2011-07-08
...challenging problem. Video sensors consume high amounts of power and require a large volume for storage. Hence, it is preferable to use non-imaging sensors... temporal distribution of gait beats [5]. At border crossings, animals such as mules, horses, or donkeys are often known to carry loads. Animal hoof... field, passive ultrasonic, sonar, and both infrared and visible video sensors. Each sensor suite is placed along the path with a spacing of 40 to
Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc
NASA Astrophysics Data System (ADS)
Becker, Peter; Plesea, Lucian; Maurer, Thomas
2016-06-01
The volume and number of geospatial images being collected continue to increase exponentially with the ever-increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of the fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions that handle, process and disseminate the imagery and information extracted from it. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and the lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG 2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimal, and access performance is curtailed. This paper details a solution utilizing new open image formats for storage of and access to geospatial imagery, optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme that can be used with MRF and provides very good lossless and controlled lossy compression.
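The core idea behind controlled lossy compression of the LERC kind can be sketched from the description above: quantize pixel values with a step of twice the allowed error, so every reconstructed pixel is guaranteed to lie within the tolerance, and the resulting small integers compress well. Real LERC adds per-block statistics, bit-stuffing and lossless modes, all omitted in this sketch:

```python
import numpy as np

def lerc_quantize(block, max_z_error):
    """Quantize a raster block so reconstruction error never exceeds
    max_z_error; returns (integer codes, offset, reconstructed values)."""
    block = np.asarray(block, dtype=np.float64)
    offset = block.min()
    step = 2.0 * max_z_error
    q = np.round((block - offset) / step).astype(np.int64)  # small ints to encode
    recon = offset + q * step
    return q, offset, recon
```

Rounding to the nearest multiple of the step bounds the per-pixel error by half a step, i.e. by max_z_error, which is the "controlled lossy" guarantee.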
Damage detection in hazardous waste storage tank bottoms using ultrasonic guided waves
NASA Astrophysics Data System (ADS)
Cobb, Adam C.; Fisher, Jay L.; Bartlett, Jonathan D.; Earnest, Douglas R.
2018-04-01
Detecting damage in storage tanks is performed commercially using a variety of techniques. The most commonly used inspection technologies are magnetic flux leakage (MFL), conventional ultrasonic testing (UT), and leak testing. MFL and UT typically involve manual or robotic scanning of a sensor along the metal surfaces to detect cracks or corrosion wall loss. For inspection of the tank bottom, however, the storage tank is commonly emptied to allow interior access for the inspection system. While there are costs associated with emptying a storage tank for inspection that can be justified in some scenarios, there are situations where emptying the tank is impractical. Robotic, submersible systems have been developed for inspecting these tanks, but there are some storage tanks whose contents are so hazardous that even the use of these systems is untenable. Thus, there is a need to develop an inspection strategy that does not require emptying the tank or insertion of the sensor system into the tank. This paper presents a guided wave system for inspecting the bottom of double-shelled storage tanks (DSTs), with the sensor located on the exterior side-wall of the vessel. The sensor used is an electromagnetic acoustic transducer (EMAT) that generates and receives shear-horizontal guided plate waves using magnetostriction principles. The system operates by scanning the sensor around the circumference of the storage tank and sending guided waves into the tank bottom at regular intervals. The data from multiple locations are combined using the synthetic aperture focusing technique (SAFT) to create a color-mapped image of the vessel thickness changes. The target application of the system described is inspection of DSTs located at the Hanford site, which are million-gallon vessels used to store nuclear waste. Other vessels whose exterior walls are accessible would also be candidates for inspection using the described approach. 
Experimental results are shown from tests on multiple mockups of the DSTs being used to develop the sensor system.
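The SAFT step described in the abstract is, at its core, delay-and-sum focusing: for each candidate image point, every A-scan is sampled at the pulse-echo round-trip time from its sensor position and the samples are summed, so true reflectors add coherently. This is a minimal generic SAFT, not the paper's specific EMAT processing chain:

```python
import numpy as np

def saft_image(scan_positions, a_scans, dt, wave_speed, grid_points):
    """Delay-and-sum SAFT: coherently sum A-scans at round-trip delays."""
    image = np.zeros(len(grid_points))
    for pos, trace in zip(scan_positions, a_scans):
        for i, pt in enumerate(grid_points):
            dist = np.linalg.norm(np.asarray(pt) - np.asarray(pos))
            sample = int(round(2.0 * dist / wave_speed / dt))  # round trip
            if 0 <= sample < len(trace):
                image[i] += trace[sample]
    return image
```

A point reflector seen from several scan positions sums constructively at its true location, while other grid points pick up little energy, which is how the circumferential scan builds a thickness-change map of the tank bottom.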
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
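The Monte Carlo approach can be illustrated with a much-simplified stand-in for the paper's model: each electron attempts to cross a potential obstacle once per time step with a fixed pass probability (standing in for diffusion plus the barrier), and the transfer time is when the last electron leaves. All names and the single-probability barrier are assumptions for illustration:

```python
import random

def transfer_time_mc(n_electrons, barrier_pass_prob, dt, rng=None):
    """Toy Monte Carlo of PPD charge transfer: returns the simulated time at
    which the last of n_electrons has crossed the transfer barrier."""
    rng = rng or random.Random(0)   # seeded for reproducibility
    remaining = n_electrons
    t = 0.0
    while remaining > 0:
        t += dt
        # each remaining electron stays behind with probability 1 - pass_prob
        remaining = sum(1 for _ in range(remaining)
                        if rng.random() > barrier_pass_prob)
    return t
```

Lowering the pass probability (a higher obstacle) stretches the tail of the transfer, which is exactly the effect the pulsed Storage Gate method is designed to expose experimentally.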
A data-management system using sensor technology and wireless devices for port security
NASA Astrophysics Data System (ADS)
Saldaña, Manuel; Rivera, Javier; Oyola, Jose; Manian, Vidya
2014-05-01
Sensor technologies such as infrared sensors, hyperspectral imaging and video camera surveillance have proven viable in port security. Drawing from sources such as infrared sensor data, digital camera images and processed hyperspectral images, this article explores the implementation of a real-time data delivery system. In an effort to improve the manner in which anomaly detection data is delivered to interested parties in port security, this system explores how a client-server architecture can provide protected access to data, reports, and device status. Sensor data and hyperspectral image data are kept in a monitored directory, where the system links them to existing users in the database. Since this system renders processed hyperspectral images that are dynamically added to the server - which often occupy a large amount of space - the resolution of these images is trimmed down to around 1024×768 pixels. Changes to any image, or data modifications originating from any sensor, trigger a message to all users associated with it. These messages are sent to the corresponding users through automatic email generation and through a push notification using Google Cloud Messaging for Android. Moreover, this paper presents the complete architecture for data reception from the sensors, processing and storage, and discusses how users of the system, such as port security personnel, can benefit from this service by receiving secure real-time notifications when their designated sensors have detected anomalies and by gaining remote access to results from processed hyperspectral imagery relevant to their assigned posts.
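The monitored-directory trigger described above can be sketched as a simple polling loop: when a sensor drops or modifies a file, every subscriber is notified. The real system links files to users in a database and pushes via e-mail and Google Cloud Messaging; here the notification channel is reduced to a callback, and all names are illustrative:

```python
import os
import time

def watch_directory(path, notify, poll_s=1.0, iterations=None):
    """Poll a directory and call notify(filename) whenever a file is new
    or its modification time changes. iterations=None polls forever."""
    seen = {}
    count = 0
    while iterations is None or count < iterations:
        for name in sorted(os.listdir(path)):
            mtime = os.path.getmtime(os.path.join(path, name))
            if seen.get(name) != mtime:     # new file or changed file
                seen[name] = mtime
                notify(name)
        count += 1
        if iterations is None or count < iterations:
            time.sleep(poll_s)
    return seen
```

A production system would use filesystem events rather than polling, but the trigger-on-change semantics are the same.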
REMOTE SENSING AND GIS WETLANDS
Learn how photographs and computer sensor-generated images can illustrate conditions of hydrology, extent, change over time, and the impact of events such as hurricanes and tornadoes. Other topics include: information storage and modeling, and evaluation of wetlands for managing reso...
Development of a piezoelectric sensing-based SHM system for nuclear dry storage systems
NASA Astrophysics Data System (ADS)
Ma, Linlin; Lin, Bin; Sun, Xiaoyi; Howden, Stephen; Yu, Lingyu
2016-04-01
In the US, there are over 1482 dry cask storage systems (DCSS) in use, storing 57,807 fuel assemblies. Monitoring is necessary to determine and predict the degradation state of these systems and structures. Nondestructive monitoring is therefore urgently needed and must be integrated into the fuel cycle to quantify the "state of health" for the safe operation of nuclear power plants (NPP) and radioactive waste storage systems (RWSS). Innovative approaches are desired to evaluate the degradation and damage of used fuel containers under extended storage. Structural health monitoring (SHM) is an emerging technology that uses an in-situ sensory system to perform rapid nondestructive detection of structural damage as well as long-term integrity monitoring. It has been extensively studied in aerospace engineering over the past two decades. This paper presents the development of an SHM and damage detection methodology based on piezoelectric sensor technologies for steel canisters in nuclear dry cask storage systems. The durability and survivability of piezoelectric sensors under temperature influence are first investigated in this work by evaluating sensor capacitance and electromechanical admittance. Toward damage detection, the piezoelectric sensors (PES) are configured in a pitch-catch setup to transmit and receive guided waves in plate-like structures. When the inspected structure has damage such as a surface defect, the incident guided waves are reflected or scattered, resulting in changes in the wave measurements. A sparse array algorithm is developed and implemented using multiple sensors to image the structure. The sparse array algorithm is also evaluated at elevated temperature.
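For a single pitch-catch guided-wave path, the change detection behind such an SHM system can be reduced to comparing the current waveform against a pristine baseline; scattering from a new defect changes the received wave and raises the residual. This normalized residual energy is one common, illustrative damage metric, not the paper's sparse-array algorithm:

```python
import numpy as np

def damage_index(baseline, current):
    """Normalized residual energy between a baseline (pristine) guided-wave
    record and the current record for the same pitch-catch path; 0 means no
    change, larger values indicate more scattering from possible damage."""
    baseline = np.asarray(baseline, dtype=float)
    current = np.asarray(current, dtype=float)
    return float(np.linalg.norm(current - baseline) / np.linalg.norm(baseline))
```

A sparse array extends this idea by computing such residuals over many transmitter-receiver pairs and back-projecting them onto the structure to localize the defect; temperature compensation is needed in practice because thermal drift alone changes the waveforms.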
Multispectral Imaging in Cultural Heritage Conservation
NASA Astrophysics Data System (ADS)
Del Pozo, S.; Rodríguez-Gonzálvez, P.; Sánchez-Aparicio, L. J.; Muñoz-Nieto, A.; Hernández-López, D.; Felipe-García, B.; González-Aguilera, D.
2017-08-01
This paper sums up the main contributions of the thesis entitled "Multispectral imaging for the analysis of materials and pathologies in civil engineering, constructions and natural spaces", awarded by CIPA-ICOMOS for its connection with the preservation of Cultural Heritage. The thesis is framed within close-range remote sensing approaches based on the fusion of sensors operating in the optical domain (visible to shortwave infrared spectrum). In the field of heritage preservation, multispectral imaging is a suitable technique due to its non-destructive nature and its versatility. It combines imaging and spectroscopy to analyse materials and land covers, and it admits a variety of geomatic sensors for this purpose. These sensors collect both spatial and spectral information for a given scenario and spectral range, so that their storage units record the spectral properties of the radiation reflected by the surface of interest. The main goal of this research work is to characterise different construction materials, as well as the main pathologies of Cultural Heritage elements, by combining active and passive sensors recording data in different ranges. Conclusions about the suitability of each type of sensor and spectral range are drawn for each particular case study and damage type. It should be emphasised that the results are not limited to images: 3D intensity data from laser scanners can be integrated with 2D data from passive sensors to obtain high-quality products, owing to the added value that metric information brings to multispectral images.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
Nobukawa, Teruyoshi; Nomura, Takanori
2016-09-05
A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distribution using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.
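One way to read the "binary amplitude and four-level phase" data pages is as 3 bits per complex symbol: one amplitude bit times two phase bits. A sketch under the assumption that both amplitude levels are nonzero (so phase stays readable); the level values and decision thresholds are illustrative, not taken from the paper:

```python
import numpy as np

AMPLITUDES = [0.5, 1.0]                          # assumed binary amplitude levels (both nonzero)
PHASES = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]  # four-level phase

def encode(bits):
    """Pack 3 bits per complex symbol: 1 amplitude bit, then 2 phase bits."""
    syms = []
    for i in range(0, len(bits), 3):
        amp = AMPLITUDES[bits[i]]
        phase = PHASES[bits[i + 1] * 2 + bits[i + 2]]
        syms.append(amp * np.exp(1j * phase))
    return np.array(syms)

def decode(syms):
    """Threshold the amplitude, snap the phase to the nearest of the four levels."""
    bits = []
    for s in syms:
        a_bit = int(abs(s) > 0.75)                               # midpoint threshold
        idx = int(round((np.angle(s) % (2 * np.pi)) / (np.pi / 2))) % 4
        bits += [a_bit, idx >> 1, idx & 1]
    return bits
```

Compared with plain binary amplitude pages, the complex-amplitude page triples the bits carried per detected symbol, which is the density advantage the abstract points to.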
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High-resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly, without having to shoot a picture on film, develop it, and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), or a total array of approximately 435,000 pixels (see Fig. 1). FOR.A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an interlined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas, each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
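The pixel-shift technique described above amounts to interleaving several sub-pixel-shifted captures onto a denser grid. A toy sketch assuming four captures at half-pixel offsets; the actual FOR.A shift pattern and step sizes may differ:

```python
import numpy as np

def combine_shifted_frames(frames):
    """Interleave four captures taken with half-pixel sensor shifts into one
    image with twice the sampling in each axis.

    frames: dict with keys (dy, dx) in {0, 1} -> (H, W) arrays, where (dy, dx)
    is that capture's sensor shift in half-pixel steps.
    """
    h, w = frames[(0, 0)].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[(0, 0)].dtype)
    for (dy, dx), f in frames.items():
        out[dy::2, dx::2] = f   # each capture fills one phase of the dense grid
    return out
```

The gaps between photosites on an interlined CCD are what make this work: each shifted exposure samples scene detail that fell between the photosites of the previous one.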
State of the art in video system performance
NASA Technical Reports Server (NTRS)
Lewis, Michael J.
1990-01-01
The closed-circuit television (CCTV) system onboard the Space Shuttle comprises cameras, a video signal switching and routing unit (VSU), and a Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements is shown graphically.
Computer vision barrel inspection
NASA Astrophysics Data System (ADS)
Wolfe, William J.; Gunderson, James; Walworth, Matthew E.
1994-02-01
One of the Department of Energy's (DOE) ongoing tasks is the storage and inspection of a large number of waste barrels containing a variety of hazardous substances. Martin Marietta is currently contracted to develop a robotic system, the Intelligent Mobile Sensor System (IMSS), for the automatic monitoring and inspection of these barrels. The IMSS is a mobile robot with multiple sensors: video cameras, illuminators, a laser ranger, and a barcode reader. We assisted Martin Marietta in this task, specifically in the development of image processing algorithms that recognize and classify the barrel labels. Our subsystem uses video images to detect and locate the barcode, so that the barcode reader can be pointed at the barcode.
Full-field acoustomammography using an acousto-optic sensor.
Sandhu, J S; Schmidt, R A; La Rivière, P J
2009-06-01
In this Letter the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the AO sensor. The image is captured with a digital video camera for processing, analysis, and storage. In this Letter, the authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable with higher frequency ultrasound, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on the use of a large area AO sensor. Of course, variations in attenuation of connective, glandular, and fatty tissues will lead to images with more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed.
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
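The spectral-decorrelation idea behind the second approach can be seen with a single Haar wavelet level applied along the band axis: smooth spectra push nearly all energy into the approximation coefficients, leaving near-zero detail coefficients that quantize and entropy-code cheaply. An illustrative numpy-only sketch; the study's actual wavelet filters and decomposition depth are not reproduced here:

```python
import numpy as np

def haar_1d(x, axis=0):
    """One level of the orthonormal Haar wavelet transform along an axis."""
    x = np.moveaxis(x, axis, 0)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
    dif = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    return np.moveaxis(avg, 0, axis), np.moveaxis(dif, 0, axis)

# Toy hyperspectral cube: spectra vary smoothly, so adjacent bands are redundant.
rng = np.random.default_rng(0)
bands = np.linspace(0.0, 1.0, 8)
cube = rng.random((4, 4, 1)) * bands        # shape (4, 4, 8), smooth along axis 2
approx, detail = haar_1d(cube, axis=2)
# Nearly all signal energy lands in `approx`; `detail` compresses very well.
```

The same transform applied along the two spatial axes gives the fully three-dimensional variant of the third approach.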
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that can be accessed efficiently from JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
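The "controlled lossy" idea in LERC is to quantize each raster block with a step of twice the user's error tolerance, so reconstruction error is provably bounded while values become small integers that entropy-code well. A simplified sketch of that core step only, not the actual LERC bitstream or block layout:

```python
import numpy as np

def lerc_quantize(block, max_error):
    """LERC-style lossy step: quantize so no pixel deviates by more than max_error."""
    step = 2.0 * max_error                  # rounding to this grid bounds error at step/2
    base = float(block.min())               # offset so quantized values are non-negative
    q = np.round((block - base) / step).astype(np.uint32)
    return q, base, step

def lerc_dequantize(q, base, step):
    return base + q * step

data = np.array([[10.0, 10.4], [11.1, 12.9]])
q, base, step = lerc_quantize(data, max_error=0.5)
restored = lerc_dequantize(q, base, step)   # every value within 0.5 of the original
```

Setting max_error to the sensor's noise floor makes the loss invisible in analysis while shrinking the integer range that must be encoded.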
Seo, Min-Woong; Kawahito, Shoji
2017-12-01
A lock-in pixel CMOS image sensor (CIS) with a large full well capacity (FWC) for a wide signal detection range and low temporal random noise for high sensitivity, embedded with two in-pixel storage diodes (SDs), has been developed and is presented in this paper. For fast charge transfer from the photodiode to the SDs, a lateral electric field charge modulator (LEFM) is used in the developed lock-in pixel. As a result, the time-resolved CIS achieves a very large SD-FWC of approximately 7 ke-, low temporal random noise of 1.2 e- rms at 20 fps with true correlated double sampling operation, and a fast intrinsic response of less than 500 ps at 635 nm. The proposed imager has an effective pixel array of … and a pixel size of …. The sensor chip is fabricated in a Dongbu HiTek 1P4M 0.11 μm CIS process.
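True correlated double sampling, mentioned above, cancels per-pixel offset and reset (kTC) noise by subtracting a reset-level sample from the signal-level sample taken on the same pixel. A minimal numerical illustration of that arithmetic (the noise magnitudes are invented for the demo):

```python
import numpy as np

def correlated_double_sample(reset_frame, signal_frame):
    """True CDS: subtract the reset-level sample from the signal-level sample,
    cancelling the per-pixel offset and reset (kTC) noise the two share."""
    return signal_frame - reset_frame

rng = np.random.default_rng(1)
kTC = rng.normal(0.0, 50.0, (4, 4))     # reset noise, identical in both samples
photo_signal = np.full((4, 4), 100.0)   # true photo-charge, in electrons
reset_sample = kTC
signal_sample = kTC + photo_signal
out = correlated_double_sample(reset_sample, signal_sample)  # shared noise cancels
```

Because the correlated component cancels exactly, what remains is dominated by the much smaller uncorrelated readout noise, which is how the sensor reaches the quoted 1.2 e- rms figure.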
Battery management system with distributed wireless sensors
Farmer, Joseph C.; Bandhauer, Todd M.
2016-02-23
A system for monitoring parameters of an energy storage system having a multiplicity of individual energy storage cells. A radio frequency identification and sensor unit is connected to each of the individual energy storage cells. The radio frequency identification and sensor unit operates to sense the parameter of each individual energy storage cell and provides radio frequency transmission of the parameters of each individual energy storage cell. A management system monitors the radio frequency transmissions from the radio frequency identification and sensor units for monitoring the parameters of the energy storage system.
Real-time terrain storage generation from multiple sensors towards mobile robot operation interface.
Song, Wei; Cho, Seoungjae; Xi, Yulong; Cho, Kyungeun; Um, Kyhyun
2014-01-01
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
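The voxel-based flag map used above for redundant-point removal can be sketched as an incremental occupancy filter: a point is kept only if its voxel has not already been flagged by an earlier scan. Illustrative only; the function name, set-based flag map, and data layout are assumptions rather than the paper's implementation:

```python
import numpy as np

def voxel_flag_filter(points, voxel_size, flag=None):
    """Keep only points whose voxel is not yet flagged as occupied; return the
    kept points and the updated flag map so the next scan registers incrementally."""
    if flag is None:
        flag = set()
    kept = []
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))  # quantize to 3D grid cell
        if key not in flag:
            flag.add(key)
            kept.append(p)
    return np.array(kept), flag
```

Feeding each incoming scan through the same flag map is what lets large-scale point clouds accumulate in real time without storing duplicates.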
Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing
NASA Astrophysics Data System (ADS)
Sedlar, Michael F.; Griffith, Jerry A.
1988-07-01
This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS), which provides scene storage, processing, and an output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system that provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.
Compact and mobile high resolution PET brain imager
Majewski, Stanislaw [Yorktown, VA; Proffitt, James [Newport News, VA
2011-02-08
A brain imager includes a compact ring-like static PET imager mounted in a helmet-like structure. When attached to a patient's head, the helmet-like brain imager keeps the relative head-to-imager geometry fixed throughout the whole imaging procedure. The brain imaging helmet contains radiation sensors and minimal front-end electronics. A flexible mechanical suspension/harness system supports the weight of the helmet, allowing the patient limited head movement during imaging scans. The compact ring-like PET imager enables very high resolution imaging of neurological brain functions, cancer, and effects of trauma using a rather simple mobile scanner with limited space needs for use and storage.
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems show that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
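The 432-million-pixel figure checks out if one assumes the standard 9 x 9 inch (228.6 mm) aerial film frame; note also that 1/(2 x 39 lp/mm) ≈ 12.8 μm, the same order as the 11 μm scanning spot. The frame size is an assumption here, since the abstract does not state it:

```python
# Pixel-equivalence check for a scanned aerial film frame.
frame_mm = 228.6                  # assumed 9 x 9 inch NAPP frame edge
spot_um = 11.0                    # scanning spot size from the abstract
pixels_per_edge = frame_mm * 1000.0 / spot_um
total_pixels = pixels_per_edge ** 2
print(round(total_pixels / 1e6))  # prints 432 (million pixels), matching the abstract
```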
Energy storage management system with distributed wireless sensors
Farmer, Joseph C.; Bandhauer, Todd M.
2015-12-08
An energy storage system having multiple different types of energy storage and conversion devices. Each device is equipped with one or more sensors and RFID tags to communicate sensor information wirelessly to a central electronic management system, which controls the operation of each device. Each device can have multiple RFID tags and sensor types. Several energy storage and conversion devices can be combined.
Biomedical wellness monitoring system based upon molecular markers
NASA Astrophysics Data System (ADS)
Ingram, Whitney
2012-06-01
We wish to assist caretakers with a sensor monitoring system for tracking the physiological changes of home-alone patients. One goal is exploiting biomarkers and modern imaging sensors such as stochastic optical reconstruction microscopy (STORM), which has achieved visible imaging at the nanoscale. Imaging techniques like STORM can be combined with a fluorescent functional marker in a system to capture the early signs of transformation from wellness to illness. By exploiting both microscopic knowledge of genetic predisposition and the macroscopic influence of epigenetic factors, we hope to target these changes remotely. We adopt dual-spectral infrared imaging with blind source separation (BSS) to detect angiogenesis changes, and use laser speckle imaging for hypertension blood flow monitoring. Our design hypothesis for the monitoring system is guided by the user-friendly, veteran-preferred "4-Non" principles (non-invasive, non-contact, non-tethered, non-stop-to-measure) and by the NIH's "4Ps" initiatives (predictive, personalized, preemptive, and participatory). We augment the potential storage system with recent know-how in video Compressive Sampling (CSp) from surveillance cameras. In CSp only major changes are saved, which reduces the manpower cost of caretakers and medical analysts. The CSp algorithm is based on smart associative memory (AM) matrix storage: change features and detailed scenes are written by the outer product and read by the inner product, without the usual hash index for image searching. From this approach, we attempt to design an effective household monitoring approach to save healthcare costs and maintain the quality of life of seniors.
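The associative-memory storage described above, writing by outer product and reading by inner product, can be shown in a few lines. With orthonormal keys the read-out is exact; correlated keys add crosstalk. A minimal sketch in which the dimensions, keys, and values are purely illustrative:

```python
import numpy as np

def am_write(memory, key, value):
    """Write one association by outer product; associations superpose in one matrix."""
    return memory + np.outer(value, key)

def am_read(memory, key):
    """Read by inner product: project the memory matrix onto the query key."""
    return memory @ key

k1 = np.array([1.0, 0.0, 0.0])          # orthonormal keys give exact recall
k2 = np.array([0.0, 1.0, 0.0])
v1 = np.array([3.0, 1.0])               # stored "change feature" vectors
v2 = np.array([0.0, 5.0])
M = am_write(np.zeros((2, 3)), k1, v1)
M = am_write(M, k2, v2)
```

Because retrieval is a single matrix-vector product, no separate hash index over stored scenes is needed, which is the point the abstract makes.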
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
Two different sensor technologies and their properties were analyzed. The analysis simulated a leak occurring from an underground storage tank. Figaro gas sensors and the Adsistor gas sensor were tested in simulated underground storage tank environments using the Carnegie Mellon R...
Micromachined Chip Scale Thermal Sensor for Thermal Imaging.
Shekhawat, Gajendra S; Ramachandran, Srinivasan; Jiryaei Sharahi, Hossein; Sarkar, Souravi; Hujsak, Karl; Li, Yuan; Hagglund, Karl; Kim, Seonghwan; Aden, Gary; Chand, Ami; Dravid, Vinayak P
2018-02-27
The lateral resolution of scanning thermal microscopy (SThM) has hitherto never approached that of mainstream atomic force microscopy, mainly due to poor performance of the thermal sensor. Herein, we report a nanomechanical system-based thermal sensor (thermocouple) that enables the high lateral resolution often required in nanoscale thermal characterization across a wide range of applications. This thermocouple-based probe technology delivers excellent lateral resolution (∼20 nm), extended high-temperature measurements >700 °C without cantilever bending, and thermal sensitivity (∼0.04 °C). The origin of the significantly improved figures-of-merit lies in the probe design, which consists of a hollow silicon tip integrated with a vertically oriented thermocouple sensor at the apex (low thermal mass) that interacts with the sample through a metallic nanowire (50 nm diameter), thereby achieving high lateral resolution. The efficacy of this approach to SThM is demonstrated by imaging embedded metallic nanostructures in silica core-shell, metal nanostructures coated with polymer films, and metal-polymer interconnect structures. The nanoscale pitch and extremely small thermal mass of the probe promise significant improvements over existing methods and a wide range of applications in several fields, including the semiconductor industry, biomedical imaging, and data storage.
Water Catchment and Storage Monitoring
NASA Astrophysics Data System (ADS)
Bruenig, Michael; Dunbabin, Matt; Moore, Darren
2010-05-01
Sensors and Sensor Networks technologies provide the means for comprehensive understanding of natural processes in the environment by radically increasing the availability of empirical data about the natural world. This step change is achieved through a dramatic reduction in the cost of data acquisition and many orders of magnitude increase in the spatial and temporal granularity of measurements. Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) is undertaking a strategic research program developing wireless sensor network technology for environmental monitoring. As part of this research initiative, we are engaging with government agencies to densely monitor water catchments and storages, thereby enhancing understanding of the environmental processes that affect water quality. In the Gold Coast hinterland in Queensland, Australia, we are building sensor networks to monitor restoration of rainforest within the catchment, and to monitor methane flux release and water quality in the water storages. This poster will present our ongoing work in this region of eastern Australia. The Springbrook plateau in the Gold Coast hinterland lies within a World Heritage listed area, has uniquely high rainfall, hosts a wide range of environmental gradients, and forms part of the catchment for Gold Coast's water storages. Parts of the plateau are being restored from agricultural grassland to native rainforest vegetation. Since April 2008, we have had a 10-node, multi-hop sensor network deployed there to monitor microclimate variables. This network will be expanded to 50-nodes in February 2010, and to around 200-nodes and 1000 sensors by mid-2011, spread over an area of approximately 0.8 square kilometers. The extremely dense microclimate sensing will enhance knowledge of the environmental factors that enhance or inhibit the regeneration of native rainforest. 
The final network will also include nodes with acoustic and image sensing capability for monitoring higher-level parameters such as fauna diversity. The regenerating rainforest environment presents a number of interesting challenges for wireless sensor networks related to energy harvesting and to reliable low-power wireless communications through dense and wet vegetation. Located downstream from the Springbrook plateau, the Little Nerang and Hinze dams are the two major water supply storages for the Gold Coast region. In September 2009 we fitted methane, light, wind, and sonar sensors to our autonomous electric boat platform and successfully demonstrated autonomous collection of methane flux release data on Little Nerang Dam. Sensor and boat status data were relayed back to a human operator on the shore of the dam via a small network of our Fleck™ nodes. The network also included 4 floating nodes, each fitted with a string of 6 temperature sensors for profiling temperature at different water depths. We plan to expand the network further during 2010 to incorporate floating methane nodes, additional temperature sensing nodes, and land-based microclimate nodes. The overall monitoring system will provide significant data for understanding the connected catchment-to-storage system and will provide continuous data to monitor and understand change trends within this World Heritage area.
Application of the high resolution return beam vidicon
NASA Technical Reports Server (NTRS)
Cantella, M. J.
1977-01-01
The Return Beam Vidicon (RBV) is a high-performance electronic image sensor and electrical storage component. It can accept continuous or discrete exposures. Information can be read out with a single scan or with many repetitive scans for either signal processing or display. Resolution capability is 10,000 TV lines/height, and at 100 lp/mm, performance matches or exceeds that of film, particularly with low-contrast imagery. Electronic zoom can be employed effectively for image magnification and data compression. The high performance and flexibility of the RBV permit wide application in systems for reconnaissance, scan conversion, information storage and retrieval, and automatic inspection and test. This paper summarizes the characteristics and performance parameters of the RBV and cites examples of feasible applications.
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional-grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single-lens reflex camera with an f/1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3-frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes based on either powdered europium-doped barium fluoride bromide or needle-structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined; it falls to a value of 0.2 at around 2 lp/mm and is limited by scattering of the emitted light from the storage phosphor rather than by the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high-frequency wing attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency, and hence signal-to-noise ratio at medical doses, and the restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
Neuromorphic vision sensors and preprocessors in system applications
NASA Astrophysics Data System (ADS)
Kramer, Joerg; Indiveri, Giacomo
1998-09-01
A partial review of neuromorphic vision sensors that are suitable for use in autonomous systems is presented. Interfaces are being developed to multiplex the high-dimensional output signals of arrays of such sensors and to communicate them in standard formats to off-chip devices for higher-level processing, actuation, storage and display. Alternatively, on-chip processing stages may be implemented to extract sparse image parameters, thereby obviating the need for multiplexing. Autonomous robots are used to test neuromorphic vision chips in real-world environments and to explore the possibilities of data fusion from different sensing modalities. Examples of autonomous mobile systems that use neuromorphic vision chips for line tracking and optical flow matching are described.
A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity
Zhang, Fan; Niu, Hanben
2016-01-01
In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. PMID:27367699
Digital imaging for dental caries.
Wenzel, A
2000-04-01
Laboratory studies show that digital intraoral radiography systems are as accurate as dental film for the detection of caries when a good-quality image is obtained, although more re-takes might be necessary because of positioning errors with the digital systems, particularly the charge-coupled device sensors. The phosphor plate is more comfortable for the patient than nondigital systems, and the dose can be further reduced with the storage phosphors. Cross-contamination does not pose a problem with digital systems if simple hygiene procedures are observed.
NASA Astrophysics Data System (ADS)
Moser, Eric K.
2016-05-01
LITENING is an airborne system-of-systems providing long-range imaging, targeting, situational awareness, target tracking, weapon guidance, and damage assessment. It incorporates a laser designator and laser range finders, as well as non-thermal and thermal imaging systems with multi-sensor boresight. Robust operation is at a premium, and subsystems are partitioned into modular, swappable line-replaceable units (LRUs) and shop-replaceable units (SRUs). This presentation will explore design concepts for sensing, data storage, and presentation of imagery associated with the LITENING targeting pod. The "eyes" of LITENING are its electro-optic sensors. Since LITENING II was introduced to the US market in the late 1990s, the program has evolved and matured through a series of spiral functional improvements and sensor upgrades, including laser-illuminated imaging and, more recently, color sensing. Although aircraft displays lie outside the LITENING system, updates to the available viewing modules have also driven change and resulted in increasingly effective ways of using the targeting system. One of the latest LITENING spiral upgrades adds a new capability to display and capture visible-band color imagery using new sensors. This augments the system's existing capabilities, which operate over a growing set of visible and invisible colors, infrared bands, and laser line wavelengths. A COTS visible-band camera solution using a CMOS sensor has been adapted to meet the particular needs of the airborne targeting use case.
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS based trajectory with independently estimated positions in cases of prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.
Recent Progress of Self-Powered Sensing Systems for Wearable Electronics.
Lou, Zheng; Li, La; Wang, Lili; Shen, Guozhen
2017-12-01
Wearable/flexible electronic sensing systems are considered to be one of the key technologies in the next generation of smart personal electronics. To realize portable personal devices for mobile electronics applications, wearable electronic sensors that can work sustainably and continuously without an external power supply are highly desired. The recent progress and advantages of wearable self-powered electronic sensing systems for mobile or personal attachable health monitoring applications are presented. An overview of various types of wearable electronic sensors is provided, including flexible tactile sensors, wearable image sensor arrays, biological and chemical sensors, temperature sensors, and multifunctional integrated sensing systems. Self-powered sensing systems with integrated energy units are then discussed, categorized as energy-harvesting self-powered sensing systems, energy-storage integrated sensing systems, and all-in-one integrated sensing systems. Finally, the future perspectives of self-powered sensing systems for wearable electronics are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
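The cubicle idea described above can be made concrete with a short sketch. The following is an illustrative Python example (not the authors' implementation): binary jot values are simulated from Poisson photoelectron arrivals, and one output pixel is formed by summing an x-y-t "cubicle" of bits. The exposure level and kernel sizes are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-bit QIS data: 16 bit planes of 64x64 jots.
# Each bit is 1 if at least one photoelectron arrived (Poisson arrivals).
mean_photoelectrons_per_jot = 0.5          # assumed exposure level
arrivals = rng.poisson(mean_photoelectrons_per_jot, size=(16, 64, 64))
bit_planes = (arrivals >= 1).astype(np.uint8)

def qis_output_pixel(planes, x, y, kx=4, ky=4):
    """Form one output pixel by summing a kx * ky * T 'cubicle' of bits."""
    cubicle = planes[:, y:y + ky, x:x + kx]
    return int(cubicle.sum())              # ranges from 0 to kx*ky*T

pixel = qis_output_pixel(bit_planes, 0, 0)
print(pixel)
```

Because the cubicle is cut post-acquisition, the same bit planes can be re-binned with a different kernel to trade spatial resolution against noise, which is the adjustability the abstract describes.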
Experimental teaching and training system based on volume holographic storage
NASA Astrophysics Data System (ADS)
Jiang, Zhuqing; Wang, Zhe; Sun, Chan; Cui, Yutong; Wan, Yuhong; Zou, Rufei
2017-08-01
The experiment of volume holographic storage for teaching and training the practical ability of senior students in Applied Physics is introduced. Through this experiment the students learn to use advanced optoelectronic devices and automatic control methods, and deepen their understanding of the theoretical knowledge of optical information processing and photonics studied in earlier courses. In the experiment, multiplexed holographic recording and readout are based on the Bragg selectivity of a volume holographic grating, in which the Bragg diffraction angle depends on the grating-recording angle. By using different interference angles between the reference and object beams, holograms can be recorded in a photorefractive crystal, and the object images can then be read out from these holograms via angular addressing using the original reference beam. In this system, experimental data acquisition and control of the optoelectronic devices, such as shutter on/off switching, loading images into the SLM and image acquisition by a CCD sensor, are automated using LabVIEW programming.
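The Bragg selectivity underlying the angular multiplexing above can be sketched numerically. The following Python example (an illustration with assumed values: a 532 nm recording wavelength and a refractive index of 2.2, roughly that of LiNbO3) computes the period of a grating recorded by two beams at a given in-medium half-angle and then recovers the Bragg readout angle from 2nΛ sin θ = λ.

```python
import math

def grating_period(wavelength_nm, half_angle_deg, n=2.2):
    """Period of a grating written by two beams at +/- half_angle
    (in-medium) in a crystal of refractive index n."""
    theta = math.radians(half_angle_deg)
    return wavelength_nm / (2 * n * math.sin(theta))

def bragg_angle_deg(wavelength_nm, period_nm, n=2.2):
    """In-medium Bragg readout angle satisfying 2*n*Lambda*sin(theta) = lambda."""
    return math.degrees(math.asin(wavelength_nm / (2 * n * period_nm)))

period = grating_period(532.0, 15.0)
print(round(bragg_angle_deg(532.0, period), 6))  # recovers ~15 degrees
```

Reading out at the original recording angle satisfies the Bragg condition exactly, which is why each hologram in the stack can be addressed independently by restoring the corresponding reference-beam angle.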
Large-area, flexible imaging arrays constructed by light-charge organic memories
Zhang, Lei; Wu, Ti; Guo, Yunlong; Zhao, Yan; Sun, Xiangnan; Wen, Yugeng; Yu, Gui; Liu, Yunqi
2013-01-01
Existing organic imaging circuits, which offer attractive benefits of light weight, low cost and flexibility, are exclusively based on phototransistor or photodiode arrays. One shortcoming of these photo-sensors is that the light signal must remain invariant throughout the whole pixel-addressing and reading process. As a feasible solution, we synthesized a new charge-storage molecule and embedded it into a device, which we call a light-charge organic memory (LCOM). The LCOM integrates the functionalities of a photo-sensor and a non-volatile memory. Thanks to deliberate engineering of the electronic structure and the self-organization process at the interface, 92% of the stored charges, which are linearly controlled by the quantity of light, are retained after 20,000 s. The stored charges can also be non-destructively read and erased by a simple voltage program. These results pave the way to large-area, flexible imaging circuits and demonstrate a bright future for small molecular materials in non-volatile memory. PMID:23326636
NASA Astrophysics Data System (ADS)
Näthe, Paul; Becker, Rolf
2014-05-01
Soil moisture and plant available water are important environmental parameters that affect plant growth and crop yield. Hence, they are significant parameters for vegetation monitoring and precision agriculture. However, validation through ground-based soil moisture measurements is necessary when assessing soil moisture, plant canopy temperature, soil temperature and soil roughness with airborne hyperspectral imaging systems in a corresponding hyperspectral imaging campaign as part of the INTERREG IV A project SMART INSPECTORS. Here, commercially available sensors for matric potential, plant available water and volumetric water content are utilized for automated measurements with smart sensor nodes developed on the basis of open-source 868 MHz radio modules, featuring a full-scale microcontroller unit that allows self-sufficient operation of the sensor nodes on batteries in the field. The data generated by each of these sensor nodes are transferred wirelessly with an open-source protocol to a central node, the so-called "gateway". This gateway collects, interprets and buffers the sensor readings and eventually pushes the time series onto a server-based database. The entire data-processing chain, from the sensor reading to the final storage of the time series on a server, is realized with open-source hardware and software in such a way that the recorded data can be accessed from anywhere through the internet. We will present how this open-source wireless sensor network is developed and specified for the application of ground truthing. In addition, the system's perspectives and potential with respect to usability and applicability for vegetation monitoring and precision agriculture are pointed out. Regarding the corresponding hyperspectral imaging campaign, results from ground measurements are discussed in terms of their contribution to the remote sensing system.
Finally, the significance of the wireless sensor network for the application of ground truthing shall be determined.
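The collect/buffer/push chain performed by such a gateway can be sketched in a few lines. The following Python example is a minimal stand-in, not the described system: node IDs, the reading format, the push threshold and the in-memory "database" are all hypothetical placeholders for the real radio protocol and server database.

```python
import time
from collections import defaultdict

class Gateway:
    """Minimal sketch of a sensor-network gateway: collect readings,
    buffer them per node, and push batches to a server database."""
    def __init__(self, push_threshold=3):
        self.buffer = defaultdict(list)      # node_id -> [(t, value), ...]
        self.database = []                   # stands in for the server DB
        self.push_threshold = push_threshold

    def receive(self, node_id, value, timestamp=None):
        t = timestamp if timestamp is not None else time.time()
        self.buffer[node_id].append((t, value))
        if len(self.buffer[node_id]) >= self.push_threshold:
            self.push(node_id)

    def push(self, node_id):
        # Flush the buffered batch for this node to long-term storage.
        self.database.extend((node_id, t, v) for t, v in self.buffer[node_id])
        self.buffer[node_id].clear()

gw = Gateway()
for i, moisture in enumerate([0.21, 0.22, 0.24]):   # hypothetical readings
    gw.receive("node-07", moisture, timestamp=i)
print(len(gw.database))
```

Buffering before pushing is the design choice the abstract implies: it decouples the low-power radio side from the server upload and tolerates intermittent connectivity.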
A new photoconductor imaging system for digital radiography.
de Monts, H; Beaumont, F
1989-01-01
Amorphous selenium is a material often used in x-ray imaging systems. The main application is xeroradiography, where the sensor is a layer of selenium on a conductive substrate. The signal is a charge density on the surface, which is revealed by a toner or by an electrostatic probe for digitization. In the system described here, the sensor structure is different: the sensor is covered by an electrode, a thin layer of metal, which provides another interface. The reading system requires scanning a light beam, and the resolving power depends on the size of the beam. It is easier to scan a light beam than electrostatic probes, so a more compact system can be realized. The process has two phases: storage and reading. The time elapsed between the two phases reduces the quality of the image, so an in situ reading system integrated into the radiographic machine will, for this reason, be more efficient. The sensor also needs a good memory effect. We have investigated different sensors based on a structure of a thin photoconductive layer between two electrodes to find a memory effect. This phenomenon has already been observed in Bi12SiO20 (B. Richard, "Contribution à l'étude d'un procédé d'imagerie radiologique utilisant le photoconducteur Bi12SiO20," Ph.D. thesis, Paris, 1987). In amorphous selenium with some dopants and some types of metallic contact, the memory effect is strong enough to realize a system. With 2 x 2 cm samples, a complete x-ray digital imaging system has been built. (ABSTRACT TRUNCATED AT 250 WORDS)
Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs
NASA Astrophysics Data System (ADS)
Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.
2017-08-01
In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, from several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the lightweight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for resampling the images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, since we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, processing times, resulting images, and block diagrams of the described architecture. The stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device.
An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real-time the gyrometers of the IMU.
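The core idea of stacking registered short exposures can be sketched compactly. The following Python example is a simplified stand-in for the method above: it handles only integer translations (where the real system estimates a full geometric relation with IMU-aided FAST features and resampling) and simulates the jittered frames it aligns; the scene, shifts and noise level are all assumed.

```python
import numpy as np

def stack_images(frames, shifts):
    """Average short-exposure frames after undoing per-frame integer
    shifts (a stand-in for full feature-based registration)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(1)
scene = rng.random((32, 32))
shifts = [(0, 0), (1, 2), (-2, 1)]          # simulated camera jitter
# Each noisy frame is the scene shifted plus read noise.
frames = [np.roll(scene, s, axis=(0, 1)) + 0.05 * rng.standard_normal((32, 32))
          for s in shifts]
stacked = stack_images(frames, shifts)
print(np.abs(stacked - scene).mean())   # residual noise, averaged down
```

Averaging N registered frames reduces the read-noise standard deviation by roughly a factor of sqrt(N), which is why the composite behaves like a single long exposure without the motion blur.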
Satellite Observation Systems for Polar Climate Change Studies
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.
2012-01-01
The key observational tools for detecting large-scale changes of various parameters in the polar regions have been satellite sensors. These include passive and active satellite systems at visible, infrared and microwave frequencies. The monitoring started with the Tiros and Nimbus research satellite series in the 1970s, but during that period not much data was stored digitally because of the limitations and cost of the needed storage systems. Continuous global data became available starting with the launch of ocean color, passive microwave, and thermal infrared sensors on board Nimbus-7 and the Synthetic Aperture Radar, Radar Altimeter and Scatterometer on board the SeaSat satellite, both launched in 1978. Nimbus-7 lasted longer than expected and provided about 9 years of useful data, while SeaSat quit working after 3 months but provided very useful data that became the baseline for follow-up systems with similar capabilities. Over the years, many new sensors were launched, some from the Japan Aerospace Exploration Agency (JAXA), some from the European Space Agency (ESA) and, more recently, from Russia, China, Korea, Canada and India. For polar studies, among the most useful sensors has been the passive microwave sensor, which provides day/night and almost all-weather observation of the surface. These sensors provide sea surface temperature, precipitation, wind, water vapor and sea ice concentration data that have been very useful in monitoring the climate of the region. More than 30 years of such data are now available, starting with the Scanning Multichannel Microwave Radiometer (SMMR) on board Nimbus-7, the Special Sensor Microwave/Imager (SSM/I) on board Defense Meteorological Satellite Program (DMSP) satellites and the Advanced Microwave Scanning Radiometer on board the EOS/Aqua satellite.
The techniques developed to derive geophysical parameters from data provided by these and other sensors, the associated instrumental and algorithm errors, and validation techniques will be discussed. An important issue is the organization and storage of the hundreds of terabytes of data collected by even just a few of these satellite sensors. Advances in mass storage and computer technology have made it possible to overcome many of the collection and archival problems, and the availability of comprehensive satellite data sets put together by NASA's Earth Observing System project will be discussed.
How Small Is Too Small? Technology into 2035
2010-12-01
by Arrayed Polyimide Joint Actuators," Journal of Micromechanics and Microengineering 10, no. 3 [2000]: 337–49.) 6 A more integrated microrobot is...application-specific integrated circuit used for overall control; three piezoelectric legs used for forward, reverse, and z-axis rotation movements...a piezoelectric touch sensor; and power storage. Figure 3. Captured video image of an integrated and autonomous microrobot. (Reproduced from Seth
Real-time high-level video understanding using data warehouse
NASA Astrophysics Data System (ADS)
Lienard, Bruno; Desurmont, Xavier; Barrie, Bertrand; Delaigle, Jean-Francois
2006-02-01
High-level video content analysis such as video surveillance is often limited by the computational aspects of automatic image understanding, i.e. it requires huge computing resources for reasoning processes like categorization and huge amounts of data to represent knowledge of objects, scenarios and other models. This article explains how to design and develop a "near real-time adaptive image datamart", used first as a decision-support system for vision algorithms and then as a mass storage system. Using the RDF specification as the storage format for vision-algorithm metadata, we can optimize data warehouse concepts for video analysis, add processes able to adapt the current model, and pre-process data to speed up queries. In this way, when new data is sent from a sensor to the data warehouse for long-term storage, using remote procedure calls embedded in object-oriented interfaces to simplify queries, it is processed and the in-memory data model is updated. After some processing, possible interpretations of this data can be returned to the sensor. To demonstrate this new approach, we present typical scenarios applied to this architecture, such as people tracking and event detection in a multi-camera network. Finally, we show how this system becomes a high-semantic data container for external data mining.
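Representing vision-algorithm metadata as subject-predicate-object triples, as the RDF-based design above describes, can be illustrated with a tiny in-memory store. The following Python sketch is purely hypothetical (the predicates, event IDs and query API are invented for illustration; a real system would use an RDF library and SPARQL-style queries).

```python
class TripleStore:
    """Minimal in-memory stand-in for an RDF metadata store:
    facts are (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("event:42", "rdf:type", "ex:PersonDetected")
store.add("event:42", "ex:camera", "cam:3")
store.add("event:43", "rdf:type", "ex:VehicleDetected")

# Which events came from camera 3?
print(store.query(predicate="ex:camera", obj="cam:3"))
```

The wildcard-pattern query is the essential operation: it lets later reasoning stages retrieve all metadata about an event, or all events of a type, without a fixed schema.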
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate with regard to data storage and its growth has become a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm-intelligence-based optimization approaches to find optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm is used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
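The storage-placement idea can be illustrated with a plain PSO sketch. This Python example is not the paper's hybrid PSO/fuzzy-C-means method: the cost model (sum of squared distances as an energy proxy), the node layout and all PSO parameters are assumptions made for illustration.

```python
import random

def energy_cost(pos, producers, consumers):
    """Transmission-cost proxy: sum of squared distances from the
    candidate storage position to all producers and consumers."""
    return sum((pos[0] - x) ** 2 + (pos[1] - y) ** 2
               for x, y in producers + consumers)

def pso_storage_position(producers, consumers, particles=20, iters=100):
    """Plain PSO search for a low-cost storage-node position
    in a 100 x 100 field (a sketch, not the hybrid variant)."""
    random.seed(0)
    swarm = [[random.uniform(0, 100), random.uniform(0, 100)]
             for _ in range(particles)]
    vel = [[0.0, 0.0] for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(pbest, key=lambda p: energy_cost(p, producers, consumers))
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - p[d])
                             + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if energy_cost(p, producers, consumers) < energy_cost(pbest[i], producers, consumers):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest],
                    key=lambda p: energy_cost(p, producers, consumers))
    return gbest

producers = [(10, 10), (90, 10)]   # hypothetical node layout
consumers = [(50, 90)]
best = pso_storage_position(producers, consumers)
print([round(c, 1) for c in best])  # should land near the centroid
```

With a squared-distance cost the optimum is the centroid of all nodes; a weighted cost reflecting per-node data rates, as the work above considers, would pull the storage position toward the heaviest traffic sources.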
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced man-portable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages in lightweight, fieldable target location, ranging and imaging. The unit incorporates a wide field-of-view (5° × 3°) uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are performed with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5-micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target in a narrower 0.3° field of view. Target range is acquired using the integrated LRF, and the target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank and corrected magnetic azimuth. Range-gate timing and coordinated receiver-optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper describes flash laser illumination technology, EBCCD camera technology with the flash laser detection system, and image resolution improvement through frame averaging.
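The range-gate timing mentioned above follows directly from the round-trip time of light: the gate opens after a delay of 2R/c, and its width sets the depth slice that is imaged. The Python sketch below illustrates this arithmetic; the example range and gate width are assumed values, not parameters of the described system.

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_ns(target_range_m):
    """Round-trip delay before opening the camera gate for a
    target at the given range (range-gated imaging timing)."""
    return 2.0 * target_range_m / C * 1e9

def gate_depth_m(gate_width_ns):
    """Depth slice imaged while the gate stays open for gate_width_ns."""
    return C * gate_width_ns * 1e-9 / 2.0

print(round(gate_delay_ns(1500.0), 2))   # ~10007 ns for a 1.5 km target
print(round(gate_depth_m(100.0), 1))     # a 100 ns gate spans ~15 m of depth
```

Because only photons returning within the gate window are recorded, backscatter from fog or foreground clutter outside the depth slice is rejected, which is the main benefit of gating the EBCCD receiver to the laser flash.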
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R&D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabrück and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a 1-million-fps video camera with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved since. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.
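The ISIS principle above can be modeled as a per-pixel ring buffer: each pixel writes every frame into a small bank of in-pixel memory elements, overwriting the oldest, until a trigger freezes the bank and the burst is read out. The Python sketch below illustrates this behavior for a single pixel; the storage depth of 5 is an arbitrary value chosen for the demonstration.

```python
class InSituStoragePixel:
    """Sketch of the ISIS idea: each pixel owns a small bank of memory
    elements, written cyclically every frame until a trigger freezes it."""
    def __init__(self, depth=5):
        self.memory = [0] * depth
        self.head = 0
        self.frozen = False

    def capture(self, signal):
        if self.frozen:
            return                            # event already captured
        self.memory[self.head] = signal       # overwrite the oldest element
        self.head = (self.head + 1) % len(self.memory)

    def trigger(self):
        self.frozen = True                    # event detected: stop writing

    def read_out(self):
        # Return the stored burst in oldest-to-newest order.
        return self.memory[self.head:] + self.memory[:self.head]

pixel = InSituStoragePixel(depth=5)
for signal in range(100, 112):                # continuous capture, 12 frames
    pixel.capture(signal)
pixel.trigger()
print(pixel.read_out())   # the last 5 frames: [107, 108, 109, 110, 111]
```

Because writing is purely local to the pixel, every pixel records simultaneously at the full frame rate, and the slow read-out only happens once, after the trigger, which is what decouples the Mfps capture rate from the read-out bandwidth.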
Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam
2015-01-01
The current high-profile debate over data storage and its growth has made it a strategic task in the world of networking. Storage placement depends on the sensor nodes (producers), base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern addressed here is finding optimal data storage positions in wireless sensor networks. Earlier work did not apply swarm-intelligence-based optimization approaches to this problem. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for storage nodes: a hybrid particle swarm optimization algorithm finds suitable positions for storage nodes while the total energy cost of data transmission is minimized. Clustering-based distributed data storage is employed, with the clustering problem solved by the fuzzy C-means algorithm. This work also considers the data rates and locations of multiple producers and consumers when determining optimal storage positions. The algorithm is implemented in a network simulator, and the experimental results show that the proposed clustering- and swarm-intelligence-based ODS strategy is more effective than earlier approaches. PMID:25734182
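As an illustrative sketch of the particle swarm idea only (not the authors' hybrid PSO or their energy model; the node coordinates, data rates, and PSO coefficients below are assumed), a storage position can be chosen by minimizing a rate-weighted transmission cost:

```python
import random
random.seed(1)

# Hypothetical producer/consumer coordinates and data rates.
nodes = [((10, 20), 3.0), ((40, 5), 1.0), ((25, 30), 2.0)]

def cost(pos):
    # Energy proxy: rate-weighted squared distance to each node.
    return sum(r * ((pos[0] - x)**2 + (pos[1] - y)**2) for (x, y), r in nodes)

def pso(iters=200, swarm=20):
    """Plain global-best PSO over a 2-D storage position."""
    parts = [[random.uniform(0, 50), random.uniform(0, 50)] for _ in range(swarm)]
    vels = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in parts]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                vels[i][d] = (0.7 * vels[i][d]
                              + 1.5 * random.random() * (pbest[i][d] - p[d])
                              + 1.5 * random.random() * (gbest[d] - p[d]))
                p[d] += vels[i][d]
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
        gbest = min(pbest, key=cost)
    return gbest
```

For this convex cost the optimum is the rate-weighted centroid of the nodes, (20, 20.83) here, which the swarm approaches within a few hundred iterations; the paper's hybrid variant adds clustering and a more realistic energy model on top of this basic loop.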
NASA Astrophysics Data System (ADS)
Dang, Van H.; Wohlgemuth, Sven; Yoshiura, Hiroshi; Nguyen, Thuc D.; Echizen, Isao
Wireless sensor network (WSN) has been one of key technologies for the future with broad applications from the military to everyday life [1,2,3,4,5]. There are two kinds of WSN model models with sensors for sensing data and a sink for receiving and processing queries from users; and models with special additional nodes capable of storing large amounts of data from sensors and processing queries from the sink. Among the latter type, a two-tiered model [6,7] has been widely adopted because of its storage and energy saving benefits for weak sensors, as proved by the advent of commercial storage node products such as Stargate [8] and RISE. However, by concentrating storage in certain nodes, this model becomes more vulnerable to attack. Our novel technique, called zip-histogram, contributes to solving the problems of previous studies [6,7] by protecting the stored data's confidentiality and integrity (including data from the sensor and queries from the sink) against attackers who might target storage nodes in two-tiered WSNs.
Support of the eight-foot high-temperature tunnel modifications project
NASA Technical Reports Server (NTRS)
Hodges, Donald Y.; Shebalin, John V.
1987-01-01
An ultrasonic level sensor was developed to measure the liquid level in a storage vessel under high pressures, up to 6000 psi. The sensor is described. A prototype sensor was installed in the cooling-water storage vessel of the Eight-Foot High-Temperature Tunnel. Plans are being made to install the readout instrument in the control room so that tunnel operators can monitor the water level during the course of a tunnel run. It was discovered that the sensor will operate at cryogenic temperatures. Consequently, a sensor will be installed in the modified Eight-Foot High-Temperature Tunnel to measure the sound speed of liquid oxygen (LOX) as it is transferred from a storage vessel to the tunnel combustor at a pressure of about 3000 psi. The sound speed is known to be a reliable indicator of contamination of LOX by the pressurized gaseous nitrogen that will be used to effect the transfer. Subjecting the sensor to several temperature cycles from room temperature to liquid nitrogen temperature and back revealed no deterioration in sensor performance. The method using this sensor is superior to the original method, which was to bleed samples of LOX from the storage vessel into an independent chamber for measurement of the sound speed.
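The underlying pulse-echo relations are simple; as a sketch (illustrative numbers, not from the report), the round-trip time of an ultrasonic pulse gives the liquid column height, and conversely a known path length gives the sound speed used to check LOX purity:

```python
def level_from_echo(echo_time_s, sound_speed_m_s=1480.0):
    # Bottom-mounted transducer: the pulse travels up to the liquid
    # surface and back, so the liquid column height is c * t / 2.
    return sound_speed_m_s * echo_time_s / 2.0

def sound_speed(path_m, roundtrip_s):
    # Known reflector distance: measure round-trip time, infer c,
    # whose shift indicates contamination of the liquid.
    return 2.0 * path_m / roundtrip_s

# A 2 ms echo in water (c ~ 1480 m/s) implies a 1.48 m liquid column:
level = level_from_echo(0.002)
```

In the LOX application the second relation is the one of interest: the vessel geometry fixes the path, and a sound speed departing from the pure-LOX value flags dissolved nitrogen.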
Considerations for blending data from various sensors
Bauer, Brian P.; Barringer, Anthony R.
1980-01-01
A project is being proposed at the EROS Data Center to blend the information from sensors aboard various satellites. The problems of, and considerations for, blending data from several satellite-borne sensors are discussed. System descriptions of the sensors aboard the HCMM, TIROS-N, GOES-D, Landsat 3, Landsat D, Seasat, SPOT, Stereosat, and NOSS satellites, and the quantity, quality, image dimensions, and availability of these data, are summarized to define attributes of a multi-sensor satellite data base. Unique configurations of equipment, storage media, and specialized hardware to meet the data system requirements are described, as are archival media and improved sensors that will be on-line within the next 5 years. Definitions and the rigor required for blending various sensor data are given. Problems of merging data from the same sensor (intrasensor comparison) and from different sensors (intersensor comparison), the characteristics and advantages of cross-calibration of data, and integration of data into a product matrix field are addressed. Data processing considerations as affected by formation, resolution, and the problems of merging large data sets, and the organization of data bases for blending data, are presented. Examples utilizing GOES and Landsat data demonstrate techniques of data blending, and recommendations are discussed for future implementation of a set of standard scenes and the characteristics necessary for optimal data blending.
Automated baseline change detection -- Phases 1 and 2. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, E.
1997-10-31
The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD) based on image subtraction. Absolute change detection detects any visible physical change, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach relies on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract, DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS), and integrated with its electronics and software. This vehicle was designed especially to navigate in DOE waste storage facilities. Initial system testing was performed at Fernald in June 1996. After further development and more extensive integration, the prototype integrated system was installed and tested at the Radioactive Waste Management Complex (RWMC) at INEEL from April 1997 through the present (November 1997). The integrated system, composed of the ABCD imaging software and the IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI and in hard copy. Further work is needed to make the brassboard system more operationally robust.
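The image-subtraction core of ABCD can be sketched in a few lines (a generic illustration, not the project's software; the threshold and image sizes are assumed, and real use presupposes the precise camera repositioning the abstract describes):

```python
import numpy as np

def detect_changes(baseline, current, threshold=25):
    """Absolute change detection: flag pixels whose intensity differs
    from the registered baseline image by more than a threshold."""
    diff = np.abs(current.astype(np.int16) - baseline.astype(np.int16))
    return diff > threshold

# Synthetic barrel images: identical except for a simulated defect.
baseline = np.zeros((100, 100), dtype=np.uint8)
current = baseline.copy()
current[40:50, 40:50] = 200          # simulated rust spot / dent
mask = detect_changes(baseline, current)
# mask.sum() == 100 changed pixels flagged for the inspection report
```

Because any intensity change above threshold is flagged regardless of cause, the same operation catches rust, leaks, dents, and bulges alike, which is why registration accuracy dominates the method's false-alarm rate.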
MPCM: a hardware coder for super slow motion video sequences
NASA Astrophysics Data System (ADS)
Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.
2013-12-01
In the last decade, improvements in VLSI and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices have been designed to capture real-time video in high-resolution formats at frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, that demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capacity, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo pulse code modulation (MPCM) which is able to reduce the bandwidth requirements by up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point link.
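The modulo-PCM principle itself is simple to sketch in software (a generic illustration of the coding idea, not the authors' FPGA design; the 4-bit residue width is an assumption): the encoder transmits only the low-order bits of each sample, and the decoder resolves the resulting ambiguity by choosing the value nearest the previous reconstructed sample.

```python
def mpcm_encode(samples, k=4):
    """Transmit only the k least-significant bits of each sample."""
    return [s % (1 << k) for s in samples]

def mpcm_decode(codes, first, k=4):
    """Rebuild each sample as the value congruent to the received code
    (mod 2^k) that lies nearest the previously decoded sample."""
    m = 1 << k
    out = [first]
    for c in codes[1:]:
        prev = out[-1]
        base = prev - (prev % m) + c
        out.append(min((base - m, base, base + m), key=lambda v: abs(v - prev)))
    return out

samples = [100, 103, 99, 104, 110]       # slowly varying pixel values
codes = mpcm_encode(samples)              # 4 bits per sample on the wire
restored = mpcm_decode(codes, samples[0])
```

Here 4 bits per sample are transmitted instead of the full word; decoding is exact as long as successive samples differ by less than half the modulus (8 counts in this sketch), which is what makes the scheme attractive for the smooth, highly oversampled signals of ultrahigh-speed cameras.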
A Wide Dynamic Range Tapped Linear Array Image Sensor
NASA Astrophysics Data System (ADS)
Washkurak, William D.; Chamberlain, Savvas G.; Prince, N. Daryl
1988-08-01
Detectors for acousto-optic signal processing applications require fast transient response as well as wide dynamic range. There are two major choices of detector: conductive mode or integrating mode. Conductive mode detectors have an initial transient period before they reach their equilibrium state. The duration of this period depends on the light level as well as the detector capacitance. At low light levels a conductive mode detector is very slow; the response time is typically on the order of milliseconds. Generally, to obtain fast transient response an integrating mode detector is preferred. With integrating mode detectors, the dynamic range is determined by the charge storage capability of the transport shift registers and the noise level of the image sensor. The conventional method of improving dynamic range is to increase the shift register charge storage capability. To achieve a dynamic range of fifty thousand, assuming two hundred noise-equivalent electrons, a charge storage capability of ten million electrons would be required. To accommodate this amount of charge, unrealistic shift register widths would be required. Therefore, with an integrating mode detector it is difficult to achieve a dynamic range of over four orders of magnitude of input light intensity. Another alternative is to solve the problem at the photodetector and not at the shift register. DALSA's wide dynamic range detector utilizes an optimized, ion-implant-doped, profiled MOSFET photodetector specifically designed for wide dynamic range. When this new detector operates at high speed and at low light levels, photons are collected and stored in an integrating fashion. However, at bright light levels, where transient periods are short, the detector switches into a conductive mode. The light intensity is logarithmically compressed into small charge packets, easily carried by the CCD shift register.
As a result of the logarithmic conversion, dynamic ranges of over six orders of magnitude are obtained. To achieve the short integration times necessary in acousto-optic applications, the wide dynamic range detector has been implemented in a tapped array architecture with eight outputs and 256 photoelements. Operation of each output at 16 MHz yields detector integration times of 2 microseconds. Buried-channel, two-phase CCD shift register technology is utilized to minimize image sensor noise, improve video output rates, and increase ease of operation.
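The benefit of logarithmic compression can be sketched numerically (an illustrative model with an assumed reference current and charge scale, not DALSA's device physics): six decades of input intensity collapse into a charge packet spanning only a 7x range.

```python
import math

def log_compress(photocurrent, i_ref=1e-12, q_scale=1000.0):
    """Map photocurrent spanning many decades onto a small charge packet,
    roughly proportional to the log of intensity (as in a subthreshold
    MOSFET load). Units and constants here are illustrative only."""
    return q_scale * math.log10(photocurrent / i_ref)

# Six decades of input become a modest range of output charge:
low = log_compress(1e-11)    # ~1000 electrons at the dim end
high = log_compress(1e-5)    # ~7000 electrons at the bright end
```

A linear integrating detector would need a ten-million-electron well to span the same input range at the stated noise floor; the logarithmic packet fits comfortably in an ordinary CCD shift register.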
Secure chaotic map based block cryptosystem with application to camera sensor networks.
Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled
2011-01-01
Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for camera sensor networks is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is obtained by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm overcomes the flaws and maintains all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as camera sensor networks.
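For orientation, the logistic map underlying such ciphers can generate a keystream as follows (a toy sketch only; it omits the ciphertext feedback, key mixing, and security fixes that the paper is actually about, and all constants are assumed):

```python
def logistic_keystream(x0, r=3.99, n=16, skip=100):
    """Iterate x_{n+1} = r*x*(1-x) in the chaotic regime and quantize
    the orbit into key bytes. Illustrative only: a real cipher adds
    plaintext/ciphertext feedback and proper key scheduling."""
    x = x0
    for _ in range(skip):            # discard the initial transient
        x = r * x * (1 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

ks = logistic_keystream(0.3141592)                        # key = (x0, r)
ct = [p ^ k for p, k in zip(b"sensor frame 001", ks)]     # encrypt
pt = bytes(p ^ k for p, k in zip(ct, ks))                 # decrypt
```

A bare keystream like this is exactly what the cited attack exploits when it can be recovered independently of the plaintext, which is why the improved scheme makes the sub-keys plaintext-dependent.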
Sensors-network and its application in the intelligent storage security
NASA Astrophysics Data System (ADS)
Zhang, Qingying; Nicolescu, Mihai; Jiang, Xia; Zhang, Ying; Yue, Weihong; Xiao, Weihong
2004-11-01
Intelligent storage systems run on various advanced technologies, such as linear layout, business intelligence, and data mining. Security, the basic requirement of a storage system, has come into focus with the introduction of multimedia communication technology and sensor networks. Along with the development of science and social demands, multifarious alarm systems have been designed and improved to be intelligent, modularized, and network-connected. Modern science and technology are of great importance in making storage, and more broadly logistics, systems more efficient and reliable. Diversified on-site information is captured by different kinds of sensors, and the signals are processed and communicated to the control center, which decides on further actions. For fire protection, broad-spectrum gas sensors, smoke sensors, flame sensors, and temperature sensors capture information in their own ways. Once a fire breaks out somewhere, the sensors respond immediately to the smoke, temperature, and flame as well as the gas, and the intelligent control system starts: it passes the alarm to the central unit and at the same time sets movable walls in motion to obstruct the fire's spreading. For guarding the warehouse against theft, cut-off sensors, body sensors, photoelectric sensors, microwave sensors, and closed-circuit television, as well as electronic clocks, are available to monitor the warehouse. All of these sensors work in a networked way. The intelligent control system is built with digital circuitry instead of a traditional switch-based design; it performs better in many cases, its reliability is high, and its cost is low.
Two terminal micropower radar sensor
McEwan, Thomas E.
1995-01-01
A simple, low power ultra-wideband radar motion sensor/switch configuration connects a power source and load to ground. The switch is connected to and controlled by the signal output of a radar motion sensor. The power input of the motion sensor is connected to the load through a diode which conducts power to the motion sensor when the switch is open. A storage capacitor or rechargeable battery is connected to the power input of the motion sensor. The storage capacitor or battery is charged when the switch is open and powers the motion sensor when the switch is closed. The motion sensor and switch are connected between the same two terminals between the source/load and ground.
Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography
NASA Astrophysics Data System (ADS)
Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun
2016-04-01
Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, the need for a pixel architecture with high signal-to-noise ratio (SNR) emerges. The "smart" pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from a conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all three functions are combined into one device instead of three separate units per pixel. Thus, it is expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS with a potentially high SNR. This paper addresses the design, optimization, and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulation and experiments on a 4x4 sensor array.
Nanosecond-laser induced crosstalk of CMOS image sensor
NASA Astrophysics Data System (ADS)
Zhu, Rongzhen; Wang, Yanbin; Chen, Qianrong; Zhou, Xuanfeng; Ren, Guangsen; Cui, Longfei; Li, Hua; Hao, Daoliang
2018-02-01
The CMOS Image Sensor (CIS) is a photoelectric imaging device that integrates the photosensitive array, amplifier, A/D converter, storage, DSP, and computer interface circuitry on the same silicon substrate [1]. It has low power consumption, high integration, and low cost. With progress in large-scale integrated circuit technology, the noise suppression of CIS has continuously improved and its image quality keeps getting better. It is widely used in security monitoring, biometrics, detection and imaging, and even military reconnaissance. A CIS is easily disturbed or damaged when irradiated by a laser, so it is of great significance to study laser irradiation effects both for optoelectronic countermeasures and for hardening devices against lasers. Previous researchers have studied laser-induced disturbance and damage of CIS, focusing on saturation and supersaturation effects, and have observed distinct regimes such as unsaturation, saturation, supersaturation, all-saturation, and pixel flip. This paper investigates the 1064 nm laser interference effect in a typical front-side illuminated CMOS sensor, observing saturated crosstalk and half-crosstalk lines, and analyzes the formation mechanism of the crosstalk-line phenomenon from the perspective of the device's working principle and signal readout method.
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
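The stated figures are mutually consistent; a back-of-the-envelope check (assuming 12-bit samples packed into the 24-channel, 128 MB-per-channel frame memory of the type I chip, which is an assumption about how the binned mode maps onto channels):

```python
pixels_per_frame = 444 * 200                    # binned image size
frames = 500 * 36                               # 500 frames/s for 36 s
bytes_per_frame = pixels_per_frame * 12 // 8    # 12-bit samples, packed
total_bytes = bytes_per_frame * frames          # ~2.4 GB of raw data
capacity = 24 * 128 * 1024**2                   # 24 channels x 128 MB
fits = total_bytes <= capacity                  # True: ~2.4 GB < ~3.2 GB
```

The binned recording thus fits within the synchronous-DRAM frame memory with headroom, which is what allows the 36-second continuous capture at 500 frames/second.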
NASA Astrophysics Data System (ADS)
Hall-Brown, Mary
The heterogeneity of Arctic vegetation can make land cover classification very difficult when using medium- to small-resolution imagery (Schneider et al., 2009; Muller et al., 1999). Using imagery of high radiometric and spatial resolution, such as from the SPOT 5 and IKONOS satellites, has helped Arctic land cover classification accuracies rise into the 80 and 90 percentiles (Allard, 2003; Stine et al., 2010; Muller et al., 1999). However, those increases usually come at a high price: high-resolution imagery is very expensive and can add tens of thousands of dollars to the cost of the research. The EO-1 satellite, launched in 2000, carries two sensors that have high spectral and/or high spatial resolution and can be an acceptable compromise between resolution and cost. The Hyperion is a hyperspectral sensor capable of collecting 242 spectral bands of information. The Advanced Land Imager (ALI) is an advanced multispectral sensor whose spatial resolution can be sharpened to 10 meters. This dissertation compares the accuracies of Arctic land cover classifications produced by the Hyperion and ALI sensors with those produced by the Système Pour l'Observation de la Terre (SPOT), Landsat Thematic Mapper (TM), and Landsat Enhanced Thematic Mapper Plus (ETM+) sensors. Hyperion and ALI images from August 2004 were collected over the Upper Kuparuk River Basin, Alaska. Image processing included stepwise discriminant analysis of pixels positively classified from coinciding ground control points, geometric and radiometric correction, and principal component analysis. Finally, stratified random sampling was used to perform accuracy assessments of the satellite-derived land cover classifications. Accuracy was estimated from an error (confusion) matrix that provided the overall, producer's, and user's accuracies.
This research found that while the Hyperion sensor produced classification accuracies equivalent to those of the TM and ETM+ sensors (approximately 78%), it could not match the accuracy of the SPOT 5 HRV sensor. However, land cover classifications derived from the ALI sensor exceeded most classification accuracies derived from the TM and ETM+ sensors and were even comparable to most SPOT 5 HRV classifications (87%). With the deactivation of the Landsat series satellites, uninterrupted monitoring of remote locations throughout the world, such as the Arctic, is in jeopardy. Utilization of the Hyperion and ALI sensors is a way to keep that endeavor operational. By keeping the ALI sensor active at all times, uninterrupted observation of the entire Earth can be accomplished, and keeping the Hyperion as a "tasked" sensor can provide scientists with additional imagery and options for their studies without overburdening storage.
Development of on package indicator sensor for real-time monitoring of meat quality
Shukla, Vivek; Kandeepan, G.; Vishnuraj, M. R.
2015-01-01
Aim: The aim was to develop an indicator sensor for real-time monitoring of meat quality and to compare the response of the indicator sensor with meat quality parameters at ambient temperature. Materials and Methods: The indicator sensor was prepared using bromophenol blue (1% w/v) as the indicator solution and filter paper as the indicator carrier, and was fabricated by coating the indicator solution onto the carrier by centrifugation. To observe the response of the indicator sensor, buffalo meat was packed in polystyrene foam trays covered with PVC film, and the sensor was attached to the inner side of the packaging film. The pattern of color change in the indicator sensor was monitored and compared with meat quality parameters, viz. total volatile basic nitrogen, D-glucose, standard plate count, and tyrosine value, to assess the sensor's suitability for predicting meat quality and storage life. Results: The indicator sensor changed its color from yellow to blue, starting from the margins, during the 24 h storage period at ambient temperature, and this correlated well with the changes in meat quality parameters. Conclusions: The indicator sensor can be used for real-time monitoring of meat quality, as its color changed from yellow to blue, starting from the margins, as the meat deteriorated over the storage period. Thus, by observing the color of the indicator sensor, meat quality and shelf life can be predicted. PMID:27047103
Photorealistic scene presentation: virtual video camera
NASA Astrophysics Data System (ADS)
Johnson, Michael J.; Rogers, Joel Clark W.
1994-07-01
This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
Solar micro-power system for self-powered wireless sensor nodes
NASA Astrophysics Data System (ADS)
He, Yongtai; Li, Yangqiu; Liu, Lihui; Wang, Lei
2008-10-01
In self-powered wireless sensor nodes, the efficiency for environmental energy harvesting, storage and management determines the lifetime and environmental adaptability of the sensor nodes. However, the method of improving output efficiency for traditional photovoltaic power generation is not suitable for a solar micro-power system due to the special requirements for its application. This paper presents a solar micro-power system designed for a solar self-powered wireless sensor node. The Maximum Power Point Tracking (MPPT) of solar cells and energy storage are realized by the hybrid energy storage structure and "window" control. Meanwhile, the mathematical model of energy harvesting, storing and management is formulated. In the novel system, the output conversion efficiency of solar cells is 12%.
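The paper's MPPT and "window" control are specific to its hybrid storage structure; as a generic illustration of maximum power point tracking only (the PV curve, voltages, and step size below are an assumed toy model, not the paper's circuit), a perturb-and-observe loop looks like this:

```python
def pv_power(v):
    # Toy PV curve with a single maximum near 17 V (illustrative model).
    return max(0.0, 60.0 - (v - 17.0) ** 2)

def perturb_and_observe(v=10.0, step=0.5, iters=100):
    """Classic P&O MPPT: keep stepping the operating voltage in the
    direction that last increased the measured output power."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

The tracker settles into a small oscillation around the maximum power voltage (17 V in this toy model); the paper's window control plays an analogous role for the solar cell feeding the hybrid energy storage.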
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
Integration of Grid and Sensor Web for Flood Monitoring and Risk Assessment from Heterogeneous Data
NASA Astrophysics Data System (ADS)
Kussul, Nataliia; Skakun, Sergii; Shelestov, Andrii
2013-04-01
Over last decades we have witnessed the upward global trend in natural disaster occurrence. Hydrological and meteorological disasters such as floods are the main contributors to this pattern. In recent years flood management has shifted from protection against floods to managing the risks of floods (the European Flood risk directive). In order to enable operational flood monitoring and assessment of flood risk, it is required to provide an infrastructure with standardized interfaces and services. Grid and Sensor Web can meet these requirements. In this paper we present a general approach to flood monitoring and risk assessment based on heterogeneous geospatial data acquired from multiple sources. To enable operational flood risk assessment integration of Grid and Sensor Web approaches is proposed [1]. Grid represents a distributed environment that integrates heterogeneous computing and storage resources administrated by multiple organizations. SensorWeb is an emerging paradigm for integrating heterogeneous satellite and in situ sensors and data systems into a common informational infrastructure that produces products on demand. The basic Sensor Web functionality includes sensor discovery, triggering events by observed or predicted conditions, remote data access and processing capabilities to generate and deliver data products. Sensor Web is governed by the set of standards, called Sensor Web Enablement (SWE), developed by the Open Geospatial Consortium (OGC). Different practical issues regarding integration of Sensor Web with Grids are discussed in the study. We show how the Sensor Web can benefit from using Grids and vice versa. For example, Sensor Web services such as SOS, SPS and SAS can benefit from the integration with the Grid platform like Globus Toolkit. 
The proposed approach is implemented within the Sensor Web framework for flood monitoring and risk assessment, and a case study exploiting this framework, the Namibia SensorWeb Pilot Project, is described. The project was created as a testbed for evaluating and prototyping key technologies for rapid acquisition and distribution of data products for decision support systems to monitor floods and enable flood risk assessment. The system provides access to real-time products on rainfall estimates and flood potential forecasts derived from the Tropical Rainfall Measuring Mission (TRMM) with a lag time of 6 h, alerts from the Global Disaster Alert and Coordination System (GDACS) with a lag time of 4 h, and the Coupled Routing and Excess STorage (CREST) model to generate alerts. These alerts are used to trigger satellite observations. With the deployed SPS service for NASA's EO-1 satellite, it is possible to automatically task the sensor with a re-imaging capability of less than 8 h. Therefore, with the computational and storage services provided by the Grid and cloud infrastructure, it was possible to generate flood maps within 24-48 h after a trigger was alerted. To enable interoperability between system components and services, OGC-compliant standards are utilized. [1] Hluchy L., Kussul N., Shelestov A., Skakun S., Kravchenko O., Gripich Y., Kopp P., Lupian E., "The Data Fusion Grid Infrastructure: Project Objectives and Achievements," Computing and Informatics, 2010, vol. 29, no. 2, pp. 319-334.
NASA Astrophysics Data System (ADS)
Giordano, N.; Arato, A.; Comina, C.; Mandrone, G.
2017-05-01
A Borehole Thermal Energy Storage living lab was built near Torino (Northern Italy). This living lab aims at testing the ability of the alluvial deposits of the north-western Po Plain to store the thermal energy collected by solar thermal panels, and the efficiency of energy storage systems in this climatic context. Different monitoring approaches have been tested and analyzed since the start of thermal injection in April 2014. Underground temperature monitoring is constantly undertaken by means of several temperature sensors located along the borehole heat exchangers and within the hydraulic circuit. Nevertheless, this can provide only pointwise information about the underground temperature distribution. For this reason, a geophysical approach is proposed in order to image the thermally affected zone (TAZ) caused by the heat injection: surface electrical resistivity measurements were carried out for this purpose. In the present paper, results of time-lapse acquisitions during a heating day are reported with the aim of imaging the thermal plume evolution within the subsoil. Resistivity data, calibrated against local temperature measurements, demonstrated their potential for imaging the heated plume of the system and depicting its evolution throughout the day. Different types of data processing were adopted to address issues mainly related to the highly urbanized environment. The use of apparent resistivity proved to be in good agreement with the results of different inversion approaches. The inversion processes did not significantly improve the qualitative and quantitative TAZ imaging in comparison to the pseudo-sections. This suggests that apparent resistivity data alone are useful for a rough monitoring of the TAZ in this kind of application.
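Calibrating resistivity data against temperature, as described above, typically relies on the common linear conductivity-temperature relation. A minimal sketch follows; the 0.025 per degC coefficient is a typical literature value assumed here, not a site-calibrated constant from the paper.

```python
def resistivity_at_temperature(rho_ref, temp_c, temp_ref=25.0, alpha=0.025):
    """Electrical resistivity corrected for temperature.

    Uses the common linear conductivity model
        sigma(T) = sigma_ref * (1 + alpha * (T - T_ref)),
    so rho(T) = rho_ref / (1 + alpha * (T - T_ref)).
    alpha ~ 0.025 per degC is an assumed, generic coefficient.
    """
    return rho_ref / (1.0 + alpha * (temp_c - temp_ref))

# A 30 ohm.m deposit at 25 degC warmed to 45 degC by heat injection
# drops to roughly 20 ohm.m, which is why the heated plume shows up
# as a conductive anomaly in time-lapse resistivity sections:
rho_hot = resistivity_at_temperature(30.0, 45.0)
```

This monotone decrease of resistivity with temperature is what makes apparent-resistivity time-lapse sections usable as a rough TAZ monitor even without inversion.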
Quality Assurance By Laser Scanning And Imaging Techniques
NASA Astrophysics Data System (ADS)
Schmalfuß, Harald J.; Schinner, Karl Ludwig
1989-03-01
Laser scanning systems are well established in the world of fast industrial in-process quality inspection. The materials inspected by laser scanning systems are, e.g., "endless" sheets of steel, paper, textile, film or foils. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in each of the channels of the optical receiving system exceeds ten megapixels per second. The electronic evaluation system therefore has to process these data streams in real time, and no image storage is possible. But sometimes (e.g., at the first installation of the system, or when the defect classification changes) it would be very helpful to be able to take a visual look at the original, i.e. unprocessed, sensor data. First we show the basic setup of a standard laser scanning system. Then we introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory cooperates with the standard on-line evaluation electronics and therefore allows an easy comparison between processed and unprocessed data. We discuss the basic system structure and show the first industrial results.
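The per-channel pixel rate quoted above follows directly from web width, web speed and spot size. A small worked sketch, with illustrative numbers chosen to land at the ten-megapixel figure (not parameters from any particular installation):

```python
def pixel_rate(web_width_mm, web_speed_m_per_min, spot_mm):
    """Continuous pixel rate of a line-scan laser inspection system.

    Assumes square pixels of size spot_mm and one scan line per
    spot_mm of web travel; both are simplifying assumptions.
    """
    pixels_per_line = web_width_mm / spot_mm
    lines_per_s = (web_speed_m_per_min / 60.0) * 1000.0 / spot_mm
    return pixels_per_line * lines_per_s  # pixels per second

# A 1000 mm web at 600 m/min with a 1 mm spot -> 10 Mpixel/s
rate = pixel_rate(1000.0, 600.0, 1.0)
```

At such rates a continuous recorder is impractical, which motivates the snapshot-style image memory the paper introduces.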
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-01-01
Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications. PMID:26393596
MEASUREMENT AND ANALYSIS OF VAPOR SENSORS USED AT UNDERGROUND STORAGE TANK SITES
This report is a continuation of an investigation to quantify the operating characteristics of vapor sensor technologies used at underground storage tank (UST) sites. In the previous study (EPA/600/R-92/219) the sensitivity, selectivity, and response time to simulated UST environm...
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Wang, W.; Ganguly, S.; Li, S.; Michaelis, A.; Higuchi, A.; Takenaka, H.; Nemani, R. R.
2017-12-01
New geostationary sensors such as the AHI (Advanced Himawari Imager on Himawari-8) and the ABI (Advanced Baseline Imager on GOES-16) have the potential to advance ecosystem modeling, particularly of diurnally varying phenomena, through frequent observations. These sensors have channels similar to those of MODIS (MODerate resolution Imaging Spectroradiometer), allowing us to utilize the knowledge and experience gained in MODIS data processing. Here, we developed a sub-hourly Gross Primary Production (GPP) algorithm, leveraging the MOD17 GPP algorithm. We ran the model at 1-km resolution over Japan and Australia using geo-corrected AHI data. Solar radiation was calculated directly from AHI using a neural network technique. The other necessary climate data were derived from weather stations and other satellite data. The sub-hourly estimates of GPP were first compared with ground-measured GPP at various Fluxnet sites. We also compared the AHI GPP with MOD17 GPP, and analyzed the differences in spatial patterns and the effect of diurnal changes in climate forcing. The sub-hourly GPP products require massive storage and strong computational power; we use the NEX (NASA Earth Exchange) facility to produce them. This GPP algorithm can be applied to other geostationary satellites, including GOES-16, in the future.
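The MOD17-style light-use-efficiency calculation underlying the algorithm described above can be sketched as follows. All parameter values (the maximum efficiency and the Tmin/VPD ramp limits) are illustrative defaults standing in for biome-specific lookup-table entries, not the AHI product's calibrated coefficients.

```python
def ramp(x, lo, hi):
    """Linear ramp scalar: 0 at/below lo, 1 at/above hi."""
    if hi == lo:
        return 1.0
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_mod17(sw_rad, fpar, tmin_c, vpd_pa,
              eps_max=0.001165,            # kg C / MJ, illustrative value
              tmin_lo=-8.0, tmin_hi=11.4,  # degC ramp limits, illustrative
              vpd_lo=650.0, vpd_hi=3100.0):  # Pa ramp limits, illustrative
    """Light-use-efficiency GPP in the style of the MOD17 algorithm.

    GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR, with PAR taken as
    45% of incoming shortwave radiation.
    """
    par = 0.45 * sw_rad                        # MJ m^-2 per time step
    f_t = ramp(tmin_c, tmin_lo, tmin_hi)       # cold-temperature limit
    f_v = 1.0 - ramp(vpd_pa, vpd_lo, vpd_hi)   # high-VPD limit
    return eps_max * f_t * f_v * fpar * par    # kg C m^-2 per time step

# Unstressed conditions: warm Tmin, low VPD.
gpp = gpp_mod17(sw_rad=1.0, fpar=0.6, tmin_c=15.0, vpd_pa=500.0)
```

Driving this per-slot with AHI-derived radiation is what turns the daily MOD17 logic into a sub-hourly product.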
CMOS-Compatible Room-Temperature Rectifier Toward Terahertz Radiation Detection
NASA Astrophysics Data System (ADS)
Varlamava, Volha; De Amicis, Giovanni; Del Monte, Andrea; Perticaroli, Stefano; Rao, Rosario; Palma, Fabrizio
2016-08-01
In this paper, we present a new rectifying device, compatible with the technology of CMOS image sensors, suitable for implementing a direct-conversion detector operating at room temperature at frequencies up to the terahertz range. The rectifying device can be obtained by introducing some simple modifications of the charge-storage well in conventional CMOS integrated circuits, making the proposed solution easy to integrate with existing imaging systems. The rectifying device is combined with the other elements of the detector, composed of a 3D high-performance antenna and a charge-storage well. In particular, its position just below the edge of the 3D antenna takes maximum advantage of the high electric field concentrated by the antenna itself. In addition, the proposed structure preserves the integrity of the charge-storage well of the detector. The structure does not require very scaled and costly technological nodes, since the CMOS transistor only provides the necessary integrated readout electronics. On-wafer measurements of the RF characteristics of the designed junction are reported and discussed. The overall performance of the entire detector in terms of noise-equivalent power (NEP) is evaluated by combining low-frequency measurements of the rectifier with numerical simulations of the 3D antenna and the semiconductor structure at 1 THz, allowing prediction of the achievable NEP.
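For a direct-conversion detector, the NEP figure of merit mentioned above is the input-referred noise spectral density divided by the responsivity. A one-line sketch with illustrative numbers (not the measured values from the paper):

```python
def nep(noise_density_v_per_rthz, responsivity_v_per_w):
    """Noise-equivalent power of a direct-conversion detector.

    NEP = v_n / Rv: voltage noise spectral density (V/sqrt(Hz))
    over voltage responsivity (V/W). Example numbers are illustrative.
    """
    return noise_density_v_per_rthz / responsivity_v_per_w

# e.g. 100 nV/sqrt(Hz) of noise with 1 kV/W responsivity
# gives an NEP of 100 pW/sqrt(Hz):
value = nep(100e-9, 1000.0)
```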
Vandenberghe, Bart; Corpas, Livia; Bosmans, Hilde; Yang, Jie; Jacobs, Reinhilde
2011-08-01
The aim of this study was to determine image accuracy and quality for periodontal diagnosis using various X-ray generators with conventional and digital radiographs. Thirty-one in vitro periodontal defects were evaluated on intraoral conventional (E-, F/E-speed) and digital images (three indirect, two direct sensors). Standardised radiographs were made with an alternating current (AC), a high-frequency (HF) and a direct current (DC) X-ray unit at rising exposure times (20-160 ms at 20-ms intervals) at a constant 70 kV. Three observers assessed bone levels for comparison with the gold standard. Lamina dura, contrast, trabecularisation, and crater and furcation involvements were evaluated. Irrespective of X-ray generator type, measurement deviations increased at higher exposure times for solid-state systems, but decreased for photostimulable storage phosphor (PSP) systems. Accuracy for HF or DC was significantly higher than for AC (p < 0.0001), especially at low exposure times. At 0.5- to 1-mm clinical deviation, 27-53% and 32-55% dose savings were demonstrated when using HF or DC generators compared to AC, but only for PSP. No savings were found for solid-state sensors, indicating their higher sensitivity. The use of digital sensors compared to film allowed 15-90% dose savings using the AC tube, whilst solid-state sensors allowed approximately 50% savings compared to PSP, depending on tube type and threshold level. Accuracy of periodontal diagnosis increases when using HF or DC generators and/or digital receptors, with adequate diagnostic information at lower exposure times.
ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.
Giordano, Rossella; Guccione, Pietro
2017-05-19
In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems are able to provide the user with images having both spectral and spatial information. Current hyperspectral spaceborne sensors are able to capture large areas with increased spatial and spectral resolution. For this reason, the volume of acquired data needs to be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recently, the literature has focused on efficient ways to perform on-board data compression. This is a challenging task due to the difficult environment (outer space) and the limited time, power and computing resources. Often, the hardware properties of Graphics Processing Units (GPUs) have been exploited to reduce the processing time using parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture) architecture. The algorithm aims at performing on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types or the detection of events in near real time in regions of interest (a user-related choice) with an unsupervised classifier; the compression of specific regions with spatially varying bit rates, including Principal Component Analysis (PCA), wavelet and arithmetic coding; and management of the data volume delivered to the ground station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor in a harbor area.
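The PCA stage of such a pipeline decorrelates the spectral bands so that only a few components need to be coded. A simplified NumPy sketch of that stage follows (the real pipeline also applies wavelet and arithmetic coding, and runs on the GPU; both are omitted here, and the cube dimensions are made up):

```python
import numpy as np

def pca_compress(cube, k):
    """Compress a hyperspectral cube along the spectral axis with PCA.

    cube: (rows, cols, bands) array; k: number of components kept.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)   # pixels x bands
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal spectral axes from the SVD of the centred matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]                # (k, bands)
    scores = Xc @ comps.T         # (pixels, k) -- the compressed payload
    return scores, comps, mean, (r, c, b)

def pca_decompress(scores, comps, mean, shape):
    """Invert the projection back to a (rows, cols, bands) cube."""
    return (scores @ comps + mean).reshape(shape)

rng = np.random.default_rng(0)
cube = rng.random((8, 8, 16))                    # toy 16-band scene
scores, comps, mean, shape = pca_compress(cube, k=4)
rec = pca_decompress(scores, comps, mean, shape)
```

Keeping k of b bands reduces the spectral payload by roughly k/b before entropy coding, at the cost of the (small, for correlated bands) energy in the discarded components.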
Development of a CO2 Chemical Sensor for Downhole CO2 Monitoring in Carbon Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ning
Geologic storage of carbon dioxide (CO2) has been proposed as a viable means for reducing anthropogenic CO2 emissions. Geological sequestration works by injecting supercritical CO2 underground, which requires the CO2 to remain either supercritical or in solution in the water/brine present in the underground formation. However, there are aspects of geologic sequestration that need further study, particularly in regard to safety. To date, none of the geologic sequestration locations have been tested for storage integrity under the changing stress conditions that apply to the sequestration of very large amounts of CO2. Establishing environmental safety and addressing public concerns require widespread monitoring of the process in the deep subsurface. In addition, studies of subsurface carbon sequestration such as flow simulations and models of underground reactions and transport require a comprehensive monitoring process to accurately characterize and understand the storage process. Real-time information about underground CO2 movement and concentration change is very helpful for: (1) better understanding the uncertainties present in CO2 geologic storage; (2) improving simulation models; and (3) evaluating the feasibility of geologic CO2 storage. Current methods to monitor underground CO2 storage include seismic, geoelectric, isotope and tracer methods, and fluid sampling analysis. However, these methods commonly suffer from low resolution, high cost, and the inability to monitor continuously over the long time scales of the CO2 storage process. A preferred way of monitoring in-situ underground CO2 migration is to continuously measure CO2 concentration changes in the brine during the carbon storage process. An approach to obtaining real-time information on CO2 concentration changes in the formation solution is in high demand in carbon storage, both to understand subsurface CO2 migration and to address public safety concerns.
The objective of this study is to develop a downhole CO2 sensor that can continuously monitor, in situ, CO2 concentration changes in deep saline aquifers. The sensor is a Severinghaus-type CO2 sensor whose small size allows it to be embedded in monitoring-well casing or integrated with pressure/temperature transducers, enabling the development of "smart" wells. The studies included: (1) preparing and characterizing metal-oxide electrodes, testing the electrodes' response to pH change, investigating the effects of different ions and brine concentrations on electrode performance, and studying the stability of the electrodes in brine solution; (2) fabricating a downhole CO2 sensor with the metal-oxide electrodes prepared in the laboratory, testing the performance of the CO2 sensor in brine solutions, and studying the effects of high pressure on the sensor's performance; and (3) designing and conducting CO2/brine coreflooding experiments with the CO2 sensor, monitoring CO2 movement along the core, testing the performance of the sensor in coreflooding tests, and developing a data acquisition system that digitizes the sensor's output voltage. Our completed research has yielded a deep understanding of downhole CO2 sensor development and CO2 monitoring in the CO2 storage process. The developed downhole CO2 sensor includes a metal-oxide electrode, a gas-permeable membrane, a porous steel cup, and a bicarbonate-based internal electrolyte solution. An iridium oxide-based electrode was prepared and used to fabricate the CO2 sensor. The prepared iridium oxide-based electrode displayed a linear response to pH change. The effects of different ions and ion concentrations, temperature, and pressure on the electrode's pH response were investigated. The results indicated that the electrode exhibited good performance even at the high salt concentrations of produced water.
To improve the electrode performance under high pressure, IrO2 nanoparticles with particle sizes in the range of 1-2 nm were prepared and electrodeposited on a stainless steel substrate by cyclic voltammetry. A thin film of iridium oxide formed on the substrate surface, and this iridium oxide-based electrode displayed excellent long-term performance under high pressure. A downhole CO2 sensor with the iridium oxide-based electrode was prepared. The working principle of the CO2 sensor is based on measuring the pH change of the internal electrolyte solution caused by the hydrolysis of CO2, from which the CO2 concentration in water is determined. The prepared downhole CO2 sensor measured 0.7 in. in diameter and 1.5 in. in length. The sensor was tested at pressures of 500 psi, 2,000 psi, and 3,000 psi. A linear correlation was observed between the sensor potential change and the dissolved CO2 concentration in water. The response time of the CO2 sensor was in the range of 60-100 minutes. Further tests indicated that the CO2 sensor exhibited good reproducibility under high pressure. A CO2/brine coreflooding system was constructed to simulate the real-world CO2 storage process. The prepared downhole CO2 sensor was installed in the system to monitor CO2 movement during CO2/brine coreflooding tests. The results indicated that the sensor could detect CO2 movement in the tests. Further studies showed that the sensor could be recovered by brine flooding after CO2/brine had flushed the core. The results of the coreflooding tests demonstrated that the sensor has potential application for CO2 monitoring in carbon sequestration. A data acquisition system for the downhole CO2 sensor was developed and coded. The system converts the sensor output signal into digital data and transports the data from downhole to the wellhead surface.
The data acquisition system was tested and evaluated in the laboratory with the prepared sensor.
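The Severinghaus working principle described above (CO2 shifts the pH of a bicarbonate internal electrolyte, read out by a Nernstian pH electrode) can be sketched numerically. All constants here (pK1, slope, reference potential, internal bicarbonate level) are textbook/illustrative values, not the iridium-oxide electrode's calibrated parameters.

```python
import math

def severinghaus_potential(co2_molar, hco3_molar=0.01,
                           pk1=6.35, e0_mv=600.0, slope_mv=59.2):
    """Ideal 25 degC potential of a Severinghaus-type CO2 sensor.

    CO2 diffusing through the gas-permeable membrane equilibrates with
    the internal electrolyte, setting its pH via
        pH = pK1 + log10([HCO3-] / [CO2]),
    and a Nernstian pH electrode then reads E = E0 - slope * pH.
    All parameter values are generic illustrative constants.
    """
    ph = pk1 + math.log10(hco3_molar / co2_molar)
    return e0_mv - slope_mv * ph

# A tenfold increase in dissolved CO2 shifts the potential by about
# one Nernstian slope (~59 mV), i.e. E is linear in log[CO2]:
delta = severinghaus_potential(1e-2) - severinghaus_potential(1e-3)
```

This log-linear response is consistent with the linear potential-versus-dissolved-CO2 calibration reported for the fabricated sensor over its tested range.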
A new 9T global shutter pixel with CDS technique
NASA Astrophysics Data System (ADS)
Liu, Yang; Ma, Cheng; Zhou, Quan; Wang, Xinyang
2015-04-01
Because they are free of motion blur, global shutter pixels are very widely used in the design of CMOS image sensors for high-speed applications such as motion vision and scientific inspection. In global shutter sensors, all pixel signal information needs to be stored in the pixel first and then read out. For a higher frame rate, very fast operation of the pixel array is needed. There are basically two ways to store the signal in the pixel. One is in the charge domain, such as the design shown in [1], which requires a complicated fabrication process. The other is in the voltage domain, for example the design in [2]; that pixel is based on 4T PPD technology, and driving the highly capacitive transfer gate normally limits the speed of array operation. In this paper we report a new 9T global shutter pixel based on 3T partially pinned photodiode (PPPD) technology. It incorporates three in-pixel storage capacitors, allowing for correlated double sampling (CDS) and pipelined operation of the array (pixel exposure during the readout of the array). Only two control pulses are needed for all the pixels at the end of exposure, which allows high-speed exposure control.
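The CDS operation the pixel supports amounts to subtracting each pixel's sampled reset level from its sampled signal level, cancelling the offset and reset (kTC) noise common to both samples. A behavioural sketch with made-up sample values (not the analog in-pixel circuit of the 9T design):

```python
def cds(reset_samples, signal_samples):
    """Correlated double sampling: per-pixel signal minus reset level.

    Offsets common to both samples (fixed-pattern offset, reset noise)
    cancel in the difference.
    """
    return [s - r for r, s in zip(reset_samples, signal_samples)]

# Three pixels with different reset offsets but the same true signal
# of 1.0; the offsets disappear from the CDS output:
out = cds([0.5, 0.25, 0.75], [1.5, 1.25, 1.75])  # -> [1.0, 1.0, 1.0]
```

Holding both samples on in-pixel capacitors is what lets the array expose the next frame while the previous one is being read out.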
Note: Device for obtaining volumetric, three-component velocity fields inside cylindrical cavities.
Ramírez, G; Núñez, J; Hernández, G N; Hernández-Cruz, G; Ramos, E
2015-11-01
We describe a device designed and built to obtain the three-component, steady state velocity field in the whole volume occupied by a fluid in motion contained in a cavity with cylindrical walls. The prototype comprises a two-camera stereoscopic particle image velocimetry system mounted on a platform that rotates around the volume under analysis and a slip ring arrangement that transmits data from the rotating sensors to the data storage elements. Sample observations are presented for natural convection in a cylindrical container but other flows can be analyzed.
NASA Astrophysics Data System (ADS)
Thangaraj, K.; Elefsiniotis, A.; Aslam, S.; Becker, Th.; Schmid, U.; Lees, J.; Featherston, C. A.; Pullin, R.
2013-05-01
This paper describes an approach for efficiently storing the energy harvested from a thermoelectric module to power autonomous wireless sensor nodes for aeronautical health monitoring applications. A representative temperature difference was created across a thermoelectric generator (TEG) by attaching a thermal mass and a cavity containing a phase change material to one side, and a heat source (representing the aircraft fuselage) to the other. Batteries and supercapacitors are popular choices of storage device, but neither represents the ideal solution: supercapacitors have a lower energy density than batteries, and batteries have a lower power density than supercapacitors. When using only a battery for storage, the runtime of a typical sensor node is reduced by internal impedance, high resistance and other internal losses. Supercapacitors may overcome some of these problems, but generally do not provide sufficient long-term energy to allow advanced health monitoring applications to operate over extended periods. A hybrid energy storage unit can provide both energy density and power density to the wireless sensor node simultaneously. Techniques such as acousto-ultrasonics, acoustic emission, strain sensing and crack-wire sensing require storage approaches that can provide immediate energy on demand, usually in short, high-intensity bursts, and that can be sustained over long periods of time. This application requirement is a significant constraint for battery-only and supercapacitor-only solutions, which should be able to store up to 40-50 J of energy.
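Sizing the supercapacitor side of such a hybrid store for the 40-50 J bursts quoted above is a one-line energy balance. The voltage rails below are illustrative assumptions, not values from the paper:

```python
def supercap_for_burst(energy_j, v_max, v_min):
    """Capacitance needed to deliver energy_j while the supercapacitor
    discharges from v_max down to v_min:
        E = 1/2 * C * (v_max**2 - v_min**2)  ->  C = 2E / (v_max**2 - v_min**2)
    The usable window matters: energy below v_min is stranded because
    the load (or its regulator) cannot operate there.
    """
    return 2.0 * energy_j / (v_max ** 2 - v_min ** 2)

# A 50 J burst between assumed 5 V and 3 V rails -> 6.25 F
cap_f = supercap_for_burst(50.0, 5.0, 3.0)
```

The battery then only needs to cover the low average harvest-to-load deficit, which is what makes the hybrid split attractive.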
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path and the management path, solving the metadata bottleneck of traditional storage models, and offers parallel data access, data sharing across platforms, intelligence in the storage devices and secure data access. We apply object-based storage to the management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.
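The core idea of the model, images stored as self-describing objects whose metadata can be served separately from the data path, can be sketched as a minimal interface. This is purely illustrative; the object identifier and metadata fields are hypothetical, not the paper's schema.

```python
class ObjectStore:
    """Minimal object-based storage sketch.

    Each remote sensing image is an object carrying its own metadata,
    so metadata queries need not touch the (large) image payload --
    the separation the paper credits with removing the metadata
    bottleneck of traditional storage models.
    """
    def __init__(self):
        self._objects = {}

    def put(self, oid, data, **metadata):
        self._objects[oid] = {"data": data, "meta": metadata}

    def get_data(self, oid):
        return self._objects[oid]["data"]

    def get_meta(self, oid):
        return self._objects[oid]["meta"]

store = ObjectStore()
store.put("scene-001", b"...tile bytes...", sensor="hypothetical", rows=1024)
meta = store.get_meta("scene-001")   # metadata served without reading data
```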
Modular nonvolatile solid state recorder (MONSSTR) update
NASA Astrophysics Data System (ADS)
Klang, Mark R.; Small, Martin B.; Beams, Tom
2001-12-01
Solid state recorders have begun replacing traditional tape recorders for recording images on airborne platforms. With advances in electro-optical, IR, SAR, multi- and hyperspectral sensors and video recording requirements, solid state recorders have become the recorder of choice. Solid state recorders provide additional storage, higher sustained bandwidth, less power, less weight and a smaller footprint to meet current and future recording requirements. CALCULEX, Inc., manufactures a non-volatile flash memory solid state recorder called the MONSSTR (Modular Non-volatile Solid State Recorder). MONSSTR is being used to record images from many different digital sensors on high-performance aircraft such as the RF-4, F-16 and the Royal Air Force Tornado. MONSSTR, with its internal multiplexer, is also used to record instrumentation data, including multiple streams of PCM and multiple channels of 1553 data. Instrumentation data is being recorded by MONSSTR systems on a range of platforms including the F-22, F-15, F-16, Comanche helicopter and US Navy torpedoes. MONSSTR can also be used as a cockpit video recorder. This paper provides an update on the MONSSTR.
A Hadoop-Based Distributed Framework for Efficiently Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed directly and in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
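The MapReduce pattern the framework builds on can be illustrated with a tiny in-process sketch: a map phase emits keyed intermediate values per image tile, and a reduce phase aggregates them per scene. The scene/tile names and the per-tile operator are made up for illustration; in the actual framework the operator would be an Orfeo toolbox processing step running on HDFS splits.

```python
from collections import defaultdict
from functools import reduce

def map_phase(records, mapper):
    """Apply the map function to every (key, value) input record."""
    for key, value in records:
        yield from mapper(key, value)

def reduce_phase(pairs, reducer):
    """Group intermediate pairs by key, then fold each group."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {k: reduce(reducer, vs) for k, vs in groups.items()}

# Toy job: per-scene pixel sums computed tile by tile.
def sum_mapper(scene, tile):
    yield scene, sum(tile)

tiles = [("scene_a", [1, 2, 3]), ("scene_a", [4, 5]), ("scene_b", [10])]
totals = reduce_phase(map_phase(tiles, sum_mapper), lambda a, b: a + b)
# totals == {"scene_a": 15, "scene_b": 10}
```

Because each tile is mapped independently, the same job parallelizes across HDFS blocks with no change to the mapper.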
NASA Astrophysics Data System (ADS)
Cowell, Martin Andrew
The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. Every aspect of global economies now integrates distributed computing into its industrial systems, and these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes can increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First, we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss the experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices, together with collaborating labs, into functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components.
In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.
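The reservoir-sizing and reliability analysis described above boils down to stepping a node's stored energy through a time-varying harvest profile against its load. A toy balance model follows; the harvest profile, load, and capacity numbers are all made up for illustration, not parameters from this work.

```python
def simulate_node(harvest_w, load_w, capacity_j, dt_s=3600.0, start_j=None):
    """Step a sensor node's energy reservoir through time-varying
    harvesting and a constant load.

    Returns the state-of-charge trace (J) and whether the node stayed
    powered for the whole horizon. A deliberately simple model: no
    conversion losses, leakage, or rate limits.
    """
    soc = capacity_j if start_j is None else start_j
    trace, alive = [], True
    for p_in in harvest_w:
        soc = min(capacity_j, max(0.0, soc + (p_in - load_w) * dt_s))
        alive = alive and soc > 0.0
        trace.append(soc)
    return trace, alive

# 24 hourly steps of solar-like harvesting (zero at night, 20 mW for
# 8 daytime hours) against a constant 5 mW load, 500 J reservoir:
harvest = [0.0] * 8 + [0.02] * 8 + [0.0] * 8
trace, alive = simulate_node(harvest, 0.005, capacity_j=500.0)
```

Sweeping `capacity_j` and the harvester size until `alive` holds across representative harvest traces is the essence of the optimal-sizing question posed in the text.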
Xing, Yage; Xu, Qinglian; Yang, Simon X.; Chen, Cunkun; Tang, Yong; Sun, Shumin; Zhang, Liang; Che, Zhenming; Li, Xihong
2016-01-01
Chitosan-based coatings with antimicrobial agents have been developed recently to control the decay of fruits. However, their preservation and antimicrobial mechanisms are still not well understood. The preservation mechanism of a chitosan coating with cinnamon oil for fruit storage is investigated in this paper. Atomic force microscopy images show that many micropores exist in the chitosan coating film. The roughness of the coating film is affected by the concentration of chitosan. The antifungal activity of cinnamon oil should be mainly due to its main constituent, trans-cinnamaldehyde; the activity is proportional to the trans-cinnamaldehyde concentration and improves with longer attachment time of the oil. The exosmosis ratios of Penicillium citrinum and Aspergillus flavus could be enhanced by increasing the concentration of cinnamon oil. Morphological observation indicates that, compared to normal cells, wizened mycelium of A. flavus is observed around the inhibition zone, and the growth of spores is also inhibited. Moreover, the analysis of gas sensors indicates that the chitosan-oil coating could decrease the level of O2 and increase the level of CO2 in the packaging of cherry fruits, which also controls fruit decay. These results indicate that the preservation mechanism might be partly due to the microporous structure of the coating film, acting as a barrier for gases and a carrier for the oil, and partly due to the activity of cinnamon oil in cell disruption. PMID:27438841
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged as low-cost sensors offering impressive functionality (e.g., data acquisition, computing, and communication) and modular installation. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (above-Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need to store and operate on the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, a property similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random sampling) consistent with the compressed sensing framework, often relegating evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis of the compressed data remains accurate.
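The CS framework described above can be illustrated end to end with a standard textbook recovery routine: a sparse signal is measured through a random matrix with fewer measurements than samples, then recovered by Orthogonal Matching Pursuit. This is a generic sketch of the framework, not the sampling scheme or solver used on the Narada nodes; the dimensions and support are made up.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily estimate a k-sparse x
    from compressed measurements y = Phi @ x.
    """
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the selected columns, update residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                      # samples, measurements, sparsity
x = np.zeros(n)
x[[5, 20, 41]] = [1.0, -2.0, 0.5]        # sparse "response signal"
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling matrix
y = Phi @ x                              # only m = n/2 values stored/sent
x_hat = omp(Phi, y, k)
```

The node only ever stores and transmits the m-length `y`; reconstruction happens off-node, which is the source of the energy savings reported in the study.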
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Direct-Solve Image-Based Wavefront Sensing; Use of UV Sources for Detection and Identification of Explosives; Using Fluorescent Viruses for Detecting Bacteria in Water; Gradiometer Using Middle Loops as Sensing Elements in a Low-Field SQUID MRI System; Volcano Monitor: Autonomous Triggering of In-Situ Sensors; Wireless Fluid-Level Sensors for Harsh Environments; Interference-Detection Module in a Digital Radar Receiver; Modal Vibration Analysis of Large Castings; Structural/Radiation-Shielding Epoxies; Integrated Multilayer Insulation; Apparatus for Screening Multiple Oxygen-Reduction Catalysts; Determining Aliasing in Isolated Signal Conditioning Modules; Composite Bipolar Plate for Unitized Fuel Cell/Electrolyzer Systems; Spectrum Analyzers Incorporating Tunable WGM Resonators; Quantum-Well Thermophotovoltaic Cells; Bounded-Angle Iterative Decoding of LDPC Codes; Conversion from Tree to Graph Representation of Requirements; Parallel Hybrid Vehicle Optimal Storage System; and Anaerobic Digestion in a Flooded Densified Leachbed.
Qian, Fang; Zhang, Changli; Zhang, Yumin; He, Weijiang; Gao, Xiang; Hu, Ping; Guo, Zijian
2009-02-04
The UV- and sensor-induced interference to living systems poses a barrier to in vivo Zn(2+) imaging. In this work, an intramolecular charge transfer (ICT) fluorophore with a smaller aromatic plane, 4-amino-7-nitro-2,1,3-benzoxadiazole, was adopted to construct a visible-light-excited fluorescent Zn(2+) sensor, NBD-TPEA. This sensor demonstrates a visible ICT absorption band, a large Stokes shift, and biocompatibility. It emits weakly (Phi = 0.003) without pH dependence at pH 7.1-10.1, and the lambda(ex) and lambda(em) are 469 nm (epsilon(469) = 2.1 x 10(4) M(-1) cm(-1)) and 550 nm, respectively. NBD-TPEA displays distinctly selective Zn(2+)-amplified fluorescence (Phi = 0.046, epsilon(469) = 1.4 x 10(4) M(-1) cm(-1)) with an emission shift from 550 to 534 nm, which can be ascribed to synergic Zn(2+) coordination by the outer bis(pyridin-2-ylmethyl)amine (BPA) and the 4-amine. The Zn(2+) binding ratio of NBD-TPEA is 1:1. By comparison with its analogues NBD-BPA and NBD-PMA, which have no Zn(2+) affinity, the outer BPA in NBD-TPEA should be responsible for the Zn(2+)-induced blockage of photoinduced electron transfer as well as for the enhanced Zn(2+) binding ability of the 4-amine. Successful intracellular Zn(2+) imaging of living cells with NBD-TPEA staining exhibited preferential accumulation at the lysosome and Golgi, with dual excitability at either 458 or 488 nm. Intact in vivo Zn(2+) fluorescence imaging of zebrafish embryos and larvae stained with NBD-TPEA revealed two zygomorphic luminescent areas around the ventricle, which could be related to Zn(2+) storage for zebrafish development. Moreover, high Zn(2+) concentrations in the developing neuromasts of zebrafish can be visualized by confocal fluorescence imaging. This study demonstrates a novel strategy to construct visible-light-excited Zn(2+) fluorescent sensors based on ICT fluorophores other than xanthenone analogues.
Current data show that NBD-TPEA staining can be a reliable approach for intact in vivo Zn(2+) imaging of zebrafish larvae as well as for clarifying the subcellular distribution of Zn(2+) in vitro.
Intelligent Network-Centric Sensors Development Program
2012-07-31
[Extraction-garbled listing of image sensor configurations; recoverable content: 360-degree cone LWIR, MWIR, and SWIR PFx sensor and video configurations.] 2. Reasoning Process to Match Sensor Systems to Algorithms. The ontological ... effects of coherent imaging because of aberrations. Another reason is the specular nature of active imaging. Both contribute to the nonuniformity
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.
2016-01-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. PMID:27239941
Kelly, Caroline A; Cruz-Romero, Malco; Kerry, Joseph P; Papkovsky, Dmitri P
2018-05-02
The commercially available optical oxygen-sensing system Optech-O₂ Platinum was applied to nondestructively assess the in situ performance of bulk, vacuum-packaged raw beef in three ~300 kg containers. Twenty sensors were attached to the inner surface of the standard bin-contained laminate bag (10 on the front and back sides), such that after filling with meat and sealing under vacuum, the sensors were accessible for optical interrogation with the external reader device. After filling and sealing each bag, the sensors were measured repetitively and nondestructively over a 15-day storage period at 1 °C, thus tracking residual oxygen distribution in the bag and changes during storage. The sensors revealed a number of previously unidentified meat quality and processing issues and informed an improvement to the packaging process: adding flakes of dry ice to the bag. Sensor utility in mapping the distribution of residual O₂ in sealed bulk containers and in optimising and improving the packaging process, including handling and storage of bulk vacuum-packaged meat bins, was evident.
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
Cameras for digital microscopy.
Spring, Kenneth R
2013-01-01
This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in detector technology. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements) that simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, by contrast, several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulty of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Crowley, M.; Fortin, J. A.; Lee, J.; Perez, E.; Sleeter, B. M.; Thau, D.
2016-12-01
With the opening of the Landsat archive, researchers have a vast new data source teeming with imagery and potential. Beyond Landsat, data from other sensors is newly available as well: these include ALOS/PALSAR, Sentinel-1 and -2, MERIS, and many more. Google Earth Engine, developed to organize and provide analysis tools for these immense data sets, is an ideal platform for researchers trying to sift through huge image stacks. It offers nearly unlimited processing power and storage with a straightforward programming interface. Yet labeling land-cover change through time remains challenging given the current state of the art for interpreting remote sensing image sequences. Moreover, combining data from very different image platforms remains quite difficult. To address these challenges, we developed the BULC algorithm (Bayesian Updating of Land Cover), designed for the continuous updating of land-cover classifications through time in large data sets. The algorithm ingests data from any of the wide variety of earth-resources sensors; it maintains a running estimate of land-cover probabilities and the most probable class at all time points along a sequence of events. Here we compare BULC results from two study sites that witnessed considerable forest change in the last 40 years: the Pacific Northwest of the United States and the Mato Grosso region of Brazil. In Brazil, we incorporated rough classifications from more than 100 images of varying quality, mixing imagery from more than 10 different sensors. In the Pacific Northwest, we used BULC to identify forest changes due to logging and urbanization from 1973 to the present. Both regions had classification sequences that were better than many of the component days, effectively ignoring clouds and other unwanted noise while fusing the information contained on several platforms. 
As we leave remote sensing's data-poor era and enter a period with multiple looks at Earth's surface from multiple sensors over a short period of time, the BULC algorithm can help to sift through images of varying quality in Google Earth Engine to extract the most useful information for mapping the state and history of Earth's land cover.
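Per pixel, the running estimate that BULC maintains reduces to a recursive Bayes update of class probabilities as each new (possibly noisy) classification arrives. The sketch below assumes a hypothetical three-class problem and an invented confusion-matrix likelihood; the published algorithm's actual classes and likelihood estimation are not reproduced here.

```python
import numpy as np

# One pixel, 3 hypothetical land-cover classes: forest, cleared, water.
prior = np.array([1/3, 1/3, 1/3])

# Likelihood P(observed label | true class) for an imperfect classifier,
# e.g. estimated from a confusion matrix (rows: truth, cols: observation).
L = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.10, 0.10, 0.80]])

def bayes_update(p, obs):
    """Bayes rule: posterior is proportional to prior times P(obs | class)."""
    post = p * L[:, obs]
    return post / post.sum()

p = prior
for obs in [0, 0, 1, 0, 0]:    # a sequence of per-date classifications
    p = bayes_update(p, obs)
print(p.argmax())               # most probable class after the sequence -> 0
```

Because the update is multiplicative, a single cloudy-date misclassification (the `1` in the sequence) is outweighed by the agreeing observations around it, which is how the sequence can be better than its component days.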
NASA Astrophysics Data System (ADS)
Cardille, J. A.
2015-12-01
With the opening of the Landsat archive, researchers have a vast new data source teeming with imagery and potential. Beyond Landsat, data from other sensors is newly available as well: these include ALOS/PALSAR, Sentinel-1 and -2, MERIS, and many more. Google Earth Engine, developed to organize and provide analysis tools for these immense data sets, is an ideal platform for researchers trying to sift through huge image stacks. It offers nearly unlimited processing power and storage with a straightforward programming interface. Yet labeling forest change through time remains challenging given the current state of the art for interpreting remote sensing image sequences. Moreover, combining data from very different image platforms remains quite difficult. To address these challenges, we developed the BULC algorithm (Bayesian Updating of Land Cover), designed for the continuous updating of land-cover classifications through time in large data sets. The algorithm ingests data from any of the wide variety of earth-resources sensors; it maintains a running estimate of land-cover probabilities and the most probable class at all time points along a sequence of events. Here we compare BULC results from two study sites that witnessed considerable forest change in the last 40 years: the Pacific Northwest of the United States and the Mato Grosso region of Brazil. In Brazil, we incorporated rough classifications from more than 100 images of varying quality, mixing imagery from more than 10 different sensors. In the Pacific Northwest, we used BULC to identify forest changes due to logging and urbanization from 1973 to the present. Both regions had classification sequences that were better than many of the component days, effectively ignoring clouds and other unwanted signal while fusing the information contained on several platforms. 
As we leave remote sensing's data-poor era and enter a period with multiple looks at Earth's surface from multiple sensors over a short period of time, this algorithm may help to sift through images of varying quality in Google Earth Engine to extract the most useful information for mapping.
Integrating Sensor-Collected Intelligence
2008-11-01
collecting, processing, data storage and fusion, and the dissemination of information collected by Intelligence, Surveillance, and Reconnaissance (ISR...Grid – Bandwidth Expansion (GIG-BE) program) to provide the capability to transfer data from sensors to accessible storage and satellite and airborne...based ISR is much more fragile. There was a purposeful drawdown of these systems following the Cold War and modernization programs were planned to
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
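The LS-SVM formulation that the framework adapts replaces the standard SVM's inequality constraints with equalities, so batch training reduces to solving a single linear (KKT) system. A minimal sketch on toy two-class data follows; the paper's incremental updates, model aggregation, and sensor-network machinery are omitted, and the kernel width and regularization values are arbitrary.

```python
import numpy as np

def rbf(X, Z, s=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Z."""
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s * s))

def lssvm_train(X, y, gamma=10.0, s=1.0):
    """Solve the LS-SVM KKT system [[0, y^T], [y, yy^T*K + I/gamma]][b; a] = [0; 1]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = np.outer(y, y) * rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]                       # bias b, multipliers alpha

def lssvm_predict(X, y, alpha, b, Xq, s=1.0):
    return np.sign(rbf(Xq, X, s) @ (alpha * y) + b)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
acc = (lssvm_predict(X, y, alpha, b, X) == y).mean()
print(acc)    # training accuracy on well-separated blobs
```

Because training is a linear solve, incremental variants can update the solution as new samples arrive instead of re-solving from scratch, which is what motivates LS-SVM for online learning at the cluster head.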
21 CFR 892.2010 - Medical image storage device.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Medical image storage device. 892.2010 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.2010 Medical image storage device. (a) Identification. A medical image storage device is a device that provides electronic storage and retrieval...
21 CFR 892.2010 - Medical image storage device.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Medical image storage device. 892.2010 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.2010 Medical image storage device. (a) Identification. A medical image storage device is a device that provides electronic storage and retrieval...
21 CFR 892.2010 - Medical image storage device.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Medical image storage device. 892.2010 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.2010 Medical image storage device. (a) Identification. A medical image storage device is a device that provides electronic storage and retrieval...
21 CFR 892.2010 - Medical image storage device.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Medical image storage device. 892.2010 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.2010 Medical image storage device. (a) Identification. A medical image storage device is a device that provides electronic storage and retrieval...
21 CFR 892.2010 - Medical image storage device.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Medical image storage device. 892.2010 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.2010 Medical image storage device. (a) Identification. A medical image storage device is a device that provides electronic storage and retrieval...
NASA Astrophysics Data System (ADS)
Cruz, Febus Reidj G.; Padilla, Dionis A.; Hortinela, Carlos C.; Bucog, Krissel C.; Sarto, Mildred C.; Sia, Nirlu Sebastian A.; Chung, Wen-Yaw
2017-02-01
This study determines the moisture content of milled rice using an image processing technique and a perceptron neural network algorithm. The algorithm takes several inputs and produces a single output: the moisture content of the milled rice. Several types of milled rice are used in this study, namely: Jasmine, Kokuyu, 5-Star, Ifugao, Malagkit, and NFA rice. The captured images are processed using MATLAB R2013a software. A USB dongle connected to the router provides the internet connection for online web access. The GizDuino IOT-644 handles the temperature and humidity sensor and sends and receives data between the computer and cloud storage. The result is compared to the actual moisture content range obtained with a moisture tester for milled rice. Based on the results, this study provides accurate data for determining the moisture content of milled rice.
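The perceptron step of such a pipeline can be sketched with the classic learning rule: weights are nudged whenever a sample is misclassified. The features, threshold, and synthetic data below are illustrative assumptions (the study's real inputs are image statistics of the rice plus temperature/humidity readings), and the task is simplified to classifying moisture as above or below a threshold.

```python
import numpy as np

# Synthetic stand-ins for image/sensor features, e.g. mean R, G, B, humidity.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (300, 4))
score = X @ np.array([0.2, -0.5, 0.1, 0.9])     # hypothetical "true" rule
keep = np.abs(score - 0.35) > 0.05              # enforce a separation margin
X, y = X[keep], np.where(score[keep] > 0.35, 1, -1)

def perceptron(X, y, epochs=200, lr=0.1):
    """Classic perceptron rule: adjust weights only on misclassified samples."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:          # wrong side of the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

w, b = perceptron(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(acc)
```

With linearly separable data (guaranteed here by the margin filter), the perceptron convergence theorem ensures the rule eventually classifies all training samples correctly.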
Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)
NASA Astrophysics Data System (ADS)
Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy
2013-05-01
GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe
2017-01-01
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian
2017-07-18
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
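The registration step described above, template matching around detected feature points seeded by an IMU prior, can be sketched for pure translation using normalized cross-correlation. The scene, shift, and hand-picked feature point below are synthetic stand-ins; FAST detection, the homography estimation, and the FPGA resampling are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
base = rng.uniform(0, 1, (64, 64))            # synthetic "scene" frame 1
dy, dx = 3, -2                                 # true inter-frame shift (toy)
shifted = np.roll(np.roll(base, dy, axis=0), dx, axis=1)   # frame 2

# Template around a feature point in frame 1 (FAST would supply such points;
# here the point is chosen by hand).
py, px, h = 32, 32, 8
tmpl = base[py - h:py + h, px - h:px + h]

def match(img, tmpl, cy, cx, search=6):
    """Exhaustive template match in a small search window (NCC score)."""
    th, tw = tmpl.shape
    best, best_dy, best_dx = -np.inf, 0, 0
    t = (tmpl - tmpl.mean()) / tmpl.std()
    for sy in range(-search, search + 1):
        for sx in range(-search, search + 1):
            y0, x0 = cy - th // 2 + sy, cx - tw // 2 + sx
            w = img[y0:y0 + th, x0:x0 + tw]
            wn = (w - w.mean()) / w.std()
            if (t * wn).mean() > best:
                best, best_dy, best_dx = (t * wn).mean(), sy, sx
    return best_dy, best_dx

print(match(shifted, tmpl, py, px))   # -> (3, -2), the true shift
```

An IMU prior shrinks the `search` window, which is why the on-board sensor "greatly benefits" the matching stage: fewer candidate offsets mean less computation per feature point.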
Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S
2016-09-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sensor system for fuel transport vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earl, Dennis Duncan; McIntyre, Timothy J.; West, David L.
An exemplary sensor system for a fuel transport vehicle can comprise a fuel marker sensor positioned between a fuel storage chamber of the vehicle and an access valve for the fuel storage chamber of the vehicle. The fuel marker sensor can be configured to measure one or more characteristics of one or more fuel markers present in the fuel adjacent the sensor, such as when the marked fuel is unloaded at a retail station. The one or more characteristics can comprise concentration and/or identity of the one or more fuel markers in the fuel. Based on the measured characteristics of the one or more fuel markers, the sensor system can identify the fuel and/or can determine whether the fuel has been adulterated after the marked fuel was last measured, such as when the marked fuel was loaded into the vehicle.
NASA Technical Reports Server (NTRS)
Chang, Chen J. (Inventor); Liaghati, Jr., Amir L. (Inventor); Liaghati, Mahsa L. (Inventor)
2018-01-01
Methods and apparatus are provided for telemetry processing using a telemetry processor. The telemetry processor can include a plurality of communications interfaces, a computer processor, and data storage. The telemetry processor can buffer sensor data by: receiving a frame of sensor data using a first communications interface and clock data using a second communications interface, receiving an end of frame signal using a third communications interface, and storing the received frame of sensor data in the data storage. After buffering the sensor data, the telemetry processor can generate an encapsulated data packet including a single encapsulated data packet header, the buffered sensor data, and identifiers identifying telemetry devices that provided the sensor data. A format of the encapsulated data packet can comply with a Consultative Committee for Space Data Systems (CCSDS) standard. The telemetry processor can send the encapsulated data packet using a fourth and a fifth communications interfaces.
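The CCSDS-compliant encapsulation step can be illustrated by packing the 6-octet Space Packet primary header (version, type, secondary-header flag, APID, sequence flags, sequence count, and data length minus one). The APID, sequence count, and payload below are arbitrary examples, not values from the patent.

```python
import struct

def ccsds_primary_header(apid, seq_count, data_len,
                         version=0, pkt_type=0, sec_hdr=0, seq_flags=0b11):
    """Pack the 6-octet CCSDS Space Packet primary header (big-endian).

    data_len is the packet data field length in octets; per the standard,
    the header stores data_len - 1.
    """
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    return struct.pack(">HHH", word1, word2, data_len - 1)

payload = b"\x01\x02\x03\x04"          # a buffered sensor frame (toy bytes)
packet = ccsds_primary_header(apid=0x2A, seq_count=7, data_len=len(payload)) + payload
print(packet.hex())                     # header 002a c007 0003, then payload
```

A single header followed by the buffered frames is what makes the encapsulated packet compact: per-device identifiers live inside the data field rather than in separate packet headers.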
NASA Astrophysics Data System (ADS)
Guggenheim, James A.; Zhang, Edward Z.; Beard, Paul C.
2017-03-01
The planar Fabry-Pérot (FP) sensor provides high quality photoacoustic (PA) images but beam walk-off limits sensitivity and thus penetration depth to ≈1 cm. Planoconcave microresonator sensors eliminate beam walk-off, enabling sensitivity to be increased by an order of magnitude whilst retaining the highly favourable frequency response and directional characteristics of the FP sensor. The first tomographic PA images obtained in a tissue-realistic phantom using the new sensors are described. These show that the microresonator sensors provide near identical image quality to the planar FP sensor but with significantly greater penetration depth (e.g. 2-3 cm) due to their higher sensitivity. This offers the prospect of whole body small animal imaging and clinical imaging to depths previously unattainable using the FP planar sensor.
NASA Astrophysics Data System (ADS)
Resnick, Michael Murray
Surface exploration of the Moon and asteroids can provide important information to scientists regarding the origins of the solar system and life. Small robots and sensor modules can enable low-cost surface exploration; in the near future, they will be the main machines providing these answers. Advances in electronics, sensors, and actuators enable ever smaller platforms without compromising functionality. However, similar advances have not taken place for power supplies and thermal control systems. The lunar south pole has temperatures in the range of -100 to -150 °C. Similarly, asteroid surfaces can encounter temperatures of -150 °C. Most electronics and batteries do not work below -40 °C. An effective thermal control system is therefore critical to making small robots and sensor modules for extreme environments feasible. In this work, the feasibility of using thermochemical storage materials as a thermal control solution is analyzed for small robots and sensor modules in lunar and asteroid surface environments. The presented technology focuses on using resources that are readily generated as waste products aboard a spacecraft or are available off-world through In-Situ Resource Utilization (ISRU). A sensor module for extreme environments has been designed and prototyped. Our intention is to have a network of tens or hundreds of sensor modules that can communicate and interact with each other while also gathering science data. The design contains environmental sensors, such as temperature sensors and an IMU (containing an accelerometer, gyro, and magnetometer), to gather data. The sensor module would nominally contain an electrical heater and insulation. The heating effect provided by this active heater is compared with that of the proposed technology, which utilizes thermochemical storage chemicals. Our results show that a thermochemical-storage-based thermal control system is feasible for use at extreme temperatures.
A performance increase of 80% is predicted for sensor modules on the asteroid Eros using the thermochemical storage system. At the laboratory level, a performance increase of 8 to 9% is observed at ambient temperatures of -32 °C and -40 °C.
On-board multispectral classification study
NASA Technical Reports Server (NTRS)
Ewalt, D.
1979-01-01
The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.
NASA Technical Reports Server (NTRS)
Katz, Y. H.
1973-01-01
Visual tracking performance in instrumentation is discussed together with photographic pyrometry in an aeroballistic range, optical characteristics of spherical vapor bubbles in liquids, and the automatic detection and control of surface roughness by coherent diffraction patterns. Other subjects explored are related to instruments, sensors, systems, holography, and pattern recognition. Questions of data handling are also investigated, taking into account minicomputer image storage for holographic interferometry analysis, the design of a video amplifier for a 90 MHz bandwidth, and autostereoscopic screens. Individual items are announced in this issue.
The applicability of frame imaging from a spinning spacecraft. Volume 1: Summary report
NASA Technical Reports Server (NTRS)
Botticelli, R. A.; Johnson, R. O.; Wallmark, G. N.
1973-01-01
A detailed study was made of frame-type imaging systems for use on board a spin stabilized spacecraft for outer planets applications. All types of frame imagers capable of performing this mission were considered, regardless of the current state of the art. Detailed sensor models of these systems were developed at the component level and used in the subsequent analyses. An overall assessment was then made of the various systems based upon results of a worst-case performance analysis, foreseeable technology problems, and the relative reliability and radiation tolerance of the systems. Special attention was directed at restraints imposed by image motion and the limited data transmission and storage capability of the spacecraft. Based upon this overall assessment, the most promising systems were selected and then examined in detail for a specified Jupiter orbiter mission. The relative merits of each selected system were then analyzed, and the system design characteristics were demonstrated using preliminary configurations, block diagrams, and tables of estimated weights, volumes and power consumption.
NASA Technical Reports Server (NTRS)
Hovis, W.; Smith, D.; Mcculloch, A.; Goldberg, I. L.; Ostrow, H.; Seidenberg, B.
1973-01-01
Examples of contamination of sensors from various sources during space missions are presented. Design precautions to provide access to optical surfaces and venting of outgassing products are recommended as methods for coping with contamination. The effects of the sensor materials on sensor contamination are analyzed. Actions to be taken during transportation, storage, and testing of sensors to avoid contamination are discussed.
Carbon Nanotube based Nanotechnology
NASA Astrophysics Data System (ADS)
Meyyappan, M.
2000-10-01
The carbon nanotube (CNT) was discovered in the early 1990s and is an offspring of C60 (the fullerene or buckyball). CNTs, depending on chirality and diameter, can be metallic or semiconducting and thus allow formation of metal-semiconductor and semiconductor-semiconductor junctions. CNTs exhibit extraordinary electrical and mechanical properties and offer remarkable potential for revolutionary applications in electronic devices, computing and data storage technology, sensors, composites, storage of hydrogen or lithium for battery development, nanoelectromechanical systems (NEMS), and as tips in scanning probe microscopy (SPM) for imaging and nanolithography. Thus CNT synthesis, characterization and applications touch upon all disciplines of science and engineering. A common growth method now is based on CVD, though surface catalysis is key to the synthesis, in contrast to many CVD applications common in microelectronics. A plasma-based variation is gaining some attention. This talk will provide an overview of CNT properties, growth methods, applications, and research challenges and opportunities ahead.
Digital radiography and caries diagnosis.
Wenzel, A
1998-01-01
Direct digital acquisition of intra-oral radiographs has been possible only in the last decade. Several studies have shown that, theoretically, there are a number of advantages of direct digital radiography compared with conventional film. Laboratory as well as controlled clinical studies are needed to determine whether new digital imaging systems alter diagnosis, treatment and prognosis compared with conventional methods. Most studies so far have evaluated their diagnostic performance only in laboratory settings. This review concentrates on what evidence we have for the diagnostic efficacy of digital systems for caries detection. Digital systems are compared with film, and those studies which have evaluated the effects on diagnostic accuracy of contrast and edge enhancement, image size, variations in radiation dose and image compression are reviewed together with the use of automated image analysis for caries diagnosis. Digital intra-oral radiographic systems seem to be as accurate as the currently available dental films for the detection of caries. Sensitivities are relatively high (0.6-0.8) for detection of occlusal lesions into dentine with false positive fractions of 5-10%. A radiolucency in dentine is recognised as a good predictor for demineralisation. Radiography is of no value for the detection of initial (enamel) occlusal lesions. For detection of approximal dentinal lesions, sensitivities, specificities as well as the predictive values are fair, but are very poor for lesions known to be confined to enamel. Very little documented information exists, however, on the utilization of digital systems in the clinic. It is not known whether dose is actually reduced with the storage phosphor system, or whether collimator size is adjusted to fit sensor size in the CCD-based systems. There is no evidence that the number of retakes has been reduced.
It is not known how many images are needed with the various CCD systems when compared with a conventional bitewing, nor how stable these systems are in daily clinical use, or whether proper cross-infection control can be maintained in relation to scanning the storage phosphor plates and handling the sensors and cables. There is only sparse evidence that the enhancement facilities are used when interpreting images, and none that this has changed working practices or treatment decisions. The economic consequences for the patient, dentist and society require examination.
Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.
1993-01-01
The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via mouse input, which is translated to output voltages by the D/A converter on the I/O board to drive the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.
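The 'full field' reduction described above is simply a per-point rms over each sampled time series. A minimal sketch (function names are illustrative, not taken from the VPI software):

```python
import numpy as np

def rms_velocity(samples: np.ndarray) -> float:
    """Root-mean-square of one sampled velocity time series,
    as computed per measurement point in the 'full field' mode."""
    return float(np.sqrt(np.mean(np.square(samples))))

def full_field_rms(time_series_grid: np.ndarray) -> np.ndarray:
    """Reduce a (rows, cols, n_samples) block of time series to a
    (rows, cols) map of rms values for display and storage."""
    return np.sqrt(np.mean(np.square(time_series_grid), axis=-1))
```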
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Julian; Tate, Mark W.; Shanks, Katherine S.
Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)² pixels.
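The MM-PAD's extended dynamic range follows from simple bookkeeping: the total integrated signal is the overflow count times the charge removed at each overflow, plus the residual left on the integrator at readout. A hedged sketch (function and parameter names are hypothetical, not from the MM-PAD firmware):

```python
def reconstruct_signal(overflow_count: int, residual_adu: float,
                       charge_per_overflow_adu: float) -> float:
    """Total per-pixel signal for an overflow-counting integrator:
    (charge removed per overflow) x (number of overflows) + residual."""
    return overflow_count * charge_per_overflow_adu + residual_adu
```

With, say, 3 overflows of 100 ADU each and a 10 ADU residual, the reconstructed signal is 310 ADU; the counter extends the range far beyond a single integrator's full well.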
In-network Coding for Resilient Sensor Data Storage and Efficient Data Mule Collection
NASA Astrophysics Data System (ADS)
Albano, Michele; Gao, Jie
In a sensor network of n nodes in which k of them have sensed interesting data, we perform in-network erasure coding such that each node stores a linear combination of all the network data with random coefficients. This scheme greatly improves data resilience to node failures: as long as there are k nodes that survive an attack, all the data produced in the sensor network can be recovered with high probability. The in-network coding storage scheme also improves data collection rate by mobile mules and allows for easy scheduling of data mules.
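The storage scheme can be illustrated with random linear coding over a finite field: each node stores random coefficients together with the corresponding combination of all k data symbols, and any k linearly independent equations recover the data. A sketch assuming a GF(257) field (the paper does not specify the field; byte-valued data 0..255 embeds directly):

```python
import random

P = 257  # prime field size (assumed for illustration)

def encode(data, rng):
    """One node's stored record: random coefficients and the resulting
    linear combination of all k data symbols over GF(P)."""
    coeffs = [rng.randrange(P) for _ in data]
    value = sum(c * d for c, d in zip(coeffs, data)) % P
    return coeffs, value

def decode(equations):
    """Recover the k data symbols from k surviving nodes' records
    by Gaussian elimination over GF(P)."""
    k = len(equations)
    m = [list(coeffs) + [val] for coeffs, val in equations]  # [A | b]
    for col in range(k):
        piv = next(r for r in range(col, k) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], P - 2, P)       # Fermat inverse mod P
        m[col] = [x * inv % P for x in m[col]]
        for r in range(k):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % P for a, b in zip(m[r], m[col])]
    return [row[k] for row in m]
```

Any k of the n stored records whose coefficient vectors are independent (which random coefficients give with high probability) suffice to decode.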
Electrochemical Impedance Sensors for Monitoring Trace Amounts of NO3 in Selected Growing Media.
Ghaffari, Seyed Alireza; Caron, William-O; Loubier, Mathilde; Normandeau, Charles-O; Viens, Jeff; Lamhamedi, Mohammed S; Gosselin, Benoit; Messaddeq, Younes
2015-07-21
With the advent of smart cities and big data, precision agriculture allows the feeding of sensor data into online databases for continuous crop monitoring, production optimization, and data storage. This paper describes a low-cost, compact, and scalable nitrate sensor based on electrochemical impedance spectroscopy for monitoring trace amounts of NO3- in selected growing media. The nitrate sensor can be integrated to conventional microelectronics to perform online nitrate sensing continuously over a wide concentration range from 0.1 ppm to 100 ppm, with a response time of about 1 min, and feed data into a database for storage and analysis. The paper describes the structural design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the field testing of the nitrate sensor performed within tree nursery settings under ISO/IEC 17025 certifications.
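The paper's circuit model is not given here, but Nyquist impedance responses of the kind described are commonly modeled with a simplified Randles cell (solution resistance in series with a charge-transfer resistance parallel to a double-layer capacitance). An illustrative sketch with made-up component values, not the sensor's actual parameters:

```python
import numpy as np

def randles_impedance(freq_hz, r_s=100.0, r_ct=1_000.0, c_dl=1e-6):
    """Complex impedance of a simplified Randles cell:
    Z = R_s + (R_ct || C_dl). All values are illustrative."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_parallel = 1.0 / (1.0 / r_ct + 1j * omega * c_dl)
    return r_s + z_parallel

freqs = np.logspace(0, 5, 50)                 # 1 Hz .. 100 kHz sweep
z = randles_impedance(freqs)
nyquist = np.column_stack([z.real, -z.imag])  # Nyquist-plot points
```

Sweeping frequency traces the characteristic semicircle between R_s (high frequency) and R_s + R_ct (low frequency) from which fitted parameters, and hence analyte concentration, can be inferred.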
Electrochemical Impedance Sensors for Monitoring Trace Amounts of NO3 in Selected Growing Media
Ghaffari, Seyed Alireza; Caron, William-O.; Loubier, Mathilde; Normandeau, Charles-O.; Viens, Jeff; Lamhamedi, Mohammed S.; Gosselin, Benoit; Messaddeq, Younes
2015-01-01
With the advent of smart cities and big data, precision agriculture allows the feeding of sensor data into online databases for continuous crop monitoring, production optimization, and data storage. This paper describes a low-cost, compact, and scalable nitrate sensor based on electrochemical impedance spectroscopy for monitoring trace amounts of NO3− in selected growing media. The nitrate sensor can be integrated to conventional microelectronics to perform online nitrate sensing continuously over a wide concentration range from 0.1 ppm to 100 ppm, with a response time of about 1 min, and feed data into a database for storage and analysis. The paper describes the structural design, the Nyquist impedance response, the measurement sensitivity and accuracy, and the field testing of the nitrate sensor performed within tree nursery settings under ISO/IEC 17025 certifications. PMID:26197322
Robotic Vehicle Communications Interoperability
1988-08-01
[Flattened table excerpt: vehicle control functions (engine starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and a sensor suite listing (fiber-optic sensors, sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger; video camera selector: forward, stereo, rear; sensor control).]
Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.
Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk
2015-11-02
Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing.
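The rescaled correlation functions used by the method build on standard morphological descriptors such as the two-point probability function S2(r); a minimal FFT-based sketch for a binary phase image (periodic boundaries assumed, which the paper's rescaling procedure goes well beyond):

```python
import numpy as np

def two_point_probability(phase: np.ndarray) -> np.ndarray:
    """S2(r): probability that two points separated by lag r both fall
    in the given phase of a binary image, via FFT autocorrelation."""
    indicator = phase.astype(float)
    f = np.fft.fftn(indicator)
    corr = np.fft.ifftn(f * np.conj(f)).real / indicator.size
    return corr  # corr[0, 0] equals the phase's volume fraction
```

S2 computed at each scale (macro, micro, nano) and rescaled to a common lag axis is the kind of input a stochastic reconstruction then matches.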
Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock
Gerke, Kirill M.; Karsanina, Marina V.; Mallants, Dirk
2015-01-01
Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing. PMID:26522938
Stearns, Daniel G.; Vernon, Stephen P.; Ceglio, Natale M.; Hawryluk, Andrew M.
1999-01-01
A magnetoresistive sensor element with a three-dimensional micro-architecture is capable of significantly improved sensitivity and highly localized measurement of magnetic fields. The sensor is formed of a multilayer film of alternately magnetic and nonmagnetic materials. The sensor is optimally operated in a current perpendicular to plane mode. The sensor is useful in magnetic read/write heads, for high density magnetic information storage and retrieval.
Large space structures control algorithm characterization
NASA Technical Reports Server (NTRS)
Fogel, E.
1983-01-01
Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.
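As an illustration of such FLOP sizing (not the report's exact accounting), a per-cycle count for a dense observer-based feedback law scales with the number of control states n and sensor/actuator pairs m:

```python
def flops_per_cycle(n_states: int, n_pairs: int) -> int:
    """Rough per-cycle FLOP count for observer-based feedback:
    x <- A x + B u + L (y - C x), u = -K x. Dense matrices assumed;
    counts one multiply + one add per matrix-vector term.
    Illustrative sizing only."""
    n, m = n_states, n_pairs
    ax = 2 * n * n          # A x
    bu = 2 * n * m          # B u
    cx = 2 * m * n          # C x (predicted sensor outputs)
    ly = 2 * n * m + m      # L (y - C x), plus m subtractions
    kx = 2 * m * n          # u = -K x
    return ax + bu + cx + ly + kx
```

The dominant 2n² term from the state propagation is what makes the per-cycle demand grow quadratically with the number of control states.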
Compression in wearable sensor nodes: impacts of node topology.
Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther
2014-04-01
Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online and low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission/on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but in general at the cost of increased power consumption. We demonstrate that wireless sensor nodes can highly benefit from the use of compressive sensing and now can achieve power consumptions comparable to, or better than, the use of local memory.
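The paper's MSP430 implementation is not reproduced here, but the compressive sensing idea can be sketched generically: compress on the node with a random measurement matrix, reconstruct the sparse signal off the node, e.g. with Orthogonal Matching Pursuit. All dimensions and the seed below are illustrative:

```python
import numpy as np

def omp(phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse signal x
    from compressed measurements y = phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coeffs
    x = np.zeros(phi.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                             # illustrative sizes
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
x_hat = omp(phi, phi @ x_true, k)
```

On the node only the cheap product `phi @ x` is computed, halving storage/transmission here (32 samples instead of 64); the costly reconstruction runs on the receiver.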
An infrared/video fusion system for military robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.W.; Roberts, R.S.
1997-08-05
Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser, can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information, and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
Validation of Spaceborne Radar Surface Water Mapping with Optical sUAS Images
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Murnaghan, K.; Sherman, D.; Poncos, V.; Brisco, B.; Armenakis, C.
2015-08-01
The Canada Centre for Remote Sensing (CCRS) has over 40 years of experience with airborne and spaceborne sensors and is now starting to use small Unmanned Aerial Systems (sUAS) to validate products from large coverage area sensors and create new methodologies for very high resolution products. Wetlands have several functions including water storage and retention which can reduce flooding and provide continuous flow for hydroelectric generation and irrigation for agriculture. Synthetic Aperture Radar is well suited as a tool for monitoring surface water by supplying acquisitions irrespective of cloud cover or time of day. Wetlands can be subdivided into three classes: open water, flooded vegetation and upland which can vary seasonally with time and water level changes. RADARSAT-2 data from the Wide-Ultra Fine, Spotlight and Fine Quad-Pol modes has been used to map the open water in the Peace-Athabasca Delta, Alberta using intensity thresholding. We also use spotlight modes for higher resolution and the fully polarimetric mode (FQ) for polarimetric decomposition. Validation of these products will be done using a low altitude flying sUAS to generate optical georeferenced images. This project provides methodologies which could be used for flood mapping as well as ecological monitoring.
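The intensity thresholding used to map open water can be automated; Otsu's method is one common choice for separating the low-backscatter water mode from land (the abstract does not state which estimator was used):

```python
import numpy as np

def otsu_threshold(intensity: np.ndarray, bins: int = 256) -> float:
    """Global threshold maximizing between-class variance (Otsu),
    applied to a SAR intensity image."""
    hist, edges = np.histogram(intensity.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # lower-class probability
    mu = np.cumsum(p * centers)          # lower-class cumulative mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu[-1] * w0[valid] - mu[valid]) ** 2 \
        / (w0[valid] * w1[valid])
    return float(centers[np.argmax(between)])

def water_mask(intensity: np.ndarray) -> np.ndarray:
    """Open water: backscatter below the automatic threshold."""
    return intensity < otsu_threshold(intensity)
```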
NASA Tech Briefs, January 2004
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: Multisensor Instrument for Real-Time Biological Monitoring; Sensor for Monitoring Nanodevice-Fabrication Plasmas; Backed Bending Actuator; Compact Optoelectronic Compass; Micro Sun Sensor for Spacecraft; Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets; Finned-Ladder Slow-Wave Circuit for a TWT; Directional Radio-Frequency Identification Tag Reader; Integrated Solar-Energy-Harvesting and -Storage Device; Event-Driven Random-Access-Windowing CCD Imaging System; Stroboscope Controller for Imaging Helicopter Rotors; Software for Checking State-charts; Program Predicts Broadband Noise from a Turbofan Engine; Protocol for a Delay-Tolerant Data-Communication Network; Software Implements a Space-Mission File-Transfer Protocol; Making Carbon-Nanotube Arrays Using Block Copolymers: Part 2; Modular Rake of Pitot Probes; Preloading To Accelerate Slow-Crack-Growth Testing; Miniature Blimps for Surveillance and Collection of Samples; Hybrid Automotive Engine Using Ethanol-Burning Miller Cycle; Fabricating Blazed Diffraction Gratings by X-Ray Lithography; Freeze-Tolerant Condensers; The StarLight Space Interferometer; Champagne Heat Pump; Controllable Sonar Lenses and Prisms Based on ERFs; Measuring Gravitation Using Polarization Spectroscopy; Serial-Turbo-Trellis-Coded Modulation with Rate-1 Inner Code; Enhanced Software for Scheduling Space-Shuttle Processing; Bayesian-Augmented Identification of Stars in a Narrow View; Spacecraft Orbits for Earth/Mars-Lander Radio Relay; and Self-Inflatable/Self-Rigidizable Reflectarray Antenna.
CMOS Image Sensors: Electronic Camera On A Chip
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.
Image processing operations achievable with the Microchannel Spatial Light Modulator
NASA Astrophysics Data System (ADS)
Warde, C.; Fisher, A. D.; Thackara, J. I.; Weiss, A. M.
1980-01-01
The Microchannel Spatial Light Modulator (MSLM) is a versatile, optically-addressed, highly-sensitive device that is well suited for low-light-level, real-time, optical information processing. It consists of a photocathode, a microchannel plate (MCP), a planar acceleration grid, and an electro-optic plate in proximity focus. A framing rate of 20 Hz with full modulation depth, and 100 Hz with 20% modulation depth has been achieved in a vacuum-demountable LiTaO3 device. A halfwave exposure sensitivity of 2.2 mJ/sq cm and an optical information storage time of more than 2 months have been achieved in a similar gridless LiTaO3 device employing a visible photocathode. Image processing operations such as analog and digital thresholding, real-time image hard clipping, contrast reversal, contrast enhancement, image addition and subtraction, and binary-level logic operations such as AND, OR, XOR, and NOR can be achieved with this device. This collection of achievable image processing characteristics makes the MSLM potentially useful for a number of smart sensor applications.
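The operations the MSLM performs optically can be mimicked digitally for reference; a sketch of hard clipping, contrast reversal, subtraction and the binary-level logic operations on normalized images (the 0.5 threshold is arbitrary, not a device parameter):

```python
import numpy as np

def mslm_ops(a: np.ndarray, b: np.ndarray, threshold: float = 0.5):
    """Digital analogues of MSLM image operations on inputs in [0, 1]."""
    bin_a, bin_b = a >= threshold, b >= threshold   # hard clipping
    return {
        "contrast_reversal": 1.0 - a,
        "subtraction": np.clip(a - b, 0.0, 1.0),
        "AND": bin_a & bin_b,
        "OR": bin_a | bin_b,
        "XOR": bin_a ^ bin_b,
        "NOR": ~(bin_a | bin_b),
    }
```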
NASA Astrophysics Data System (ADS)
Wang, Kai; Ou, Hai; Chen, Jun
2015-06-01
Since its emergence a decade ago, the amorphous silicon flat-panel X-ray detector has established itself as a ubiquitous platform for an array of digital radiography modalities. The fundamental building block of a flat-panel detector is called a pixel. In all current pixel architectures, sensing, storage, and readout are kept separate, inevitably compromising resolution by increasing pixel size. To address this issue, we hereby propose a “smart” pixel architecture where the aforementioned three components are combined in a single dual-gate photo thin-film transistor (TFT). In other words, the dual-gate photo TFT itself functions as a sensor, a storage capacitor, and a switch concurrently. Additionally, by harnessing the amplification effect of such a thin-film transistor, we for the first time created a single-transistor active pixel sensor. The proof-of-concept device had a W/L ratio of 250 μm/20 μm and was fabricated using a simple five-mask photolithography process, where a 130 nm transparent ITO layer was used as the top photo gate, and a 200 nm amorphous silicon layer as the absorbing channel. The preliminary results demonstrated that the photocurrent had been increased by four orders of magnitude due to light-induced threshold voltage shift in the sub-threshold region. The device sensitivity could be simply tuned by the photo gate bias to specifically target low-level light detection. The dependence of threshold voltage on light illumination indicated that a dynamic range of at least 80 dB could be achieved. The “smart” pixel technology holds tremendous promise for developing high-resolution and low-dose X-ray imaging and may potentially lower the cancer risk imposed by radiation, especially among paediatric patients.
Multi-MGy Radiation Hardened Camera for Nuclear Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef
There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storage sites. In particular, for today's and the next generation of nuclear facilities, increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO₂) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits the camera miniaturization and is not compatible with its future integration in remote-handling or robotic systems. The hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We'll present at the conference a comparative study of these radiation-hardened pixel radiation responses with respect to conventional ones, demonstrating the efficiency of the choices made.
The targeted strategy to develop the complete radiation-hard camera electronics will be exposed. Another important element of the camera is the optical system that transports the image from the scene to the image sensor. This arrangement of glass-based lenses is affected by radiation through two mechanisms: radiation-induced absorption and radiation-induced refractive index changes. The first will limit the signal-to-noise ratio of the image, whereas the second will directly affect the resolution of the camera. We'll present at the conference a coupled simulation/experiment study of these effects for various commercial glasses, together with a vulnerability study of typical optical systems at MGy doses. The last very important part of the camera is the illumination system, which can be based on various technologies of emitting devices such as LEDs, SLEDs or lasers. The most promising solutions for high radiation doses will be presented at the conference. In addition to this hardening-by-component approach, the global radiation tolerance of the camera can be drastically improved by working at the system level, combining innovative approaches, e.g. for the optical and illumination systems. We'll present at the conference the developed approach allowing the camera lifetime to be extended up to the MGy dose range. (authors)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-07
... INTERNATIONAL TRADE COMMISSION [Docket No. 2895] Certain CMOS Image Sensors and Products.... International Trade Commission has received a complaint entitled Certain CMOS Image Sensors and Products... importation, and the sale within the United States after importation of certain CMOS image sensors and...
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work on electron-bombarded pixel image sensor technology and recent developments in this field for single-photon-counting imaging, together with examples of some applications. PMID:27136556
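The photon event centroiding mentioned in this abstract can be illustrated with a minimal sketch (an assumption-laden toy, not the authors' implementation): each detected photon spreads charge over a few neighbouring pixels, and its sub-pixel position is recovered as the intensity-weighted centre of mass of that small patch.

```python
import numpy as np

def centroid_events(frame, threshold):
    """Locate single-photon events as intensity-weighted centroids.

    An event is approximated here as a local maximum above `threshold`;
    its sub-pixel position is the centre of mass of the 3x3 patch
    around it (a simplification of real event detection).
    """
    events = []
    for r in range(1, frame.shape[0] - 1):
        for c in range(1, frame.shape[1] - 1):
            patch = frame[r - 1:r + 2, c - 1:c + 2]
            if frame[r, c] >= threshold and frame[r, c] == patch.max():
                w = patch.sum()
                rows, cols = np.mgrid[r - 1:r + 2, c - 1:c + 2]
                events.append(((rows * patch).sum() / w,
                               (cols * patch).sum() / w))
    return events

# synthetic event: charge split asymmetrically over three pixels in row 4
frame = np.zeros((9, 9))
frame[4, 4], frame[4, 5], frame[4, 6] = 20.0, 100.0, 60.0
print(centroid_events(frame, threshold=50))
```

The centroid lands between pixel columns 5 and 6, i.e. at a sub-pixel position the raw pixel grid cannot express, which is exactly the resolution-recovery effect the abstract describes.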
Evaluation and comparison of the IRS-P6 and the landsat sensors
Chander, G.; Coan, M.J.; Scaramuzza, P.L.
2008-01-01
The Indian Remote Sensing Satellite (IRS-P6), also called ResourceSat-1, was launched in a polar sun-synchronous orbit on October 17, 2003. It carries three sensors: the high-resolution Linear Imaging Self-Scanner (LISS-IV), the medium-resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide-Field Sensor (AWiFS). These three sensors provide images of different resolutions and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to images from the Landsat-5 Thematic Mapper (TM) and Landsat-7 Enhanced TM Plus (ETM+) sensors. The approach involves calibration of surface observations based on image statistics from areas observed nearly simultaneously by the two sensors. This paper also evaluated the viability of data from these next-generation imagers for use in creating three National Land Cover Dataset (NLCD) products: land cover, percent tree canopy, and percent impervious surface. Individual products were consistent with previous studies but had slightly lower overall accuracies as compared to data from the Landsat sensors.
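A statistics-based cross-comparison of the kind described here amounts to fitting a linear relation between the two sensors' signals over commonly observed areas. A hedged sketch follows; the radiance values are invented for illustration and are not the paper's data:

```python
import numpy as np

# mean radiances from near-simultaneous image pairs over several
# common target areas (hypothetical values, one pair per area)
awifs_radiance = np.array([42.1, 55.3, 61.0, 78.4, 90.2])
tm_radiance = np.array([40.0, 52.8, 58.1, 74.9, 86.0])

# linear cross-calibration model: awifs = gain * tm + bias
gain, bias = np.polyfit(tm_radiance, awifs_radiance, 1)
residual = awifs_radiance - (gain * tm_radiance + bias)
rms = np.sqrt((residual ** 2).mean())
print(f"gain={gain:.3f} bias={bias:.3f} rms={rms:.3f}")
```

The fitted gain and bias quantify how one sensor's calibration tracks the other's; a small RMS residual indicates the two sensors agree up to a linear transform over the shared scenes.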
Performance test and image correction of CMOS image sensor in radiation environment
NASA Astrophysics Data System (ADS)
Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang
2016-09-01
CMOS image sensors rival CCDs in domains that include strong radiation resistance and simple drive signals, so they are widely applied in high-energy radiation environments, such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of CMOS image sensors suffers from total ionizing dose effects under high-energy rays, and indicators of the image sensor, such as the signal-to-noise ratio (SNR), non-uniformity (NU) and bad points (BP), are degraded by the radiation. The radiation environment for the test experiments was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The experiments used a dose rate of 20 krad/h. In the test experiments, the output signals of the pixels of the image sensor were measured at different total doses. The results of the data analysis showed that with the accumulation of irradiation dose, the SNR of the image sensor decreased, the NU increased, and the number of BPs increased. Correction of these indicators was necessary, as they are the main factors affecting image quality. An image-processing algorithm combining a local threshold method with NU correction based on the non-local means (NLM) method was applied to the experimental data in this work. The results of the image processing showed that the correction can effectively suppress the BPs, improve the SNR, and reduce the NU.
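The bad-pixel correction step can be illustrated with a simple local-threshold scheme: a pixel is flagged when it deviates from its 3×3 neighbourhood median by more than a threshold and replaced by that median. This is a simplified stand-in for the paper's combined local-threshold/NLM method, with an invented threshold value:

```python
import numpy as np

def correct_bad_pixels(img, k=50.0):
    """Flag pixels deviating from their 3x3 neighbourhood median by more
    than `k` digital numbers and replace them with that median."""
    out = img.astype(float).copy()
    padded = np.pad(img.astype(float), 1, mode="edge")
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            med = np.median(padded[r:r + 3, c:c + 3])
            if abs(out[r, c] - med) > k:
                out[r, c] = med
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 4000.0            # radiation-induced hot pixel
fixed = correct_bad_pixels(img)
print(fixed[2, 2])  # the hot pixel is pulled back to the local median
```

An NLM-based correction would additionally average over similar patches elsewhere in the image rather than only the immediate neighbourhood; the local threshold shown here is just the detection half of the pipeline.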
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter, coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor, and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame, and for computing the range to the detail using height, depression angle and/or offset. The computer displays the range to the area and detail thereon covered by the image frame.
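The range computation in such a triangulation scanner follows from simple geometry: with the emitter at a depression angle θ above the reference plane, a surface detail of height h displaces the imaged laser line along the scan direction by h/tan θ. The sketch below uses assumed symbols, not the patent's notation:

```python
import math

def detail_height(line_shift, depression_angle_deg):
    """Height of a surface detail from the observed shift of the laser
    line along the scan direction (triangulation geometry: the line
    shifts by h / tan(theta), so h = shift * tan(theta))."""
    return line_shift * math.tan(math.radians(depression_angle_deg))

# at a 45-degree depression angle, line shift maps 1:1 to height
print(detail_height(2.0, 45.0))
```

Steeper depression angles trade measurement range for height sensitivity: the same physical height produces a smaller line shift, which is one reason the patent parameterizes the computation by height, depression angle and offset.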
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in a number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Travel guidance system for vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takanabe, K.; Yamamoto, M.; Ito, K.
1987-02-24
A travel guidance system is described for vehicles including: a heading sensor for detecting a direction of movement of a vehicle; a distance sensor for detecting a distance traveled by the vehicle; a map data storage medium preliminarily storing map data; a control unit for receiving a heading signal from the heading sensor and a distance signal from the distance sensor to successively compute a present position of the vehicle and for generating video signals corresponding to display data including map data from the map data storage medium and data of the present position; and a display having first and second display portions and responsive to the video signals from the control unit to display on the first display portion a map and a present position mark, in which: the map data storage medium comprises means for preliminarily storing administrative division name data and landmark data; and the control unit comprises: landmark display means for: (1) determining a landmark closest to the present position, (2) causing a position of the landmark to be displayed on the map and (3) retrieving a landmark message concerning the landmark from the storage medium to cause the display to display the landmark message on the second display portion; division name display means for retrieving the name of an administrative division to which the present position belongs from the storage medium and causing the display to display a division name message on the second display portion; and selection means for selectively actuating at least one of the landmark display means and the division name display means.
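The "landmark closest to the present position" step described in the claim reduces to a nearest-neighbour search over stored landmark coordinates. A minimal sketch, with hypothetical landmark names and coordinates:

```python
import math

# hypothetical stored landmark data: name -> (lat, lon)
landmarks = {
    "Station": (35.170, 136.880),
    "Castle":  (35.185, 136.900),
    "Harbor":  (35.090, 136.885),
}

def closest_landmark(position):
    """Return the stored landmark nearest to the present position.

    A flat-earth Euclidean distance is adequate at city scale; a real
    system might use great-circle distance instead."""
    return min(landmarks,
               key=lambda name: math.dist(position, landmarks[name]))

print(closest_landmark((35.180, 136.895)))
```

The retrieved name then keys the landmark message lookup that the claim describes for the second display portion.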
Majdinasab, Marjan; Hosseini, Seyed Mohammad Hashem; Sepidname, Marziyeh; Negahdarifar, Manizheh; Li, Peiwu
2018-05-01
Alginate is a non-toxic, renewable, and linear copolymer obtained from the brown algae Laminaria digitata that can be easily shaped into beads. Its good gel-forming properties have made it useful for entrapping food and pharmaceutical ingredients. In this study, alginate beads were used in a novel application as a colorimetric sensor in food intelligent packaging. The colorimetric sensor was developed by entrapping red cabbage extract, as a pH indicator, in alginate beads. The pH indicator beads were used in rainbow trout packaging for monitoring fillet spoilage. Color change of the beads during fish storage was measured using the CIELab method. The alginate bead colorimetric sensor was validated by measuring total volatile basic nitrogen (TVB-N) levels and microbial populations in fish samples. Moreover, peroxide value (PV) and thiobarbituric acid reactive substances (TBARS) were evaluated during storage. Results indicated that increasing the bacterial population during storage and production of proteolytic enzymes resulted in protein degradation, accumulation of volatile amine compounds, an increase in pH and finally color change of the alginate beads. The values of TVB-N, pH, PV and TBARS increased with time of storage. The results of TVB-N and microbial growth were in accordance with the color change of the beads and the CIELab data. Therefore, the proposed system exhibits high sensitivity to pH variations and is capable of monitoring the spoilage of fish or other protein-rich products through its wide range of color changes. The alginate beads containing the red cabbage extract can, thus, be used as a low-cost colorimetric sensor for intelligent packaging applications.
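Colour change measured "using the CIELab method" is conventionally quantified as the Euclidean distance ΔE*ab (CIE76) between two (L*, a*, b*) readings. A sketch with invented bead-colour values, not the paper's measurements:

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

fresh = (45.0, 30.0, -10.0)    # hypothetical bead colour at day 0
spoiled = (48.0, 10.0, 15.0)   # hypothetical bead colour after storage
print(round(delta_e(fresh, spoiled), 2))  # ≈ 32.16
```

A ΔE*ab this large is far above the commonly cited just-noticeable-difference threshold of roughly 2-3 units, which is what makes such a bead usable as a visual spoilage indicator.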
GaN-based THz advanced quantum cascade lasers for manned and unmanned systems
NASA Astrophysics Data System (ADS)
Anwar, A. F. M.; Manzur, Tariq; Lefebvre, Kevin R.; Carapezza, Edward M.
2009-09-01
In recent years the use of Unmanned Autonomous Vehicles (UAVs) has seen a wider range of applications. However, their applications are restricted due to (a) the need for advanced integrated sensing and processing electronics and (b) limited energy storage or on-board energy generation, to name a few. The availability of a wide variety of sensing elements operating at room temperature provides a great degree of flexibility with an extended application domain. Though sensors responding to a variable spectrum of input excitations, ranging from (a) chemical, (b) biological, (c) atmospheric, (d) magnetic and (e) visual/IR imaging, have been implemented in UAVs, THz technology has not been implemented due to the absence of systems operating at room temperature. The integration of multi-phenomenological onboard sensors on small and miniature unmanned air vehicles will dramatically impact the detection and processing of challenging targets, such as humans carrying weapons or wearing suicide bomb vests. Unmanned air vehicles have the potential of flying over crowds of people and quickly discriminating non-threat humans from threat humans. The state of the art in small and miniature UAVs has progressed to vehicles of less than 1 pound in weight but with payloads of only a fraction of a pound. Uncooled IR sensors, such as amorphous silicon and vanadium oxide microbolometers with MRTs of less than 70 mK and power requirements of less than 250 mW, are available for integration into small UAVs. These sensors are responsive only up to approximately 14 microns and do not compare favorably with THz imaging systems for remotely detecting and classifying concealed weapons and bombs. In the following we propose the use of a THz GaN-based QCL operating at room temperature as a possible alternative.
NASA Tech Briefs, January 2010
NASA Technical Reports Server (NTRS)
2010-01-01
Topics covered include: Cryogenic Flow Sensor; Multi-Sensor Mud Detection; Gas Flow Detection System; Mapping Capacitive Coupling Among Pixels in a Sensor Array; Fiber-Based Laser Transmitter for Oxygen A-Band Spectroscopy and Remote Sensing; Low-Profile, Dual-Wavelength, Dual-Polarized Antenna; Time-Separating Heating and Sensor Functions of Thermistors in Precision Thermal Control Applications; Cellular Reflectarray Antenna; A One-Dimensional Synthetic-Aperture Microwave Radiometer; Electrical Switching of Perovskite Thin-Film Resistors; Two-Dimensional Synthetic-Aperture Radiometer; Ethernet-Enabled Power and Communication Module for Embedded Processors; Electrically Variable Resistive Memory Devices; Improved Attachment in a Hybrid Inflatable Pressure Vessel; Electrostatic Separator for Beneficiation of Lunar Soil; Amorphous Rover; Space-Frame Antenna; Gear-Driven Turnbuckle Actuator; In-Situ Focusing Inside a Thermal Vacuum Chamber; Space-Frame Lunar Lander; Wider-Opening Dewar Flasks for Cryogenic Storage; Silicon Oxycarbide Aerogels for High-Temperature Thermal Insulation; Supercapacitor Electrolyte Solvents with Liquid Range Below -80 C; Designs and Materials for Better Coronagraph Occulting Masks; Fuel-Cell-Powered Vehicle with Hybrid Power Management; Fine-Water-Mist Multiple-Orientation-Discharge Fire Extinguisher; Fuel-Cell Water Separator; Turbulence and the Stabilization Principle; Improved Cloud Condensation Nucleus Spectrometer; Better Modeling of Electrostatic Discharge in an Insulator; Sub-Aperture Interferometers; Terahertz Mapping of Microstructure and Thickness Variations; Multiparallel Three-Dimensional Optical Microscopy; Stabilization of Phase of a Sinusoidal Signal Transmitted Over Optical Fiber; Vacuum-Compatible Wideband White Light and Laser Combiner Source System; Optical Tapers as White-Light WGM Resonators; EPR Imaging at a Few Megahertz Using SQUID Detectors; Reducing Field Distortion in Magnetic Resonance Imaging; Fluorogenic Cell-Based Biosensors for Monitoring Microbes; A Constant-Force Resistive Exercise Unit; GUI to Facilitate Research on Biological Damage from Radiation; On-Demand Urine Analyzer; More-Realistic Digital Modeling of a Human Body; and Advanced Liquid-Cooling Garment Using Highly Thermally Conductive Sheets.
Hempel, Andreas W; O'Sullivan, Maurice G; Papkovsky, Dmitri B; Kerry, Joseph P
2013-05-22
Optical oxygen sensors were used to ascertain the level of oxygen consumed by individual salad leaves for optimised packaging of ready-to-eat (RTE) Italian salad mixes during refrigerated storage. Seven commonly found leaves in Italian salad mixes were individually assessed for oxygen utilisation in packs. Each leaf showed varying levels of respiration throughout storage. Using the information obtained, an experimental salad mix was formulated (termed Mix 3) which consisted of the four slowest-respiring salad leaves: Escarole, Frisee, Red Batavia and Lollo Rosso. Mix 3 was then compared against two commercially available Italian salads: Mix 1 (Escarole, Frisee, Radicchio, Lollo Rosso) and Mix 2 (Cos, Frisee, Radicchio, Lollo Rosso). Optical sensors were used to non-destructively monitor oxygen usage in all mixes throughout storage. In addition to oxygen consumption, all three salad mixes were quality assessed in terms of microbial load and sensorial acceptability. In conclusion, Mix 3 was found to consume the least amount of oxygen over time, had the lowest microbial load and was the most sensorially preferred (p < 0.05) in terms of overall appearance and acceptability. This study clearly shows the potential that oxygen sensors possess in terms of assisting in the optimised development of commercial RTE salad products.
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
ACRONYMS AND ABBREVIATIONS AISA Airborne Imaging Spectrometer for Applications AVIRIS Airborne Visible/Infrared Imaging Spectrometer BIL Band...sensor bracket mount combining Airborne Imaging Spectrometer for Applications ( AISA ) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011...The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm and the AISA Hawk sensor is a SWIR sensor with a wavelength
Nanotechnology: Opportunities and Challenges
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya
2003-01-01
Nanotechnology seeks to exploit novel physical, chemical, biological, mechanical, electrical, and other properties, which arise primarily due to the nanoscale nature of certain materials. A key example is carbon nanotubes (CNTs), which exhibit unique electrical and extraordinary mechanical properties and offer remarkable potential for revolutionary applications in electronic devices, computing, and data storage technology, sensors, composites, nanoelectromechanical systems (NEMS), and as tips in scanning probe microscopy (SPM) for imaging and nanolithography. Thus CNT synthesis, characterization, and applications touch upon all disciplines of science and engineering. This presentation will provide an overview and progress report on this and other major research candidates in nanotechnology and address opportunities and challenges ahead.
Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980
NASA Astrophysics Data System (ADS)
Barbe, D. F.
1980-01-01
Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.
CMOS image sensor-based implantable glucose sensor using glucose-responsive fluorescent hydrogel.
Tokuda, Takashi; Takahashi, Masayuki; Uejima, Kazuhiro; Masuda, Keita; Kawamura, Toshikazu; Ohta, Yasumi; Motoyama, Mayumi; Noda, Toshihiko; Sasagawa, Kiyotaka; Okitsu, Teru; Takeuchi, Shoji; Ohta, Jun
2014-11-01
A CMOS image sensor-based implantable glucose sensor based on an optical-sensing scheme is proposed and experimentally verified. A glucose-responsive fluorescent hydrogel is used as the mediator in the measurement scheme. The wired implantable glucose sensor was realized by integrating a CMOS image sensor, hydrogel, UV light emitting diodes, and an optical filter on a flexible polyimide substrate. Feasibility of the glucose sensor was verified by both in vitro and in vivo experiments.
Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types
NASA Astrophysics Data System (ADS)
Gehrke, S.; Beshah, B. T.
2016-06-01
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate for radiometric differences of various origins, making up for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points with bilinear interpolation for corrections in-between. The distribution of the radiometric fix points is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software.
It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing as well as radiometric adjustment for ortho-image mosaic generation.
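The global least-squares idea behind this kind of normalization, estimating per-image corrections so that radiometric tie points in overlapping images agree, can be sketched in a toy form: one additive brightness offset per image instead of the full hierarchical contrast/brightness model, with invented tie-point values.

```python
import numpy as np

# tie points: (image_i, image_j, value observed in i, value observed in j)
ties = [(0, 1, 120.0, 112.0), (1, 2, 98.0, 104.0), (0, 2, 130.0, 128.0)]
n_images = 3

# solve for offsets o_k minimizing sum((v_i + o_i) - (v_j + o_j))^2;
# o_0 = 0 is fixed as the datum to remove the gauge freedom
A, b = [], []
for i, j, vi, vj in ties:
    row = np.zeros(n_images)
    row[i], row[j] = 1.0, -1.0
    A.append(row)
    b.append(vj - vi)
A.append(np.eye(n_images)[0])  # datum constraint row: o_0 = 0
b.append(0.0)
offsets, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(np.round(offsets, 2))
```

For this consistent toy configuration the recovered offsets are exactly [0, 8, 2]; with inconsistent real tie points the adjustment distributes the residuals in a least-squares sense, which is what makes the mosaic seamless rather than merely pairwise matched.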
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.
1994-05-01
A Picture Archiving and Communications System (PACS) must be able to support the image rate of the medical treatment facility. In addition, the PACS must have adequate working storage and archive storage capacity. The calculation of the number of images per minute and of the capacity of working storage and archive storage is discussed. The calculation takes into account the distribution of images over the different sizes of radiological images, the distribution between inpatients and outpatients, and the distribution over plain-film CR images and other modality images. The support of the indirect clinical image load is difficult to estimate and is considered in some detail. The result of the exercise for a particular hospital is an estimate of the average size of the images and exams on the system, the number of gigabytes of working storage, the number of images moved per minute, the size of the archive in gigabytes, and the number of images to be moved by the archive per minute. The types of storage required to support the image rates and capacities are discussed.
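The sizing exercise is straightforward arithmetic once the exam mix is fixed. A hedged sketch with invented workload numbers (the paper's actual hospital distributions are not reproduced here):

```python
# hypothetical daily exam mix: modality -> (exams/day, images/exam, MB/image)
exam_mix = {
    "CR plain film": (400, 2, 8.0),
    "CT":            (60, 80, 0.5),
    "MR":            (30, 120, 0.13),
    "US":            (50, 20, 0.3),
}

daily_mb = sum(n * imgs * mb for n, imgs, mb in exam_mix.values())
daily_images = sum(n * imgs for n, imgs, _ in exam_mix.values())

print(f"{daily_images} images/day "
      f"({daily_images / (24 * 60):.1f} images/min average)")
print(f"working storage, 7 days online: {7 * daily_mb / 1024:.1f} GB")
print(f"archive growth per year: {365 * daily_mb / 1024:.0f} GB")
```

Note how the few large plain-film CR images dominate the byte budget while the cross-sectional modalities dominate the image count, which is why the paper treats the size distribution and the image-rate distribution separately.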
Active pixel sensor array with multiresolution readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.
Cui, Xiwang; Yan, Yong; Guo, Miao; Han, Xiaojuan; Hu, Yonghui
2016-01-01
Leak localization is essential for the safety and maintenance of storage vessels. This study proposes a novel circular acoustic emission sensor array to realize continuous CO2 leak localization from a circular hole on the surface of a large storage vessel in a carbon capture and storage system. Advantages of the proposed array are analyzed and compared with common sparse arrays. Experiments were carried out on a laboratory-scale stainless steel plate and leak signals were obtained from a circular hole in the center of this flat-surface structure. In order to reduce the influence of ambient noise and dispersion of the acoustic wave on the localization accuracy, ensemble empirical mode decomposition is deployed to extract the useful leak signal. The time differences between the signals from adjacent sensors in the array are calculated through correlation signal processing before estimating the corresponding distance differences between the sensors. A hyperbolic positioning algorithm is used to identify the location of the circular leak hole. Results show that the circular sensor array has very good directivity toward the circular leak hole. Furthermore, an optimized method is proposed by changing the position of the circular sensor array on the flat-surface structure or adding another circular sensor array to identify the direction of the circular leak hole. Experimental results obtained on a 100 cm × 100 cm stainless steel plate demonstrate that the full-scale error in the leak localization is within 0.6%. PMID:27869765
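Hyperbolic positioning from inter-sensor time differences can be sketched with a coarse grid search: each measured time difference of arrival (TDOA) constrains the source to a hyperbola, and the grid point best matching all differences is taken as the leak location. This is a toy stand-in for the paper's algorithm; the geometry and wave speed below are invented.

```python
import itertools
import math

# sensor positions on a 1 m x 1 m plate (metres) and an assumed speed
sensors = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)]
speed = 3000.0               # hypothetical guided-wave speed, m/s
true_leak = (0.55, 0.45)     # location used to synthesize measurements

def toa(src, s):
    """Time of arrival from source `src` to sensor `s`."""
    return math.dist(src, s) / speed

# measured TDOAs between adjacent sensor pairs (synthetic, noise-free)
tdoa = [toa(true_leak, sensors[i]) - toa(true_leak, sensors[(i + 1) % 4])
        for i in range(4)]

# grid search: the point whose predicted TDOAs best fit the measurements
best = min(
    ((x / 100.0, y / 100.0)
     for x, y in itertools.product(range(101), repeat=2)),
    key=lambda p: sum(
        (toa(p, sensors[i]) - toa(p, sensors[(i + 1) % 4]) - tdoa[i]) ** 2
        for i in range(4)),
)
print(best)
```

With noise-free synthetic data the grid search recovers the leak exactly; with real correlated-signal TDOAs the residual surface flattens, and the paper's dispersion handling (ensemble empirical mode decomposition) is what keeps the minimum well defined.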
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
Microwave Sensors for Breast Cancer Detection.
Wang, Lulu
2018-02-23
Breast cancer is the leading cause of death among females; early diagnostic methods with suitable treatments improve the 5-year survival rates significantly. Microwave breast imaging has been reported as having the most potential to become an alternative or additional tool to the current gold standard, X-ray mammography, for detecting breast cancer. The microwave breast image quality is affected by the microwave sensor, the sensor array, the number of sensors in the array and the size of the sensor. In fact, the microwave sensor array and sensor play an important role in the microwave breast imaging system. Numerous microwave biosensors have been developed for biomedical applications, with particular focus on breast tumor detection. Compared to conventional medical imaging and biosensor techniques, these microwave sensors not only enable better cancer detection and improve the image resolution, but also provide attractive features such as label-free detection. This paper aims to provide an overview of recent important achievements in microwave sensors for biomedical imaging applications, with particular focus on breast cancer detection. The electrical properties of biological tissues in the microwave spectrum, microwave imaging approaches, microwave biosensors, current challenges and future works are also discussed in the manuscript.
High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.
Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi
2010-12-15
A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging.
Beam imaging sensor and method for using same
DOE Office of Scientific and Technical Information (OSTI.GOV)
McAninch, Michael D.; Root, Jeffrey J.
The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature. In another embodiment, the beam imaging sensor of the present invention comprises, among other things, a discontinuous partially circumferential slit. Also disclosed is a method for using the various beam sensor embodiments of the present invention.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method for capturing a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measurement of the final light attenuation or absorption of radiation after crossing the measured objects. The number of sensor views affects the results of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device (CCD) linear sensor and a laser diode in an OPT system. Experiments on detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, are evaluated for reconstructing the images. The image reconstruction algorithm used was a filtered linear back-projection algorithm. Comparison of the simulated and experimental image results shows that 320 views give a smaller area error than 160 views. This suggests that a high number of views results in high-resolution image reconstruction.
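Linear back projection of this kind can be sketched for parallel-beam geometry: each projection value is smeared back along its ray and the contributions from all views are summed, and more views reduce the streak artefacts (hence the area error) of the reconstruction. A minimal numpy sketch, not the authors' implementation; grid sizes and view count are assumptions:

```python
import numpy as np

def back_project(sino, angles, n):
    """Unfiltered linear back projection of a parallel-beam sinogram
    onto an n x n grid (nearest-detector-bin interpolation)."""
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    n_det = sino.shape[1]
    recon = np.zeros((n, n))
    for k, th in enumerate(angles):
        t = xs * np.cos(th) + ys * np.sin(th)   # detector coordinate
        bins = np.clip(np.round(t).astype(int) + n_det // 2, 0, n_det - 1)
        recon += sino[k][bins]                  # smear view k back
    return recon / len(angles)

# sinogram of a single point at the grid centre: every view sees the
# point in the central detector bin
n, n_det, n_views = 31, 45, 160
angles = np.linspace(0.0, np.pi, n_views, endpoint=False)
sino = np.zeros((n_views, n_det))
sino[:, n_det // 2] = 1.0

recon = back_project(sino, angles, n)
print(np.unravel_index(recon.argmax(), recon.shape))
```

The point reconstructs at the grid centre with a star-like halo of partial values around it; doubling `n_views` to 320 spreads that halo over more directions and lowers its amplitude, which is the views-versus-area-error trade-off the abstract reports.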
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure distribution imaging sensor is in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate the piezo-capacitive and piezo-electric nanowebs with the conductive fabric sheets for detecting static and dynamic pressure distributions on a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied for reconstructing pressure distribution images from measured current-voltage data on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, piezo-electric nanoweb sensor, and hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.
Full-wave receiver architecture for the homodyne motion sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haugen, Peter C.; Dallum, Gregory E.; Welsh, Patrick A.
A homodyne motion sensor or detector based on ultra-wideband radar utilizes the entire received waveform through implementation of a voltage boosting receiver. The receiver includes a receiver input and a receiver output. A first diode is connected to the receiver output. A first charge storage capacitor is connected from between the first diode and the receiver output to ground. A second charge storage capacitor is connected between the receiver input and the first diode. A second diode is connected from between the second charge storage capacitor and the first diode to ground. The dual diode receiver performs voltage boosting of an RF signal received at the receiver input, thereby enhancing receiver sensitivity.
Full-wave receiver architecture for the homodyne motion sensor
Haugen, Peter C; Dallum, Gregory E; Welsh, Patrick A; Romero, Carlos E
2013-11-19
A homodyne motion sensor or detector based on ultra-wideband radar utilizes the entire received waveform through implementation of a voltage boosting receiver. The receiver includes a receiver input and a receiver output. A first diode is connected to the receiver output. A first charge storage capacitor is connected from between the first diode and the receiver output to ground. A second charge storage capacitor is connected between the receiver input and the first diode. A second diode is connected from between the second charge storage capacitor and the first diode to ground. The dual diode receiver performs voltage boosting of an RF signal received at the receiver input, thereby enhancing receiver sensitivity.
RE-DEFINING THE ROLES OF SENSORS IN OBJECTIVE PHYSICAL ACTIVITY MONITORING
Chen, Kong Y.; Janz, Kathleen F.; Zhu, Weimo; Brychta, Robert J.
2011-01-01
Background As physical activity researchers are increasingly using objective portable devices, this review describes the current state of the technology to assess physical activity, with a focus on specific sensors and sensor properties currently used in monitors and their strengths and weaknesses. Additional sensors and sensor properties desirable for activity measurement and best practices for users and developers are also discussed. Best Practices We grouped current sensors into three broad categories for objectively measuring physical activity: associated body movement, physiology, and context. Desirable sensor properties for measuring physical activity and the importance of these properties in relationship to specific applications are addressed, and the specific roles of transducers and data acquisition systems within the monitoring devices are defined. Technical advancements in sensors, microcomputer processors, memory storage, batteries, wireless communication, and digital filters have made monitors more usable for subjects (smaller, more stable, and longer running time) and for researchers (less costly, higher time resolution and memory storage, shorter download time, and user-defined data features). Future Directions Users and developers of physical activity monitors should learn about the basic properties of their sensors, such as range, accuracy, and precision, while considering the data acquisition/filtering steps that may be critical to data quality and may influence the desirable measurement outcome(s). PMID:22157770
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
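The heavier tail of a Poisson mixture, the effect the paper exploits, can be demonstrated with a short sampling sketch. The component means, weights, and threshold below are illustrative values, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_mixture(means, weights, size):
    """Draw samples from a finite Poisson mixture: each sample first picks
    a component by weight, then draws from that component's Poisson rate."""
    comps = rng.choice(len(means), size=size, p=weights)
    return rng.poisson(np.asarray(means)[comps])

# a rare high-rate component fattens the upper tail
samples = poisson_mixture([5.0, 40.0], [0.95, 0.05], 100_000)
# single Poisson with the same overall mean (0.95*5 + 0.05*40 = 6.75)
single = rng.poisson(6.75, 100_000)
tail_mix = np.mean(samples > 20)
tail_single = np.mean(single > 20)
```

A single Poisson matched to the mean places almost no mass above 20 counts, while the mixture does, mirroring the tail mismatch the quantile analysis reveals.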
Apparatus and method for a light direction sensor
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2011-01-01
The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor, a spacer attached to the image sensor, and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that light passing through the slit pattern casts a diffraction pattern onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments; the beam of light is then diffused onto an image sensor and the direction of the light source is determined.
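The mask/spacer geometry implies that the source direction follows from the lateral shift of the cast pattern and the spacer thickness. A minimal sketch of that triangulation, ignoring diffraction-order details (the function name and units are assumptions, not from the patent):

```python
import math

def light_direction(pattern_shift_um, spacer_um):
    """Recover the incidence angle (degrees) of a distant light source from
    the lateral shift of the mask pattern on the image sensor, given the
    spacer thickness. Pinhole-style geometry: tan(angle) = shift / spacer."""
    return math.degrees(math.atan2(pattern_shift_um, spacer_um))

# a shift equal to the spacer thickness corresponds to 45 degrees
angle = light_direction(100.0, 100.0)
```

Normal incidence (zero shift) maps to zero degrees, so the centroid of the diffraction pattern directly encodes the source direction.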
Study the performance of star sensor influenced by space radiation damage of image sensor
NASA Astrophysics Data System (ADS)
Feng, Jie; Li, Yudong; Wen, Lin; Guo, Qi; Zhang, Xingyao
2018-03-01
The star sensor is an essential component of spacecraft attitude control systems. Space radiation can degrade star sensor performance, cause abnormal operation, and reduce attitude measurement accuracy and reliability. Many studies have already been dedicated to radiation effects on Charge-Coupled Device (CCD) image sensors, but fewer studies focus on radiation effects on the star sensor itself. The innovation of this paper is to study radiation effects from the device level to the system level. The influence of the degradation of the CCD image sensor's radiation-sensitive parameters on the performance parameters of the star sensor is studied in this paper. The correlation among the radiation effect of protons, the non-uniformity noise of the CCD image sensor, and the performance parameters of the star sensor is analyzed. This paper establishes a foundation for the study of error prediction and correction techniques for on-orbit star sensor attitude measurement, and provides a theoretical basis for the design of high-performance star sensors.
Development of a UAV system for VNIR-TIR acquisitions in precision agriculture
NASA Astrophysics Data System (ADS)
Misopolinos, L.; Zalidis, Ch.; Liakopoulos, V.; Stavridou, D.; Katsigiannis, P.; Alexandridis, T. K.; Zalidis, G.
2015-06-01
Adoption of precision agriculture techniques requires the development of specialized tools that provide spatially distributed information. Both flying platforms and airborne sensors are continuously evolving to cover the needs of plant and soil sensing at affordable costs. Due to payload restrictions, flying platforms are usually limited to carrying a single sensor on board. The aim of this work is to present the development of a vertical take-off and landing autonomous unmanned aerial vehicle (VTOL UAV) system for the simultaneous acquisition of high-resolution vertical images at visible, near-infrared (VNIR) and thermal infrared (TIR) wavelengths. A system was developed that can trigger two cameras simultaneously in a fully automated process with no pilot intervention. A commercial unmanned hexacopter UAV platform was optimized to increase reliability, ease of operation and automation. The designed system's communication platform is based on a reduced instruction set computing (RISC) processor running Linux OS with custom-developed drivers, keeping cost and weight to a minimum. Special software was also developed for automated image capture, data processing and on-board data and metadata storage. The system was tested over a kiwifruit field in northern Greece, at flying heights of 70 and 100 m above the ground. The acquired images were mosaicked and geo-corrected. Images from both flying heights were of good quality and revealed unprecedented detail within the field. The normalized difference vegetation index (NDVI) was calculated along with the thermal image in order to provide information on the accurate location of stressors and other parameters related to crop productivity. Compared to other available sources of data, this system can provide low-cost, high-resolution and easily repeatable information to cover the requirements of precision agriculture.
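The NDVI computation mentioned in the abstract above is a standard band ratio of the NIR and red channels. A minimal sketch over co-registered arrays (band values and the epsilon guard are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index from co-registered NIR and
    red reflectance bands; eps guards against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance patches: healthy canopy (high NIR) vs. bare soil
nir = np.array([[0.6, 0.5], [0.4, 0.1]])
red = np.array([[0.1, 0.1], [0.2, 0.1]])
v = ndvi(nir, red)
```

Values near +1 indicate dense, healthy vegetation; values near 0 indicate soil or stressed crop, which is how stressor locations show up in the mosaicked map.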
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors
Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.
2016-01-01
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
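The peak separation and width (PSW) idea above reduces to a simple ratio: adjacent single-photon peaks in the photon counting histogram are separated by the conversion gain, and the input-referred read noise is the peak width divided by that gain. A minimal sketch with illustrative numbers (function name and units are assumptions):

```python
import numpy as np

def read_noise_from_pch(peak_centers, peak_sigma):
    """Estimate input-referred read noise (e- rms) from a photon counting
    histogram: conversion gain (DN per electron) is the mean separation of
    adjacent single-photon peaks; read noise is peak width / gain."""
    gain = np.mean(np.diff(peak_centers))  # DN per electron
    return peak_sigma / gain

# peaks every 4 DN with 1 DN width -> 0.25 e- rms, deep sub-electron regime
noise_e = read_noise_from_pch([10.0, 14.0, 18.0, 22.0], 1.0)
```

Read noise well below 0.3 e- rms is what lets individual photon peaks remain resolvable in the histogram, enabling single photon counting.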
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensors' platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies to solve this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology in image-motion prediction, focusing on algorithm optimization for image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a Back Propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Barié, Nicole; Bücking, Mark; Stahl, Ullrich; Rapp, Michael
2015-06-01
The use of polymer-coated surface acoustic wave (SAW) sensor arrays is a very promising technique for highly sensitive and selective detection of volatile organic compounds (VOCs). We present new developments to achieve a low-cost sensor setup with a sampling method enabling highly reproducible detection of volatiles even in the ppb range. Since the VOCs of coffee are well known from gas chromatography (GC) research studies, the new sensor array was tested on an easily assessable objective: coffee ageing during storage. As a reference method, these changes were traced with a standard GC/FID setup, accompanied by sensory panellists. The evaluation of GC data showed a non-linear characteristic for single compound concentrations as well as for total peak area values, precluding prediction of the coffee's age. In contrast, the new SAW sensor array demonstrates a linear dependency, i.e., it is capable of showing a dependency between volatile concentration and storage time. Copyright © 2014 Elsevier Ltd. All rights reserved.
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
Spaceborne imaging radar research in the 90's
NASA Technical Reports Server (NTRS)
Elachi, Charles
1986-01-01
The imaging radar experiments on SEASAT and on the space shuttle (SIR-A and SIR-B) have led to wide interest in the use of spaceborne imaging radars in Earth and planetary sciences. The radar sensors provide unique and complementary information to what is acquired with visible and infrared imagers. This includes subsurface imaging in arid regions, all-weather observation of ocean surface dynamic phenomena, structural mapping, soil moisture mapping, stereo imaging and resulting topographic mapping. However, experiments up to now have exploited only a very limited range of the generic capability of radar sensors. With planned sensor developments in the late 80's and early 90's, a quantum jump will be made in our ability to fully exploit the potential of these sensors. These developments include: multiparameter research sensors such as SIR-C and X-SAR; long-term and global monitoring sensors such as ERS-1, JERS-1, EOS, Radarsat, GLORI and the spaceborne sounder; planetary mapping sensors such as the Magellan and Cassini/Titan mappers; topographic three-dimensional imagers such as the scanning radar altimeter; and three-dimensional rain mapping. These sensors and their associated research are briefly described.
Star centroiding error compensation for intensified star sensors.
Jiang, Jie; Xiong, Kun; Yu, Wenbo; Yan, Jinyun; Zhang, Guangjun
2016-12-26
A star sensor provides high-precision attitude information by capturing a stellar image; however, the traditional star sensor has poor dynamic performance, which is attributed to its low sensitivity. In the intensified star sensor, an image intensifier is utilized to improve the sensitivity, thereby further improving the dynamic performance of the star sensor. However, the introduction of the image intensifier decreases star centroiding accuracy, further influencing the attitude measurement precision of the star sensor. A star centroiding error compensation method for intensified star sensors is proposed in this paper to reduce these influences. First, the imaging model of the intensified detector, which includes the deformation parameter of the optical fiber panel, is established based on orthographic projection through analysis of the errors introduced by the image intensifier. Thereafter, the position errors at the target points based on the model are obtained by using the Levenberg-Marquardt (LM) optimization method. Finally, the nearest trigonometric interpolation method is presented to compensate for the arbitrary centroiding error of the image plane. Laboratory calibration results and night sky experiment results show that the compensation method effectively eliminates the error introduced by the image intensifier, thus remarkably improving the precision of intensified star sensors.
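The quantity whose error the compensation method corrects is the star spot centroid, conventionally computed as an intensity-weighted center of mass. A minimal sketch (the function name and toy spot are illustrative, not from the paper):

```python
import numpy as np

def star_centroid(window):
    """Intensity-weighted centroid (center of mass) of a star spot window,
    returning sub-pixel (x, y) coordinates."""
    window = window.astype(float)
    ys, xs = np.indices(window.shape)
    total = window.sum()
    return (xs * window).sum() / total, (ys * window).sum() / total

# toy spot: energy split evenly between two adjacent pixels in row 2
spot = np.zeros((5, 5))
spot[2, 2] = 4.0
spot[2, 3] = 4.0
cx, cy = star_centroid(spot)
```

Sub-pixel accuracy of this centroid is what the fiber-panel deformation of the intensifier perturbs, and what the interpolation-based compensation restores.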
McAninch, Michael D.; Root, Jeffrey J.
2016-07-05
The present invention relates generally to the field of sensors for beam imaging and, in particular, to a new and useful beam imaging sensor for use in determining, for example, the power density distribution of a beam including, but not limited to, an electron beam or an ion beam. In one embodiment, the beam imaging sensor of the present invention comprises, among other items, a circumferential slit that is either circular, elliptical or polygonal in nature.
Development of a novel omnidirectional magnetostrictive transducer for plate applications
NASA Astrophysics Data System (ADS)
Vinogradov, Sergey; Cobb, Adam; Bartlett, Jonathan; Udagawa, Youichi
2018-04-01
The application of guided waves for the testing of plate-type structures has recently been investigated by a number of research groups due to the ability of guided waves to detect corrosion in remote and hidden areas. Guided wave sensors for plate applications can be either directed (i.e., the waves propagate in a single direction) or omnidirectional. Each type has certain advantages and disadvantages. Omnidirectional sensors can inspect large areas from a single location, but it is challenging to define where a feature is located. Conversely, directed sensors can be used to precisely locate an indication, but have no sensitivity to flaws away from the wave propagation direction. This work describes a newly developed sensor that combines the strengths of both sensor types to create a novel omnidirectional transducer. The sensor transduction is based on a custom magnetostrictive transducer (MsT). In this new probe design, a directed, plate-application MsT with known characteristics was incorporated into an automated scanner. This scanner rotates the directed MsT for data collection at regular intervals. Coupling of the transducer to the plate is accomplished using a shear wave couplant. The array of received data is used for compiling B-scans and imaging, utilizing a synthetic aperture focusing technique (SAFT) algorithm. The performance of the probe was evaluated on a 0.5-inch-thick carbon steel plate mockup with a surface area of over 100 square feet. The mockup had a variety of known anomalies representing localized and distributed pitting corrosion, gradual wall thinning, and notches of different depths. Experimental data was also acquired using the new probe on a retired storage tank with known corrosion damage. The performance of the new sensor and its limitations are discussed together with general directions in technology development.
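At its core, SAFT imaging is delay-and-sum: each image point accumulates the A-scan samples at the round-trip travel time from each sensor position. A one-dimensional sketch under assumed units (mm, microseconds) and an assumed guided wave speed; none of these numbers come from the paper:

```python
import numpy as np

def saft_image(ascans, positions, grid, c=3.2, dt=1.0):
    """Delay-and-sum synthetic aperture focusing: for each image point,
    sum every A-scan's sample at the round-trip travel time from that
    sensor position. c is the wave speed (mm/us), dt the sample interval (us)."""
    img = np.zeros(len(grid))
    for ascan, pos in zip(ascans, positions):
        for i, pt in enumerate(grid):
            k = int(round(2.0 * abs(pt - pos) / (c * dt)))  # delay in samples
            if k < len(ascan):
                img[i] += ascan[k]
    return img

# single sensor at 0 mm, echo at sample 10 -> reflector at 16 mm (c = 3.2 mm/us)
ascan = np.zeros(32)
ascan[10] = 1.0
img = saft_image([ascan], [0.0], [8.0, 16.0])
```

With many rotated acquisitions, echoes reinforce coherently only at the true flaw location, which is how the rotating directed MsT data yields a focused omnidirectional image.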
Multiplexed image storage by electromagnetically induced transparency in a solid
NASA Astrophysics Data System (ADS)
Heinze, G.; Rentzsch, N.; Halfmann, T.
2012-11-01
We report on frequency- and angle-multiplexed image storage by electromagnetically induced transparency (EIT) in a Pr3+:Y2SiO5 crystal. Frequency multiplexing by EIT relies on simultaneous storage of light pulses in atomic coherences, driven in different frequency ensembles of the inhomogeneously broadened solid medium. Angular multiplexing by EIT relies on phase matching of the driving laser beams, which permits simultaneous storage of light pulses propagating under different angles into the crystal. We apply the multiplexing techniques to increase the storage capacity of the EIT-driven optical memory, in particular to implement multiplexed storage of larger two-dimensional amounts of data (images). We demonstrate selective storage and readout of images by frequency-multiplexed EIT and angular-multiplexed EIT, as well as the potential to combine both multiplexing approaches towards further enhanced storage capacities.
Multiphase imaging of gas flow in a nanoporous material using remote-detection NMR
NASA Astrophysics Data System (ADS)
Harel, Elad; Granwehr, Josef; Seeley, Juliette A.; Pines, Alex
2006-04-01
Pore structure and connectivity determine how microstructured materials perform in applications such as catalysis, fluid storage and transport, filtering or as reactors. We report a model study on silica aerogel using a time-of-flight magnetic resonance imaging technique to characterize the flow field and explain the effects of heterogeneities in the pore structure on gas flow and dispersion with 129Xe as the gas-phase sensor. The observed chemical shift allows the separate visualization of unrestricted xenon and xenon confined in the pores of the aerogel. The asymmetrical nature of the dispersion pattern alludes to the existence of a stationary and a flow regime in the aerogel. An exchange time constant is determined to characterize the gas transfer between them. As a general methodology, this technique provides insights into the dynamics of flow in porous media where several phases or chemical species may be present.
A 2D range Hausdorff approach to 3D facial recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin
2004-11-01
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
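The O(N) idea can be sketched as follows: once probe and template are aligned and resampled on the same pixel grid, each probe pixel's nearest template point can be approximated by the corresponding pixel, so no pairwise search is needed. The quantile-for-max substitution below is a common robustness device and a simplification, not necessarily the paper's exact formulation:

```python
import numpy as np

def range_hausdorff(probe, template, quantile=0.9):
    """2D-range approximation of the directed Hausdorff distance between
    two pre-aligned range images on a common grid: per-pixel absolute range
    differences, with a quantile replacing the max for outlier robustness.
    NaN marks missing data (e.g. self-occlusion) and is ignored."""
    valid = ~np.isnan(probe) & ~np.isnan(template)
    diffs = np.abs(probe[valid] - template[valid])
    return np.quantile(diffs, quantile)

# toy 2x2 range images; one probe pixel is missing (NaN)
a = np.array([[1.0, 2.0], [3.0, np.nan]])
b = np.array([[1.0, 2.5], [3.0, 4.0]])
d = range_hausdorff(a, b, quantile=1.0)  # quantile=1.0 recovers the max
```

Setting the quantile below 1.0 discounts the worst-matching pixels, which is what gives Hausdorff-style matching its tolerance to sensor noise and occlusion.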
Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.
Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun
2010-12-29
In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.
Data Access Based on a Guide Map of the Underwater Wireless Sensor Network
Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Cheng, Albert M. K.
2017-01-01
Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and query of UWSNs are always challenging issues. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describes the abstracts of data content and the storage location. The center ring is composed of nodes according to the shortest average data query path in the network in order to store the metadata, and the data guide map organizes, diffuses and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. For this method, firstly the data is stored in the UWSN. The storage node is determined, the data is transmitted from the sensor node (data generation source) to the storage node, and the metadata is generated for it. Then, the metadata is sent to the center ring node that is the nearest to the storage node and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when there is query data in any user node, the data guide map will select a center ring node nearest to the user to process the query sentence, and based on the shortest transmission delay and lowest energy consumption, data transmission routing is generated according to the storage location abstract in the metadata. Hence, specific application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption. PMID:29039757
Data Access Based on a Guide Map of the Underwater Wireless Sensor Network.
Wei, Zhengxian; Song, Min; Yin, Guisheng; Song, Houbing; Wang, Hongbin; Ma, Xuefei; Cheng, Albert M K
2017-10-17
Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and query of UWSNs are always challenging issues. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describes the abstracts of data content and the storage location. The center ring is composed of nodes according to the shortest average data query path in the network in order to store the metadata, and the data guide map organizes, diffuses and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. For this method, firstly the data is stored in the UWSN. The storage node is determined, the data is transmitted from the sensor node (data generation source) to the storage node, and the metadata is generated for it. Then, the metadata is sent to the center ring node that is the nearest to the storage node and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when there is query data in any user node, the data guide map will select a center ring node nearest to the user to process the query sentence, and based on the shortest transmission delay and lowest energy consumption, data transmission routing is generated according to the storage location abstract in the metadata. Hence, specific application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption.
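Two steps of the DAGM query path described above can be sketched in a few lines: pick the center-ring node nearest the user, then resolve the storage node from the metadata abstract. The catalog contents, node names, and coordinates are all hypothetical:

```python
import math

# hypothetical data guide map catalog: data id -> (content abstract, storage node)
guide_map = {
    "temp-2017-06": ("temperature series", "node7"),
    "salinity-09": ("salinity profile", "node3"),
}

def nearest_center_node(center_ring, user_pos):
    """Choose the center-ring node closest to the querying user, minimizing
    query latency; center_ring is a list of (node id, (x, y)) pairs."""
    return min(center_ring, key=lambda n: math.dist(n[1], user_pos))[0]

def resolve(query_id):
    """Look up the storage node holding the requested data via its metadata."""
    return guide_map[query_id][1]

ring = [("c1", (0.0, 0.0)), ("c2", (10.0, 0.0))]
serving = nearest_center_node(ring, (8.0, 1.0))
```

Because every center-ring node holds a synchronized copy of the metadata, any of them can resolve the query; choosing the nearest one is what saves transmission delay and energy.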
A 100 Mfps image sensor for biological applications
NASA Astrophysics Data System (ADS)
Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo
2018-02-01
Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for application to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like the petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronical devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuits containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: Single-chip cameras with on- chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. 
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) composed of infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. To meet these requirements, sensors will require advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and line-of-sight stabilization of targeting systems. This paper addresses descriptions of the multi-sensor payloads tested, testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
Multiple products management system with sensors array in automated storage and retrieval systems
NASA Astrophysics Data System (ADS)
Vongbunyong, Supachai; Roengritronnachai, Perawat; Kongsanit, Savanut; Chanok-owat, Chawisa; Polchankajorn, Pongsakorn
2018-01-01
Automated Storage and Retrieval Systems (AS/RS) are now widely used in a number of industries due to their capability to automatically manage the storage of products in effective ways. One of the key features of AS/RS is that each rack is not assigned to a specific product, which benefits space utilization and related logistics. In this research, sensor arrays are installed at each rack in order to enhance this feature. As a result, various products can be identified and mixed in each rack, so that space utilization efficiency can be increased. To prove the concept, a prototype system was built, consisting of a Cartesian robot that manages the storage and retrieval of products with 9 variations based on size and color. The concepts of Cyber-Physical Systems and system self-awareness are also implemented in this concept prototype.
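The rack-level bookkeeping such a sensor array enables can be sketched as follows (a hypothetical simplification; the slot indexing and the (size, color) identification are illustrative, not the authors' implementation):

```python
class Rack:
    """Track a rack whose slots may hold mixed product variants,
    each identified by its sensed (size, color) pair."""
    def __init__(self, slots):
        self.slots = [None] * slots          # None = empty slot

    def store(self, slot, size, color):
        if self.slots[slot] is not None:
            raise ValueError("slot occupied")
        self.slots[slot] = (size, color)

    def retrieve(self, size, color):
        """Return the index of a slot holding the requested variant
        and free it, or None if the variant is not in this rack."""
        for i, item in enumerate(self.slots):
            if item == (size, color):
                self.slots[i] = None
                return i
        return None
```

Because each slot's contents are sensed rather than pre-assigned, any variant can be stored anywhere and still be retrieved on demand.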
CSRQ: Communication-Efficient Secure Range Queries in Two-Tiered Sensor Networks
Dai, Hua; Ye, Qingqun; Yang, Geng; Xu, Jia; He, Ruiliang
2016-01-01
In recent years, we have seen many applications of secure queries in two-tiered wireless sensor networks. Storage nodes are responsible for storing data from nearby sensor nodes and answering queries from the Sink. It is critical to protect data security from a compromised storage node. In this paper, the Communication-efficient Secure Range Query (CSRQ), a privacy- and integrity-preserving range query protocol, is proposed to prevent attackers from gaining information about both the data collected by sensor nodes and the queries issued by the Sink. To preserve privacy and integrity, in addition to employing encoding mechanisms, a novel data structure called the encrypted constraint chain is proposed, which embeds the information needed for integrity verification. The Sink can use this encrypted constraint chain to verify the query result. The performance evaluation shows that CSRQ has lower communication cost than current range query protocols. PMID:26907293
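The role of a constraint chain in integrity verification can be illustrated with a toy sketch. Here HMAC tags stand in for the paper's encoding, and the function names and tag construction are illustrative assumptions, not CSRQ itself: each link binds a value to its sorted successor, so a storage node that silently drops an in-range item is caught by the Sink.

```python
import hmac, hashlib

def build_constraint_chain(values, key):
    """Sensor side: chain each sorted value to its successor with a MAC."""
    vals = sorted(values)
    chain = []
    for i, v in enumerate(vals):
        nxt = vals[i + 1] if i + 1 < len(vals) else None
        tag = hmac.new(key, f"{v}|{nxt}".encode(), hashlib.sha256).hexdigest()
        chain.append({"value": v, "next": nxt, "tag": tag})
    return chain

def verify_range_result(result, lo, hi, key):
    """Sink side: every link must carry a valid tag, and each in-range
    successor named by a link must itself be present in the result."""
    present = {link["value"] for link in result}
    for link in result:
        expect = hmac.new(key, f'{link["value"]}|{link["next"]}'.encode(),
                          hashlib.sha256).hexdigest()
        if expect != link["tag"]:
            return False          # tampered link
        nxt = link["next"]
        if nxt is not None and lo <= nxt <= hi and nxt not in present:
            return False          # an in-range item was dropped
    return True
```

Note this sketch only detects omissions after the first returned item; the real protocol also handles range boundaries and encrypts the values themselves.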
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Maksudul M.; Sampathkumaran, Uma
The present invention relates to a modular chemiresistive sensor, in particular a modular chemiresistive sensor for hypergolic fuel and oxidizer leak detection, carbon dioxide monitoring, and detection of disease biomarkers. The sensor preferably has two gold or platinum electrodes mounted on a silicon substrate, where the electrodes are connected to a power source and are separated by a gap of 0.5 to 4.0 µm. A polymer nanowire or carbon nanotube spans the gap between the electrodes and connects them electrically. The electrodes are further connected to a circuit board having a processor and data storage, where the processor can measure current and voltage values between the electrodes and compare them with values stored in the data storage and assigned to particular concentrations of a pre-determined substance such as those listed above or a variety of other substances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, S.; Gibson, J.
1995-02-01
In 1992, a sinkhole was discovered above a Strategic Petroleum Reserve storage facility at Weeks Island, Louisiana. The oil is stored in an old salt mine located within a salt dome. In order to assess the hydrologic significance of the sinkhole, an In Situ Permeable Flow Sensor was deployed within a sand-filled conduit in the salt dome directly beneath the sinkhole. The flow sensor is a recently developed instrument which uses a thermal perturbation technique to measure the magnitude and direction of the full 3-dimensional groundwater flow velocity vector in saturated, permeable materials. The flow sensor measured substantial groundwater flow directed vertically downward into the salt dome. The data obtained with the flow sensor provided critical evidence which was instrumental in assessing the significance of the sinkhole in terms of the integrity of the oil storage facility.
NASA Astrophysics Data System (ADS)
Hayami, Hajime; Takehara, Hiroaki; Nagata, Kengo; Haruta, Makito; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Ohta, Jun
2016-04-01
Intrabody communication technology allows the fabrication of more compact implantable biomedical sensors than RF wireless technology. In this paper, we report the fabrication of an implantable image sensor of 625 µm width and 830 µm length and the demonstration of wireless image-data transmission through the brain tissue of a living mouse. The sensor was designed to transmit output signals of pixel values by pulse width modulation (PWM). The PWM signals transmitted from the sensor through the brain tissue were detected by a receiver electrode. Wireless data transmission of a two-dimensional image was successfully demonstrated in a living mouse brain. The technique reported here is expected to provide useful methods of data transmission for micro-sized implantable biomedical sensors.
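Pulse width modulation of pixel values can be sketched in a few lines: each pixel occupies one fixed period of the stream, held high for a number of samples proportional to its value. This is an illustrative software model, not the sensor's circuit.

```python
def pwm_encode(pixels, period=256):
    """Encode 8-bit pixel values as a PWM bitstream:
    one fixed period per pixel, high for `value` samples."""
    stream = []
    for v in pixels:
        stream.extend([1] * v + [0] * (period - v))
    return stream

def pwm_decode(stream, period=256):
    """Receiver side: recover each pixel by counting high samples
    within its period."""
    return [sum(stream[i:i + period]) for i in range(0, len(stream), period)]
```

The decoder only needs the period and a threshold detector, which is what makes PWM attractive for a receiver electrode picking the signal up through tissue.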
Energy Options for Wireless Sensor Nodes.
Knight, Chris; Davidson, Joshua; Behrens, Sam
2008-12-08
Reduction in size and power consumption of consumer electronics has opened up many opportunities for low power wireless sensor networks. One of the major challenges is in supporting battery-operated devices as the number of nodes in a network grows. The two main alternatives are to utilize higher energy density sources of stored energy, or to generate power at the node from local forms of energy. This paper reviews the state-of-the-art technology in the field of both energy storage and energy harvesting for sensor nodes. The options discussed for energy storage include batteries, capacitors, fuel cells, heat engines and betavoltaic systems. The field of energy harvesting is discussed with reference to photovoltaics, temperature gradients, fluid flow, pressure variations and vibration harvesting.
Sensors and devices containing ultra-small nanowire arrays
Xiao, Zhili
2014-09-23
A network of nanowires may be used for a sensor. The nanowires are metallic, each nanowire has a thickness of at most 20 nm, and each nanowire has a width of at most 20 nm. The sensor may include nanowires comprising Pd, and the sensor may sense a change in hydrogen concentration from 0 to 100%. A device may include the hydrogen sensor, such as a vehicle, a fuel cell, a hydrogen storage tank, a facility for manufacturing steel, or a facility for refining petroleum products.
Imaging standards for smart cards
NASA Astrophysics Data System (ADS)
Ellson, Richard N.; Ray, Lawrence A.
1996-02-01
"Smart cards" are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage has been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper will review imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper will conclude with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.
Chander, G.; Scaramuzza, P.L.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on Oct 17, 2003. It carries three remote sensing sensors: the High Resolution Linear Imaging Self-Scanner (LISS-IV), the Medium Resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolution and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved the calibration of nearly simultaneous surface observations based on image statistics from areas observed simultaneously by the two sensors.
A Review of Significant Advances in Neutron Imaging from Conception to the Present
NASA Astrophysics Data System (ADS)
Brenizer, J. S.
This review summarizes the history of neutron imaging with a focus on the significant events and technical advancements in neutron imaging methods, from the first radiograph to more recent imaging methods. A timeline is presented to illustrate the key accomplishments that advanced the neutron imaging technique. Only three years after the discovery of the neutron by English physicist James Chadwick in 1932, neutron imaging began with the work of Hartmut Kallmann and Ernst Kuhn in Berlin, Germany, from 1935-1944. Kallmann and Kuhn were awarded a joint US patent issued in January 1940. Little progress was made until the mid-1950s, when Thewlis utilized a neutron beam from the BEPO reactor at Harwell, marking the beginning of the application of neutron imaging to practical problems. As the film method was improved, imaging moved from a qualitative to a quantitative technique, with applications in industry and in nuclear fuels. Standards were developed to aid in the quantification of the neutron images and of a facility's capabilities. The introduction of dynamic neutron imaging (initially called real-time neutron radiography and neutron television) in the late 1970s opened the door to new opportunities and new challenges. As electronic imaging matured, the introduction of CCD imaging devices and solid-state light intensifiers helped address some of these challenges. The development of improved imaging devices for the medical community has had a major impact on neutron imaging. Additionally, amorphous silicon sensors provided improvements in temporal resolution while providing a reasonably large imaging area. The development of new neutron imaging sensors and of new neutron imaging techniques in the past decade has advanced the technique's ability to provide insight into and understanding of problems that other non-destructive techniques could not address.
This rapid increase in capability and application would not have been possible without the advances in computer processing speed and increased memory storage. For example, images with enhanced contrast are created by using reflection, refraction, diffraction, and ultra-small-angle scattering interactions. It is somewhat ironic that, like the first neutron images, the technique remains limited by the availability of high-intensity neutron sources, in both facility cost and portability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nygaard, Runar; Xiao, Hai; He, Xiaoming
Energy generation from fossil fuels produces large volumes of CO2 and other greenhouse gases, whose accumulation in the atmosphere is widely seen as undesirable. CO2 capture followed by sequestration has been identified as a solution. Subsurface geologic formations offer a potential location for long-term storage of CO2 because of their requisite size. Unfortunately, the inaccessibility and complexity of the subsurface, the wide range of scales of variability, and the coupled nonlinear processes impose tremendous challenges in determining the transport and predicting the fate of the stored CO2. Among the various monitoring approaches, in situ down-hole monitoring of the various state parameters provides critical and direct data points that can be used to validate models, optimize the injection, detect leakage, and track the CO2 plume. However, down-hole sensors that can withstand the harsh conditions and operate over the decades of a project lifecycle remain unavailable. Given that widespread carbon capture and storage will be a necessity in the future, fundamental and applied research is required to address the significant challenges and technological gaps left by the lack of long-term reliable down-hole sensors. This project focused on the development and demonstration of a novel, low-cost, distributed, robust ceramic coaxial cable sensor platform for in situ down-hole monitoring of geologic CO2 injection and storage with high spatial and temporal resolutions. The coaxial cable Fabry-Perot interferometer (CCFPI) has been studied as a general sensor platform for in situ, long-term measurement of temperature, pressure, and strain, which are critical to CO2 injection and storage. A novel signal processing scheme has been developed and demonstrated for dense multiplexing of the sensors, enabling low-cost distributed sensing with high spatial resolution.
The developed temperature, pressure, and strain sensors have been extensively tested under laboratory conditions similar to the downhole CO2 storage environment, showing excellent capability for in situ monitoring of the various parameters that are important to model and optimize the injection, detect leakage, and track the CO2 plume. In addition, the interactions between the sensor data and the geological models have been investigated in detail for the purposes of model validation, guiding sensor installation and placement, enhancing model prediction capability, and optimizing the injection processes. This project has resulted in the successful development of new ceramic coaxial cable based sensor systems that can directly monitor the changes in pressure, temperature, and strain caused by increased reservoir pressure and reduced reservoir temperature due to supercritical CO2 injection. Integrated with geological models, the sensors and measurement data can improve the ability to identify plume movement and leakage in the cap rock and wells with higher precision and accuracy. The low cost, ease of deployment, small size, and dense multiplexing features of the new sensing technology will allow a large number of sensors to be deployed to address the objective of demonstrating that 99% of the CO2 remains in the injection zone.
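The basic Fabry-Perot relation behind such an interferometric cable sensor is that the free spectral range (FSR) of the interference fringes fixes the cavity length, so a measured fringe-spacing shift maps to a length change and hence to strain or temperature. A sketch under idealized assumptions (lossless cavity, effective index n; the function names are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def cavity_length(fsr_hz, n_eff=1.0):
    """Cavity length from free spectral range: L = c / (2 * n * FSR)."""
    return C / (2.0 * n_eff * fsr_hz)

def strain_from_fsr(fsr_before, fsr_after, n_eff=1.0):
    """Fractional length change (strain) inferred from an FSR shift."""
    l0 = cavity_length(fsr_before, n_eff)
    l1 = cavity_length(fsr_after, n_eff)
    return (l1 - l0) / l0
```

Because FSR is inversely proportional to length, a longer cavity produces more closely spaced fringes; interrogating many cavities along one cable is what enables the dense multiplexing described above.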
Proceedings of the Augmented VIsual Display (AVID) Research Workshop
NASA Technical Reports Server (NTRS)
Kaiser, Mary K. (Editor); Sweet, Barbara T. (Editor)
1993-01-01
The papers, abstracts, and presentations collected here were presented at a three-day workshop focused on sensor modeling and simulation, and on image enhancement, processing, and fusion. The technical sessions emphasized how sensor technology can be used to create visual imagery adequate for aircraft control and operations. Participants from industry, government, and academic laboratories contributed to panels on Sensor Systems, Sensor Modeling, Sensor Fusion, Image Processing (Computer and Human Vision), and Image Evaluation and Metrics.
2015-11-05
AFRL-AFOSR-VA-TR-2015-0359. Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging. Viktor Gruev. Reporting period: 02/15/2011 - 08/15/2015. Abstract (fragment): "...investigate alternative spectral imaging architectures based on my previous experience in this research area. I will develop nanowire polarization..."
Simulation of the hyperspectral data from multispectral data using Python programming language
NASA Astrophysics Data System (ADS)
Tiwari, Varun; Kumar, Vinay; Pandey, Kamal; Ranade, Rigved; Agarwal, Shefali
2016-04-01
Multispectral remote sensing (MRS) sensors have proved their potential for acquiring and retrieving information on Land Use Land Cover (LULC) features over the past few decades. These MRS sensors generally acquire data in a limited number of broad spectral bands, typically 3 to 10. The limited number of bands and broad spectral bandwidth of MRS sensors become a limitation in detailed LULC studies, as they cannot distinguish spectrally similar LULC features. In contrast, the detailed information available in hyperspectral (HRS) data is spectrally overdetermined and able to distinguish spectrally similar materials on the earth's surface. But at present the availability of HRS sensors is limited, because of the requirement for sensitive detectors and large storage capacity, which makes acquisition and processing cumbersome and expensive. So there arises a need to utilize the available MRS data for detailed LULC studies. The spectral reconstruction approach is one technique for simulating hyperspectral data from available multispectral data. In the present study, the spectral reconstruction approach is utilized for the simulation of hyperspectral data using EO-1 ALI multispectral data. The technique is implemented using the Python programming language, which is open source in nature and has support for advanced image processing libraries and utilities. Overall, 70 bands have been simulated and validated using visual interpretation, statistical, and classification approaches.
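The spectral reconstruction idea can be sketched with NumPy. This is a simplified library-based scheme, not the authors' exact algorithm: fit per-pixel abundances of reference spectra in the multispectral space, then project those abundances back through the full-resolution library. The matrix names and the least-squares step are illustrative assumptions.

```python
import numpy as np

def spectral_reconstruction(ms_pixels, library_hs, srf):
    """Reconstruct hyperspectral spectra from multispectral pixels.
    library_hs: (m, k) reference spectra over k narrow bands
    srf:        (k, b) spectral response resampling k bands to b MS bands
    ms_pixels:  (n, b) observed multispectral pixels
    Returns:    (n, k) simulated hyperspectral pixels."""
    library_ms = library_hs @ srf                       # library seen by the MS sensor
    abund, *_ = np.linalg.lstsq(library_ms.T, ms_pixels.T, rcond=None)
    return abund.T @ library_hs                         # project back to k bands
```

When a pixel really is a mixture of the library materials, the least-squares fit recovers the mixture exactly; real imagery only approximates this, which is why the study validates the simulated bands statistically and by classification.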
NASA Astrophysics Data System (ADS)
El-Saba, A. M.; Alam, M. S.; Surpanani, A.
2006-05-01
Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect proper targets with low false alarms. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different-colored targets of identical shape using a color-blind imaging sensor. As a case study, we demonstrate that traditional color-blind polarization-insensitive imaging sensors that rely only on the spatial distribution of targets suffer from high false detection rates, especially in scenarios where multiple targets of identical shape are present. On the other hand, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of the proposed polarization-encoded imaging sensor.
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities, such as sensitivity and latitude, of color films are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
NASA Astrophysics Data System (ADS)
Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.
2004-02-01
This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high-speed, low-lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research, and wind-tunnel-based gas dynamics research.
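In-pixel burst storage behaves like a per-pixel ring buffer: capture runs continuously, the oldest frame is overwritten, and only the most recent N frames survive to readout. A software analogue of that behavior (illustrative only, not the CCD architecture itself):

```python
from collections import deque

class BurstStore:
    """Keep only the last `depth` frames, as in-pixel burst storage does:
    continuous capture overwrites the oldest frame until readout."""
    def __init__(self, depth):
        self.frames = deque(maxlen=depth)

    def capture(self, frame):
        self.frames.append(frame)     # oldest frame drops out when full

    def readout(self):
        return list(self.frames)
```

With 12 frames of storage, for example, a 20-frame event leaves only the final 12 frames available at readout, which is why triggering around the instant of interest matters for these cameras.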
Research and Development Annual Report, 1992
NASA Technical Reports Server (NTRS)
1993-01-01
Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 42 additional JSC projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.
The JSC Research and Development Annual Report 1993
NASA Technical Reports Server (NTRS)
1994-01-01
Issued as a companion to Johnson Space Center's Research and Technology Annual Report, which reports JSC accomplishments under NASA Research and Technology Operating Plan (RTOP) funding, this report describes 47 additional projects that are funded through sources other than the RTOP. Emerging technologies in four major disciplines are summarized: space systems technology, medical and life sciences, mission operations, and computer systems. Although these projects focus on support of human spacecraft design, development, and safety, most have wide civil and commercial applications in areas such as advanced materials, superconductors, advanced semiconductors, digital imaging, high density data storage, high performance computers, optoelectronics, artificial intelligence, robotics and automation, sensors, biotechnology, medical devices and diagnosis, and human factors engineering.
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. Today, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated with an array of microlenses (a plenoptic camera). The camera has the advantages of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
A design of driving circuit for star sensor imaging camera
NASA Astrophysics Data System (ADS)
Li, Da-wei; Yang, Xiao-xu; Han, Jun-feng; Liu, Zhao-hui
2016-01-01
The star sensor is a high-precision attitude measurement instrument, which determines spacecraft attitude by detecting the positions of stars on the celestial sphere. The imaging camera is an important part of the star sensor. The purpose of this study is to design a driving circuit based on a Kodak CCD sensor. The design of the driving circuit based on the Kodak KAI-04022 is discussed, and the timing of this CCD sensor is analyzed. Through laboratory testing of the driving circuit and imaging experiments, it is found that the driving circuit can meet the requirements of the Kodak CCD sensor.
A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies
NASA Astrophysics Data System (ADS)
Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.
2015-12-01
Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery and data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats, including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems.
Imagery can be viewed and analyzed in the RGB, HSV, and L*a*b* color spaces. Users can select a spectral index derived from the published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
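A typical spectral index of the kind used in camera-based phenology is the green chromatic coordinate, GCC = G / (R + G + B). A minimal NumPy version (the index choice is a common example from the literature, not necessarily one shipped with the software described):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """GCC = G / (R + G + B), computed per pixel on an (H, W, 3) array.
    Pixels with zero total brightness are mapped to 0."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    return np.divide(rgb[..., 1], total,
                     out=np.zeros_like(total), where=total > 0)
```

Averaging GCC over a region of interest across a season yields the canopy greenness time series used to track green-up and senescence.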
The new Seafloor Observatory (OBSEA) for remote and long-term coastal ecosystem monitoring.
Aguzzi, Jacopo; Mànuel, Antoni; Condal, Fernando; Guillén, Jorge; Nogueras, Marc; del Rio, Joaquin; Costa, Corrado; Menesatti, Paolo; Puig, Pere; Sardà, Francesc; Toma, Daniel; Palanques, Albert
2011-01-01
A suitable sampling technology to identify species and to estimate population dynamics based on individual counts at different temporal levels in relation to habitat variations is increasingly important for fishery management and biodiversity studies. In the past two decades, as interest in exploring the oceans for valuable resources and in protecting these resources from overexploitation has grown, the number of cabled (permanent) submarine multiparametric platforms with video stations has increased. Prior to the development of seafloor observatories, the majority of autonomous stations were battery powered and stored data locally. The recently installed low-cost, multiparametric, expandable, cabled coastal Seafloor Observatory (OBSEA), located 4 km off of Vilanova i la Geltrú, Barcelona, at a depth of 20 m, is directly connected to a ground station by a telecommunication cable; thus, it is not affected by the limitations associated with previous observation technologies. OBSEA is part of the European Multidisciplinary Seafloor Observatory (EMSO) infrastructure, and its activities are included among those of the Network of Excellence of the European Seas Observatory NETwork (ESONET). OBSEA enables remote, long-term, and continuous surveys of the local ecosystem by acquiring synchronous multiparametric habitat data and bio-data with the following sensors: Conductivity-Temperature-Depth (CTD) sensors for salinity, temperature, and pressure; Acoustic Doppler Current Profilers (ADCP) for current speed and direction, including a turbidity meter and a fluorometer (for the determination of chlorophyll concentration); a hydrophone; a seismometer; and finally, a video camera for automated image analysis in relation to species classification and tracking. Images can be monitored in real time, and all data can be stored for future studies.
In this article, the various components of OBSEA are described, including its hardware (the sensors and the network of marine and land nodes), software (data acquisition, transmission, processing, and storage), and multiparametric measurement (habitat and bio-data time series) capabilities. A one-month multiparametric survey of habitat parameters was conducted during 2009 and 2010 to demonstrate these functions. An automated video image analysis protocol was also developed for fish counting in the water column, a method that can be used with cabled coastal observatories working with still images. Finally, bio-data time series were coupled with data from other oceanographic sensors to demonstrate the utility of OBSEA in studies of ecosystem dynamics.
Multispectral image-fused head-tracked vision system (HTVS) for driving applications
NASA Astrophysics Data System (ADS)
Reese, Colin E.; Bender, Edward J.
2001-08-01
Current military thermal driver vision systems consist of a single Long Wave Infrared (LWIR) sensor mounted on a manually operated gimbal, which is normally locked forward during driving. The sensor video imagery is presented on a large area flat panel display for direct view. The Night Vision and Electronic Sensors Directorate and Kaiser Electronics are cooperatively working to develop a driver's Head Tracked Vision System (HTVS) which directs dual waveband sensors in a more natural head-slewed imaging mode. The HTVS consists of LWIR and image intensified sensors, a high-speed gimbal, a head mounted display, and a head tracker. The first prototype systems have been delivered and have undergone preliminary field trials to characterize the operational benefits of a head-tracked sensor system for tactical military ground applications. This investigation will address the advantages of head-tracked vs. fixed sensor systems regarding peripheral sightings of threats, road hazards, and nearby vehicles. An additional thrust will investigate the degree to which additive (A+B) fusion of LWIR and image intensified sensors enhances overall driving performance. Typically, LWIR sensors are better for detecting threats, while image intensified sensors provide more natural scene cues, such as shadows and texture. This investigation will examine the degree to which the fusion of these two sensors enhances the driver's overall situational awareness.
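At its simplest, the additive (A+B) fusion mentioned in the abstract is a per-pixel weighted sum of the co-registered LWIR and image-intensified frames. The sketch below is illustrative only; the weights and function names are assumptions, not details of the HTVS system:

```python
def additive_fusion(lwir, i2, w_lwir=0.5, w_i2=0.5):
    """Pixel-wise additive (A+B) fusion of two co-registered 8-bit frames.

    Each output pixel is a weighted sum of the two inputs, clipped to the
    8-bit range. Frames are lists of rows of integer intensities.
    """
    fused = []
    for row_a, row_b in zip(lwir, i2):
        fused.append([min(255, int(w_lwir * a + w_i2 * b))
                      for a, b in zip(row_a, row_b)])
    return fused

# Toy 2x2 frames: LWIR highlights a hot target, I2 carries scene texture.
lwir_frame = [[200, 10], [0, 255]]
i2_frame = [[100, 30], [50, 255]]
print(additive_fusion(lwir_frame, i2_frame))  # [[150, 20], [25, 255]]
```

Equal weights preserve both modalities' cues; in practice the weights would be tuned for the display and scene conditions.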
Cryogenic Multichannel Pressure Sensor With Electronic Scanning
NASA Technical Reports Server (NTRS)
Hopson, Purnell, Jr.; Chapman, John J.; Kruse, Nancy M. H.
1994-01-01
Array of pressure sensors operates reliably and repeatably over wide temperature range, extending from normal boiling point of water down to boiling point of nitrogen. Sensors accurate and repeat to within 0.1 percent. Operate for 12 months without need for recalibration. Array scanned electronically, sensor readings multiplexed and sent to desktop computer for processing and storage. Used to measure distributions of pressure in research on boundary layers at high Reynolds numbers, achieved by low temperatures.
Storage Phosphors for Medical Imaging
Leblans, Paul; Vandenbroucke, Dirk; Willems, Peter
2011-01-01
Computed radiography (CR) uses storage phosphor imaging plates for digital imaging. Absorbed X-ray energy is stored in crystal defects. In read-out, the energy is set free as blue photons upon optical stimulation. In the 35 years of CR history, several storage phosphor families were investigated and developed. An explanation is given as to why some materials made it to the commercial stage, while others did not. The photostimulated luminescence mechanism of the current commercial storage phosphors, BaFBr:Eu2+ and CsBr:Eu2+, is discussed. The relation between storage phosphor plate physical characteristics and image quality is explained. It is demonstrated that the morphology of the phosphor crystals in the CR imaging plate has a very significant impact on its performance. PMID:28879966
Leica ADS40 Sensor for Coastal Multispectral Imaging
NASA Technical Reports Server (NTRS)
Craig, John C.
2007-01-01
The Leica ADS40 Sensor as it is used for coastal multispectral imaging is presented. The contents include: 1) Project Area Overview; 2) Leica ADS40 Sensor; 3) Focal Plate Arrangements; 4) Trichroid Filter; 5) Gradient Correction; 6) Image Acquisition; 7) Remote Sensing and ADS40; 8) Band comparisons of Satellite and Airborne Sensors; 9) Impervious Surface Extraction; and 10) Impervious Surface Details.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a Full Frame CCD, an Interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
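One of the quantities such an analysis ties to the pixel's physical nature is dynamic range, commonly estimated as the ratio of full-well capacity to the read-noise floor. A minimal illustration with assumed numbers (not figures from the paper):

```python
import math

def dynamic_range_db(full_well_electrons, read_noise_electrons):
    """Single-pixel dynamic range in dB: 20*log10(full well / noise floor)."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

# Illustrative values only: a pixel with a 20,000 e- full well
# and a 5 e- read-noise floor.
dr = dynamic_range_db(20000, 5)
print(round(dr, 1))  # 72.0 (dB)
```

Shrinking pixels reduces full-well capacity per pixel, which is one reason mega-pixel count alone is a poor proxy for image quality.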
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid state image sensors are used in many applications like in mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing like object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixels image sensor built in a standard CMOS Technology 0.35 μm including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit. This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixels neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixels is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor like fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
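The MMU's non-linear min/max operation over a 2×2 neighbourhood can be mimicked in software. The sketch below assumes a sliding (overlapping) 2×2 window, which the abstract does not specify, so that detail is an assumption:

```python
def minmax_2x2(img):
    """Min and max over each sliding 2x2 neighbourhood of an H x W image.

    Mimics the analog Minima/Maxima Unit in software: returns two
    (H-1) x (W-1) maps of local minima and maxima.
    """
    h, w = len(img), len(img[0])
    minima, maxima = [], []
    for y in range(h - 1):
        min_row, max_row = [], []
        for x in range(w - 1):
            block = (img[y][x], img[y][x + 1],
                     img[y + 1][x], img[y + 1][x + 1])
            min_row.append(min(block))
            max_row.append(max(block))
        minima.append(min_row)
        maxima.append(max_row)
    return minima, maxima

frame = [[9, 2, 7],
         [4, 6, 1],
         [3, 8, 5]]
lo, hi = minmax_2x2(frame)
print(lo)  # [[2, 1], [3, 1]]
print(hi)  # [[9, 7], [8, 8]]
```

On the chip this computation costs 52 transistors per MMU and runs in real time; the software version only illustrates the function being computed.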
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
Detection of Obstacles in Monocular Image Sequences
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia
1997-01-01
The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.
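The report names histogram-based thresholding for obstacle detection without fixing a particular rule; Otsu's method is one standard histogram-based choice and serves here only as a sketch of the idea:

```python
def otsu_threshold(hist):
    """Otsu's histogram-based threshold: maximize between-class variance.

    `hist` is a list of pixel counts per intensity level; returns the
    level t such that classes [0..t] and [t+1..] are best separated.
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * h
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram over 8 grey levels: dark runway surface vs. a
# brighter obstacle region (values are invented for illustration).
hist = [40, 30, 5, 0, 0, 6, 25, 35]
print(otsu_threshold(hist))  # 2
```

Pixels above the returned level would be flagged as candidate obstacles inside the runway region identified by the model-based step.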
Fish freshness detection by a computer screen photoassisted based gas sensor array.
Alimelli, Adriano; Pennazza, Giorgio; Santonico, Marco; Paolesse, Roberto; Filippini, Daniel; D'Amico, Arnaldo; Lundström, Ingemar; Di Natale, Corrado
2007-01-23
In recent years, a large number of different measurement methodologies have been applied to measure the freshness of fish. Among them, the connection between freshness and headspace composition has been considered by gas chromatographic analysis and, over the last two decades, by a number of sensors and biosensors aimed at measuring some characteristic indicators (usually amines). More recently, the so-called artificial olfaction systems, gathering together many non-specific sensors, have shown a certain capability to transduce the global composition of the fish headspace, capturing the differences between fresh and spoiled products. One of the main objectives related to the introduction of sensor systems with respect to the analytical methods is the claimed possibility to distribute the freshness control, since sensors are expected to be "portable" and "simple". In spite of these objectives, sensor systems have not yet resulted in any tool that may be broadly distributed. In this paper, we present a chemical sensor array where the optical features of layers of chemicals, sensitive to volatile compounds typical of spoilage processes in fish, are interrogated by a very simple platform based on a computer screen and a web cam. An array of metalloporphyrins is here used to classify fillets of thawed fishes according to their storage days and to monitor the spoilage in filleted anchovies for a time of 8 h. Results indicate a complete identification of the storage days of thawed fillets and a determination of the storage time of anchovies held at room temperature with a root mean square error of validation of about 30 min. The optical system produces a sort of spectral fingerprint containing information about both the absorbance and the emission of the sensitive layer.
The system here illustrated, based on computer peripherals, can be easily scaled to any device endowed with a programmable screen and a camera such as cellular phones offering for the first time the possibility to fulfil the sensor expectation of diffused and efficient analytical capabilities.
Micijevic, Esad; Morfitt, Ron
2010-01-01
Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and recently extended to the Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and the Multispectral Scanners (MSS) on board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observer 1 (EO-1) Advanced Land Imager (ALI) with a capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to unique calibration and validation attributes of its on-board imaging sensors. The LDCM IAS system will have to handle a significantly larger number of detectors and the associated database than the previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as the LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.
Ma, Xingpo; Liu, Xingjian; Liang, Junbin; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda
2018-03-15
A novel network paradigm of mobile edge computing, namely TMWSNs (two-tiered mobile wireless sensor networks), has been proposed by researchers in recent years for its high scalability and robustness. However, only a few works have considered the security of TMWSNs. In fact, the storage nodes, which are located at the upper layer of TMWSNs, are prone to being attacked by adversaries because they play a key role in bridging both the sensor nodes and the sink, which may lead to the disclosure of all data stored on them as well as some other potentially devastating results. In this paper, we make a comparative study of two typical schemes, EVTopk and VTMSN, which have been proposed recently for securing Top-k queries in TMWSNs, through both theoretical analysis and extensive simulations, aiming to identify their advantages and disadvantages. We find that both schemes incur unsatisfactorily high communication costs. Specifically, the extra communication cost brought about by transmitting the proof information uses up more than 40% of the total communication cost between the sensor nodes and the storage nodes, and 80% of that between the storage nodes and the sink. We discuss the corresponding reasons and present our suggestions, hoping to inspire researchers studying this subject.
Onboard Image Processing System for Hyperspectral Sensor
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-01-01
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
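FELICS-style coders map prediction residuals to Golomb-Rice codes. The paper uses an adaptive parameter; the fixed-k sketch below only shows the code structure (a unary quotient with a stop bit, followed by a k-bit binary remainder) and is not the authors' implementation:

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer with divisor 2**k."""
    q = value >> k                 # quotient
    r = value & ((1 << k) - 1)     # remainder
    bits = '1' * q + '0'           # unary quotient with a '0' stop bit
    if k:
        bits += format(r, '0%db' % k)  # k-bit binary remainder
    return bits

def golomb_rice_decode(bits, k):
    """Inverse of golomb_rice_encode for a single codeword."""
    q = 0
    i = 0
    while bits[i] == '1':          # count the unary quotient
        q += 1
        i += 1
    i += 1                         # skip the stop bit
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) + r

code = golomb_rice_encode(9, 2)
print(code)                        # 11001
print(golomb_rice_decode(code, 2)) # 9
```

Small residuals get short codes, which is why pairing a good predictor with Golomb-Rice coding compresses well at low complexity; the adaptive variant picks k per context from running statistics.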
A time-resolved image sensor for tubeless streak cameras
NASA Astrophysics Data System (ADS)
Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji
2014-03-01
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires a high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tube. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator in combination with in-pixel logic allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 μm.
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan
2017-03-01
This paper presents a low-power wireless image sensor node with noise-robust moving object detection and a region-of-interest based rate controller.
Chen, Chia-Wei; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung
2017-10-02
Recently, even low-end mobile phones have been equipped with a high-resolution complementary-metal-oxide-semiconductor (CMOS) image sensor. This motivates using a CMOS image sensor for visible light communication (VLC). Here we propose and demonstrate an efficient demodulation scheme to synchronize and demodulate the rolling shutter pattern in image-sensor-based VLC. The implementation algorithm is discussed. The bit-error-rate (BER) performance and processing latency are evaluated and compared with other thresholding schemes.
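Rolling-shutter VLC receivers see an on-off keyed LED as bright and dark stripes because sensor rows expose sequentially. The authors propose their own synchronization and demodulation scheme; the sketch below is only a generic global-threshold demodulator over per-row mean intensities, with invented parameters:

```python
def demodulate_rolling_shutter(row_means, samples_per_bit):
    """Recover OOK bits from per-row mean intensities of one frame.

    The global mean serves as the decision threshold; each transmitted
    bit spans `samples_per_bit` consecutive sensor rows (stripes).
    """
    threshold = sum(row_means) / len(row_means)
    bits = []
    for i in range(0, len(row_means) - samples_per_bit + 1, samples_per_bit):
        chunk = row_means[i:i + samples_per_bit]
        bits.append(1 if sum(chunk) / samples_per_bit > threshold else 0)
    return bits

# Toy frame: 12 rows, 3 rows per bit, transmitted pattern 1 0 1 1.
rows = [200, 210, 205, 40, 35, 45, 198, 202, 200, 207, 195, 203]
print(demodulate_rolling_shutter(rows, 3))  # [1, 0, 1, 1]
```

A fixed global threshold degrades under non-uniform illumination, which is precisely why the paper compares against more robust thresholding schemes.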
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from image sensing geometry, the latter is based on knowledge of the physical/geometric sensor models and on using such models for its implementation. The main thrust of this research is in replacement sensor models which have three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered as True Replacement Models or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several publications about replacement sensor models, and except for the so-called RSM (Replacement Sensor Model, as a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is, it is suspected, because the few physical sensor parameters are usually replaced by many more parameters, which presents a potential difficulty for error estimation. The third characteristic, adjustability, is perhaps the most demanding. It provides a flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of the hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM.
Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
NASA Astrophysics Data System (ADS)
Burk, Laurel M.; Lee, Yueh Z.; Wait, J. Matthew; Lu, Jianping; Zhou, Otto Z.
2012-09-01
A cone beam micro-CT has previously been utilized along with a pressure-tracking respiration sensor to acquire prospectively gated images of both wild-type mice and various adult murine disease models. While the pressure applied to the abdomen of the subject by this sensor is small and is generally without physiological effect, certain disease models of interest, as well as very young animals, are prone to atelectasis with added pressure, or they generate too weak a respiration signal with this method to achieve optimal prospective gating. In this work we present a new fibre-optic displacement sensor which monitors respiratory motion of a subject without requiring physical contact. The sensor outputs an analogue signal which can be used for prospective respiration gating in micro-CT imaging. The device was characterized and compared against a pneumatic air chamber pressure sensor for the imaging of adult wild-type mice. The resulting images were found to be of similar quality with respect to physiological motion blur; the quality of the respiration signal trace obtained using the non-contact sensor was comparable to that of the pressure sensor and was superior for gating purposes due to its better signal-to-noise ratio. The non-contact sensor was then used to acquire in-vivo micro-CT images of a murine model for congenital diaphragmatic hernia and of 11-day-old mouse pups. In both cases, quality CT images were successfully acquired using this new respiration sensor. Despite the presence of beam hardening artefacts arising from the presence of a fibre-optic cable in the imaging field, we believe this new technique for respiration monitoring and gating presents an opportunity for in-vivo imaging of disease models which were previously considered too delicate for established animal handling methods.
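Prospective gating amounts to triggering acquisition when the respiration trace enters a quiescent window (e.g. the end-expiration plateau). The schematic detector below uses an assumed amplitude window and invented trace values, purely to illustrate the gating logic:

```python
def gate_triggers(signal, low, high):
    """Indices where a respiration trace enters the window [low, high].

    Each returned index is a candidate trigger point at which a gated
    CT projection could be acquired with minimal motion blur.
    """
    triggers = []
    inside = False
    for i, v in enumerate(signal):
        now_inside = low <= v <= high
        if now_inside and not inside:
            triggers.append(i)  # rising edge into the quiescent window
        inside = now_inside
    return triggers

# Toy trace: two breaths, quiescent plateaus near amplitude 2-3.
trace = [5, 9, 14, 9, 4, 2, 2, 3, 9, 15, 8, 3, 2, 2]
print(gate_triggers(trace, 0, 4))  # [4, 11]
```

A higher signal-to-noise ratio, as reported for the fibre-optic sensor, means fewer spurious window crossings and hence cleaner triggers.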
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is known. However, most image sensor manufacturers keep it as an industrial secret due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly with significant stability and accuracy from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
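On-sensor compressed sensing replaces N raw pixel reads with M < N random projections of the pixel values. The software sketch below uses a pseudo-random ±1 (Bernoulli) sensing matrix; the paper's actual pixel-level circuit implementation differs, and all names here are illustrative:

```python
import random

def cs_measure(pixels, num_measurements, seed=1):
    """Compressed-sensing style measurements y = Phi @ x.

    Phi is a pseudo-random +/-1 (Bernoulli) sensing matrix, the kind of
    projection that could be formed on-sensor by summing pixel values
    with pseudo-random signs. Reconstruction (not shown) would use a
    sparse solver such as basis pursuit.
    """
    rng = random.Random(seed)  # seeded so receiver can regenerate Phi
    n = len(pixels)
    measurements = []
    for _ in range(num_measurements):
        signs = [rng.choice((-1, 1)) for _ in range(n)]
        measurements.append(sum(s * p for s, p in zip(signs, pixels)))
    return measurements

x = [12, 0, 0, 7, 0, 0, 0, 3]   # sparse "image" of 8 pixels
y = cs_measure(x, 4)
print(len(y))  # 4 measurements transmitted instead of 8 raw samples
```

Because the sensing matrix is regenerated from the shared seed, only the M measurement values need to leave the sensor, which is where the data-rate and power savings come from.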
On computer vision in wireless sensor networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Nina M.; Ko, Teresa H.
Wireless sensor networks allow detailed sensing of otherwise unknown and inaccessible environments. While it would be beneficial to include cameras in a wireless sensor network because images are so rich in information, the power cost of transmitting an image across the wireless network can dramatically shorten the lifespan of the sensor nodes. This paper describes a new paradigm for the incorporation of imaging into wireless networks. Rather than focusing on transmitting images across the network, we show how an image can be processed locally for key features using simple detectors. Contrasted with traditional event detection systems that trigger an image capture, this enables a new class of sensors which uses a low power imaging sensor to detect a variety of visual cues. Sharing these features among relevant nodes cues specific actions to better provide information about the environment. We report on various existing techniques developed for traditional computer vision research which can aid in this work.
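A node-local visual cue of the kind described can be as cheap as a frame difference: the node transmits a flag, not the image. An illustrative sketch; the thresholds and flattened-frame representation are assumptions, not details from the paper:

```python
def motion_detected(prev, curr, pixel_thresh=20, count_thresh=3):
    """Cheap in-node motion cue over two flattened grayscale frames.

    Flags motion when at least `count_thresh` pixels change by more
    than `pixel_thresh`, so only a single bit needs to cross the radio
    instead of the whole image.
    """
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_thresh)
    return changed >= count_thresh

frame_a = [10, 12, 11, 10, 13, 12, 11, 10]
frame_b = [10, 80, 90, 10, 13, 70, 11, 10]
print(motion_detected(frame_a, frame_b))  # True
```

The per-pixel threshold absorbs sensor noise while the count threshold suppresses isolated flicker, trading detection sensitivity against radio wake-ups and hence node lifetime.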
Image-Based Environmental Monitoring Sensor Application Using an Embedded Wireless Sensor Network
Paek, Jeongyeup; Hicks, John; Coe, Sharon; Govindan, Ramesh
2014-01-01
This article discusses the experiences from the development and deployment of two image-based environmental monitoring sensor applications using an embedded wireless sensor network. Our system uses low-power image sensors and the Tenet general purpose sensing system for tiered embedded wireless sensor networks. It leverages Tenet's built-in support for reliable delivery of high rate sensing data, scalability and its flexible scripting language, which enables mote-side image compression and the ease of deployment. Our first deployment of a pitfall trap monitoring application at the James San Jacinto Mountain Reserve provided us with insights and lessons learned into the deployment of and compression schemes for these embedded wireless imaging systems. Our three month-long deployment of a bird nest monitoring application resulted in over 100,000 images collected from a 19-camera node network deployed over an area of 0.05 square miles, despite highly variable environmental conditions. Our biologists found the on-line, near-real-time access to images to be useful for obtaining data to answer their biological questions. PMID:25171121
Microscale autonomous sensor and communications module
Okandan, Murat; Nielson, Gregory N
2014-03-25
Various technologies pertaining to a microscale autonomous sensor and communications module are described herein. Such a module includes a sensor that generates a sensor signal indicative of an environmental parameter. An integrated circuit receives the sensor signal and generates an output signal based at least in part upon the sensor signal. An optical emitter receives the output signal and generates an optical signal as a function of the output signal. An energy storage device is configured to provide power to at least the integrated circuit and the optical emitter. The module has a relatively small diameter and thickness.
York, Timothy; Powell, Samuel B.; Gao, Shengkui; Kahan, Lindsey; Charanya, Tauseef; Saha, Debajit; Roberts, Nicholas W.; Cronin, Thomas W.; Marshall, Justin; Achilefu, Samuel; Lake, Spencer P.; Raman, Baranidharan; Gruev, Viktor
2015-01-01
In this paper, we present recent work on bioinspired polarization imaging sensors and their applications in biomedicine. In particular, we focus on three different aspects of these sensors. First, we describe the electro–optical challenges in realizing a bioinspired polarization imager, and in particular, we provide a detailed description of a recent low-power complementary metal–oxide–semiconductor (CMOS) polarization imager. Second, we focus on signal processing algorithms tailored for this new class of bioinspired polarization imaging sensors, such as calibration and interpolation. Third, the emergence of these sensors has enabled rapid progress in characterizing polarization signals and environmental parameters in nature, as well as several biomedical areas, such as label-free optical neural recording, dynamic tissue strength analysis, and early diagnosis of flat cancerous lesions in a murine colorectal tumor model. We highlight results obtained from these three areas and discuss future applications for these sensors. PMID:26538682
Pawlak, Ryszard; Lebioda, Marcin; Rymaszewski, Jacek; Szymanski, Witold; Kolodziejczyk, Lukasz; Kula, Piotr
2016-01-01
Low-temperature electronics operating in below zero temperatures or even below the lower limit of the common −65 to 125 °C temperature range are essential in medical diagnostics, in space exploration and aviation, in processing and storage of food and mainly in scientific research, like superconducting materials engineering and their applications—superconducting magnets, superconducting energy storage, and magnetic levitation systems. Such electronic devices demand special approach to the materials used in passive elements and sensors. The main goal of this work was the implementation of a fully transparent, flexible cryogenic temperature sensor with graphene structures as sensing element. Electrodes were made of transparent ITO (Indium Tin Oxide) or ITO/Ag/ITO conductive layers by laser ablation and finally encapsulated in a polymer coating. A helium closed-cycle cryostat has been used in measurements of the electrical properties of these graphene-based temperature sensors under cryogenic conditions. The sensors were repeatedly cooled from room temperature to cryogenic temperature. Graphene structures were characterized using Raman spectroscopy. The observation of the resistance changes as a function of temperature indicates the potential use of graphene layers in the construction of temperature sensors. The temperature characteristics of the analyzed graphene sensors exhibit no clear anomalies or strong non-linearity in the entire studied temperature range (as compared to the typical carbon sensor). PMID:28036036
NASA Astrophysics Data System (ADS)
Courts, S. Scott; Krause, John
2012-06-01
Cryogenic temperature sensors used in aerospace applications are typically procured far in advance of the mission launch date. Depending upon the program, the temperature sensors may be stored at room temperature for extended periods, as installation and ground-based testing can take years before the actual flight. The effects of long-term storage at room temperature are sometimes approximated by accelerated aging at temperatures well above room temperature, but this practice can yield invalid results because the sensing material and/or electrical contacting method can be increasingly unstable with higher temperature exposure. To date, little data are available on the effects of extended room temperature aging on sensors commonly used in aerospace applications. This research examines two such temperature sensor models: the Lake Shore Cryotronics, Inc. Cernox™ and DT-670-SD temperature sensors. Sample groups of each model type have been maintained for ten years or longer with room temperature storage between calibrations. Over an eighteen-year period, the Cernox™ temperature sensors exhibited a stability of better than ±20 mK for T<30 K and better than ±0.1% of temperature for T>30 K. Over a ten-year period, the model DT-670-SD sensors exhibited a stability of better than ±140 mK for T<25 K and better than ±75 mK for T>25 K.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
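The second registration method above (user-selected control points plus regression analysis) can be sketched as a least-squares fit of a spatial transform. The affine model and the specific point coordinates below are illustrative assumptions, since the abstract does not fix the exact regression model:

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src control points to dst.

    src_pts, dst_pts: (N, 2) arrays of matching (x, y) control points, N >= 3.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])             # (N, 3) design matrix
    # Solve X @ A.T ~= dst for the 2x3 affine matrix A
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T

# Hypothetical example: recover a pure translation of (+5, -2)
# from four user-selected control point pairs
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -2), (15, -2), (5, 8), (15, 8)]
A = fit_affine(src, dst)
```

With more control points than parameters, the least-squares fit averages out small point-selection errors, which is why regression is preferred over an exact solve from three points.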
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now with 1280x1024 pixel format IR detector arrays becoming readily available designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increases. In many instances the upgrading of existing under-sampled SD thermal imaging sensors into more optimally sampled or oversampled HD thermal imaging sensors provides a more cost effective and reduced time to market option than to design and develop a completely new sensor. This paper presents the analysis and rationale behind the selection of the best suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors of inertial sensors was established, yielding a nonlinear state-space model. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noise of an ADIS16405 IMU and of fiber-optic gyro sensors was analyzed by the proposed method and by traditional methods. The experimental results show that the proposed method is better suited to estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented on an online processor. PMID:25625903
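For reference, the quantity being estimated, the Allan variance at averaging time tau = m * tau0, is what the traditional offline method computes from stored samples. A generic overlapping-estimator sketch (not the paper's online filter) looks like this:

```python
import numpy as np

def allan_variance(rate, m):
    """Overlapping Allan variance of a sampled rate signal.

    rate: 1-D array of sensor rate samples, one every tau0 seconds.
    m:    cluster size in samples; the averaging time is tau = m * tau0.
    """
    rate = np.asarray(rate, dtype=float)
    # Averages over every window of m consecutive samples (overlapping)
    cumsum = np.concatenate(([0.0], np.cumsum(rate)))
    avgs = (cumsum[m:] - cumsum[:-m]) / m
    # Allan variance: half the mean squared difference of cluster averages
    diffs = avgs[m:] - avgs[:-m]
    return 0.5 * np.mean(diffs ** 2)

# White (angle-random-walk-like) noise: AVAR should fall roughly as 1/m
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 100_000)
av1 = allan_variance(noise, m=1)     # expected near sigma^2 = 1
av100 = allan_variance(noise, m=100)  # expected near sigma^2 / 100
```

Fitting the slopes of log(AVAR) versus log(tau) is the manual step that the paper's neural-extended Kalman filter replaces with online coefficient estimation.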
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe the experimental image acquisition system we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
The challenge of sCMOS image sensor technology to EMCCD
NASA Astrophysics Data System (ADS)
Chang, Weijing; Dai, Fang; Na, Qiyue
2018-02-01
In the field of low-illumination image sensors, the noise of the latest scientific-grade CMOS (sCMOS) image sensors is close to that of EMCCDs, and the industry considers sCMOS to have the potential to compete with and even replace EMCCD. We therefore selected several typical sCMOS and EMCCD image sensors and cameras and compared their performance parameters. The results show that the signal-to-noise ratio of sCMOS is close to that of EMCCD, and the other parameters are superior. However, signal-to-noise ratio is critical for low-illumination imaging, and the actual imaging results of sCMOS are not ideal. EMCCD is still the first choice in high-performance application fields.
NASA Astrophysics Data System (ADS)
Liang, Shiguo; Ye, Jiamin; Wang, Haigang; Wu, Meng; Yang, Wuqiang
2018-03-01
In the design of electrical capacitance tomography (ECT) sensors, the internal wall thickness can vary with specific applications, and it is a key factor that influences the sensitivity distribution and image quality. This paper will discuss the effect of the wall thickness of ECT sensors on image quality. Three flow patterns are simulated for wall thicknesses of 2.5 mm to 15 mm on eight-electrode ECT sensors. The sensitivity distributions and potential distributions are compared for different wall thicknesses. Linear back-projection and Landweber iteration algorithms are used for image reconstruction. Relative image error and correlation coefficients are used for image evaluation using both simulation and experimental data.
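As a minimal sketch of one of the two reconstruction algorithms mentioned, linear back-projection maps the normalized capacitance measurements back through the transposed sensitivity matrix. The toy 3×4 matrix below is an illustrative assumption, not a real eight-electrode sensitivity map:

```python
import numpy as np

def lbp_reconstruct(S, lam):
    """Linear back-projection for ECT.

    S:   (M, P) normalized sensitivity matrix (M electrode-pair
         measurements, P image pixels), so that lam ~= S @ g for a
         normalized permittivity image g.
    lam: (M,) normalized capacitance measurements.
    Returns the (P,) back-projected image, normalized per pixel.
    """
    S = np.asarray(S, dtype=float)
    lam = np.asarray(lam, dtype=float)
    g = S.T @ lam                         # back-project measurements
    norm = S.T @ np.ones(S.shape[0])      # per-pixel sum of sensitivities
    return g / np.where(norm == 0, 1.0, norm)

# Toy example: 3 measurements, 4 pixels, object in pixel 0
S = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.0, 0.5, 1.0, 0.0],
              [0.0, 0.0, 0.5, 1.0]])
true_g = np.array([1.0, 0.0, 0.0, 0.0])
lam = S @ true_g                          # forward-simulated measurements
img = lbp_reconstruct(S, lam)
```

LBP is a single non-iterative step and smears the object along sensitive regions, which is why the iterative Landweber method is used when higher image quality is needed.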
An analysis of image storage systems for scalable training of deep neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Young, Steven R; Patton, Robert M
This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets, MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays holding the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge tree based key-value storage; and (5) loading the training data into LMDB, a B+tree based key-value storage. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value storage based storage systems. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements on training time, this study provides in-depth analysis of the causes of the performance advantages/disadvantages of each back-end used to train deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
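The gap between per-file reads (option 1) and in-memory arrays (option 3) is easy to reproduce at small scale with a stdlib-only sketch; the file count and image size below are arbitrary illustrative values, not the study's setup:

```python
import os
import shutil
import tempfile
import time

# Write N tiny "images" as individual files, then compare the time to
# stream them from disk against iterating a preloaded in-memory list.
N, IMG_BYTES = 500, 32 * 32               # 500 fake 32x32 8-bit images
tmpdir = tempfile.mkdtemp()
payload = os.urandom(IMG_BYTES)
paths = []
for i in range(N):
    p = os.path.join(tmpdir, f"img_{i}.bin")
    with open(p, "wb") as f:
        f.write(payload)
    paths.append(p)

t0 = time.perf_counter()
for p in paths:                           # option 1: one open/read per image
    with open(p, "rb") as f:
        f.read()
t_files = time.perf_counter() - t0

in_memory = [payload] * N                 # option 3: preloaded pixel data
t0 = time.perf_counter()
for img in in_memory:                     # iterating costs almost nothing
    len(img)
t_memory = time.perf_counter() - t0

shutil.rmtree(tmpdir)                     # clean up the temporary files
```

Each file read pays syscall and metadata overhead that the in-memory (and, per the study, key-value) options amortize away, which is the mechanism behind the reported 17x slowdown.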
Evaluation of physical properties of different digital intraoral sensors.
Al-Rawi, Wisam; Teich, Sorin
2013-09-01
Digital technologies provide clinically acceptable results comparable to traditional films while having other advantages such as the ability to store and manipulate images, immediate evaluation of the image diagnostic quality, possible reduction in patient radiation exposure, and so on. The purpose of this paper is to present the results of the evaluation of the physical design of eight CMOS digital intraoral sensors. Sensors tested included: XDR (Cyber Medical Imaging, Los Angeles, CA, USA), RVG 6100 (Carestream Dental LLC, Atlanta, GA, USA), Platinum (DEXIS LLC., Hatfield, PA, USA), CDR Elite (Schick Technologies, Long Island City, NY, USA), ProSensor (Planmeca, Helsinki, Finland), EVA (ImageWorks, Elmsford, NY, USA), XIOS Plus (Sirona, Bensheim, Germany), and GXS-700 (Gendex Dental Systems, Hatfield, PA, USA). The sensors were evaluated for cable configuration, connectivity interface, presence of back-scattering radiation shield, plate thickness, active sensor area, and comparing the active imaging area to the outside casing and to conventional radiographic films. There were variations among the physical design of different sensors. For most parameters tested, a lack of standardization exists in the industry. The results of this study revealed that these details are not always available through the material provided by the manufacturers and are often not advertised. For all sensor sizes, active imaging area was smaller compared with conventional films. There was no sensor in the group that had the best physical design. Data presented in this paper establishes a benchmark for comparing the physical design of digital intraoral sensors.
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
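A stochastic parallel gradient descent loop of the kind described, perturbing all control parameters at once and ascending the image-quality metric, can be sketched as follows. The gain, perturbation amplitude, and toy quadratic metric are illustrative assumptions, not the AOSLO's actual parameters:

```python
import numpy as np

def spgd_maximize(metric, n_params, gain=1.0, perturb=0.1, iters=500, seed=0):
    """Stochastic parallel gradient descent: maximize metric(u) without
    gradients, as in wavefront-sensorless AO (u = mirror commands)."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_params)
    for _ in range(iters):
        # Perturb every parameter simultaneously with a random +/- step
        delta = perturb * rng.choice([-1.0, 1.0], size=n_params)
        # Two-sided estimate of the metric change along this perturbation
        dJ = metric(u + delta) - metric(u - delta)
        u += gain * dJ * delta            # step in the ascending direction
    return u

# Toy "mean image intensity" metric, peaked at a hypothetical optimum
target = np.array([0.3, -0.2, 0.1])
metric = lambda u: -np.sum((u - target) ** 2)
u_hat = spgd_maximize(metric, n_params=3)
```

In the real instrument the metric is the mean intensity of acquired retinal frames, so each "evaluation" costs one camera frame; the parallel perturbation is what keeps convergence fast enough for 25 Hz operation.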
Photodiode area effect on performance of X-ray CMOS active pixel sensors
NASA Astrophysics Data System (ADS)
Kim, M. S.; Kim, Y.; Kim, G.; Lim, K. T.; Cho, G.; Kim, D.
2018-02-01
Compared to conventional TFT-based X-ray imaging devices, CMOS-based X-ray imaging sensors are considered next generation because they can be manufactured with very small pixel pitches and can acquire high-speed images. In addition, CMOS-based sensors have the advantage of integrating various functional circuits within the sensor, and image quality can be improved by the high fill factor achievable in large pixels. If the size of the subject is small, however, the size of the pixel must be reduced as a consequence, and the fill factor must also be reduced to accommodate various functional circuits within the pixel. In this study, 3T-APS (active pixel sensor) devices with photodiodes of four different sizes were fabricated and evaluated. It is well known that a larger photodiode leads to improved overall performance. Nonetheless, once the photodiode area exceeds 1000 μm², the rate at which sensor performance improves with increasing photodiode size diminishes. As a result, considering the fill factor, a pixel pitch larger than 32 μm is not necessary to achieve high-efficiency image quality. In addition, poor image quality is to be expected unless special sensor-design techniques are included for sensors with a pixel pitch of 25 μm or less.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server; because of their large size, they are stored outside the database on network devices. The database contains pointers to the image/video files and descriptions of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow cataloging devices and modifying device status and device network location. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
Methods of use for sensor based fluid detection devices
NASA Technical Reports Server (NTRS)
Lewis, Nathan S. (Inventor)
2001-01-01
Methods of use and devices for detecting analyte in fluid. A system for detecting an analyte in a fluid is described comprising a substrate having a sensor comprising a first organic material and a second organic material where the sensor has a response to permeation by an analyte. A detector is operatively associated with the sensor. Further, a fluid delivery appliance is operatively associated with the sensor. The sensor device has information storage and processing equipment, which is operably connected with the device. This device compares a response from the detector with a stored ideal response to detect the presence of analyte. An integrated system for detecting an analyte in a fluid is also described where the sensing device, detector, information storage and processing device, and fluid delivery device are incorporated in a substrate. Methods for use for the above system are also described where the first organic material and a second organic material are sensed and the analyte is detected with a detector operatively associated with the sensor. The method provides for a device, which delivers fluid to the sensor and measures the response of the sensor with the detector. Further, the response is compared to a stored ideal response for the analyte to determine the presence of the analyte. In different embodiments, the fluid measured may be a gaseous fluid, a liquid, or a fluid extracted from a solid. Methods of fluid delivery for each embodiment are accordingly provided.
Rizvi, Sanam Shahla; Chung, Tae-Sun
2010-01-01
Flash memory has become a widespread storage medium for modern wireless devices because of its attractive characteristics: non-volatility, small size, light weight, fast access speed, shock resistance, high reliability and low power consumption. Sensor nodes are highly resource-constrained in terms of processing speed, runtime memory, persistent storage, communication bandwidth and finite energy. Therefore, for wireless sensor networks supporting sense, store, merge and send schemes, an efficient and reliable file system is highly desirable, with consideration of sensor node constraints. In this paper, we propose a novel log-structured external NAND flash memory based file system, called Proceeding to Intelligent service oriented memorY Allocation for flash based data centric Sensor devices in wireless sensor networks (PIYAS). This is the extended version of our previously proposed PIYA [1]. The main goals of the PIYAS scheme are to achieve instant mounting and a reduced SRAM footprint by keeping the memory mapping information very small, and to provide high query response throughput by allocating memory to sensor data according to network business rules. The scheme intelligently samples and stores the raw data and provides high in-network data availability by keeping the aggregate data for a longer period of time than any previous scheme. We propose effective garbage collection and wear-leveling schemes as well. The experimental results show that PIYAS is an optimized memory management scheme allowing high performance for wireless sensor networks.
Primary response of high-aspect-ratio thermoresistive sensors
NASA Astrophysics Data System (ADS)
Majlesein, H. R.; Mitchell, D. L.; Bhattacharya, Pradeep K.; Singh, A.; Anderson, James A.
1997-07-01
There is a growing need for sensors to monitor performance in modern quality products, such as in electronics to monitor heat build-up, substrate delaminations, and thermal runaway. In processing instruments, intelligent sensors are needed to measure deposited layer thickness and resistivities for process control, and in environmental electrical enclosures they are used for climate monitoring and control. A yaw sensor for skid prevention utilizes very fine moveable components, and an automobile engine controller blends a microprocessor and sensor on the same chip. An active-pixel image sensor is integrated with a digital readout circuit to perform most of the functions in a video camera. Magnetostrictive transducers sense and damp vibrations. Improved acoustic sensors will be used in flow detection of air and other fluids, even at subsonic speeds. Optoelectronic sensor systems are being developed for installation on rocket engines to monitor exhaust gases for signs of wear in the engines. With new freon-free coolants now available, A/C system corrosion problems in automobiles have increased and need to be monitored more frequently. Defense cutbacks compel the storage of hardware in safe custody for an indeterminate period of time, which makes monitoring more essential. Just-in-time customized manufacturing in modern industries also requires dramatic adjustments in the productivity of various selected items, leaving some manufacturing equipment idle for long periods and therefore prone to more corrosion, so corrosion sensors are needed. In the medical device industry, implantable devices using both potentiometric and amperometric determination of parameters have, until now, been built with insufficient microminiaturization and thus require surgical implantation. In many applications, high-aspect-ratio devices, made possible by the use of synchrotron radiation lithography, allow more useful devices to be produced.
High-aspect-ratio sensors will permit industries and various other users to attain more accurate measurements of physical properties and chemical compositions in many systems. Considerable engineering research has recently been focused on this type of fabrication. This paper examines a high-aspect-ratio thermoresistive sensor device with an increased aspect ratio of the interconnects to the device, using unique simulation software resources.
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10⁶ pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
Precise calibration of pupil images in pyramid wavefront sensor.
Liu, Yong; Mu, Quanquan; Cao, Zhaoliang; Hu, Lifa; Yang, Chengliang; Xuan, Li
2017-04-20
The pyramid wavefront sensor (PWFS) is a novel wavefront sensor with several inspiring advantages compared with Shack-Hartmann wavefront sensors. The PWFS uses four pupil images to calculate the local tilt of the incoming wavefront. Pupil images are conjugated with a telescope pupil so that each pixel in the pupil image is diffraction-limited by the telescope pupil diameter, thus the sensing error of the PWFS is much lower than that of the Shack-Hartmann sensor and is related to the extraction and alignment accuracy of pupil images. However, precise extraction of these images is difficult to conduct in practice. Aiming at improving the sensing accuracy, we analyzed the physical model of calibration of a PWFS and put forward an extraction algorithm. The process was verified via a closed-loop correction experiment. The results showed that the sensing accuracy of the PWFS increased after applying the calibration and extraction method.
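Once the four pupil images have been precisely extracted and aligned, the local tilt computation is a quad-cell-like combination of the four intensities at each pupil pixel. The sketch below uses one common sign/ordering convention; the actual signs and pupil ordering depend on the optical layout:

```python
import numpy as np

def pwfs_slopes(I1, I2, I3, I4, eps=1e-12):
    """Local wavefront slopes from the four registered PWFS pupil images.

    I1..I4: 2-D intensity arrays from the four pupils, same shape and
    aligned pixel-for-pixel -- which is why precise pupil extraction and
    alignment matter so much for sensing accuracy.
    """
    I1, I2, I3, I4 = (np.asarray(a, dtype=float) for a in (I1, I2, I3, I4))
    total = I1 + I2 + I3 + I4             # per-pixel normalization
    sx = ((I1 + I2) - (I3 + I4)) / (total + eps)
    sy = ((I1 + I3) - (I2 + I4)) / (total + eps)
    return sx, sy

# Flat wavefront: equal pupil intensities give zero slope everywhere
flat = np.full((4, 4), 100.0)
sx, sy = pwfs_slopes(flat, flat, flat, flat)
```

A sub-pixel extraction error shifts the four arrays relative to one another, so intensity differences appear that mimic wavefront slope, which is the error source the paper's calibration and extraction algorithm targets.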
Advances in multi-sensor data fusion: algorithms and applications.
Dong, Jiang; Zhuang, Dafang; Huang, Yaohuan; Fu, Jingying
2009-01-01
With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. This paper presents an overview of recent advances in multi-sensor satellite image fusion. First, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in the main application fields in remote sensing, including object identification, classification, change detection and maneuvering target tracking, are described. Both the advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) improvement of fusion algorithms; (2) development of "algorithm fusion" methods; (3) establishment of an automatic quality assessment scheme.
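As a minimal illustration of pixel-level fusion, two simple baselines can be sketched: weighted averaging, and per-pixel selection of the source with stronger local detail (a crude stand-in for the transform-domain coefficient-selection rules that the more sophisticated algorithms use). Both assume co-registered inputs:

```python
import numpy as np

def fuse_weighted(img_a, img_b, w=0.5):
    """Pixel-level weighted-average fusion of two co-registered images;
    w trades off the contribution of the two sensors."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    return w * a + (1.0 - w) * b

def fuse_max_detail(img_a, img_b):
    """Keep, per pixel, the value deviating more from its image mean --
    a toy selection rule favoring the source with stronger detail."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    da = np.abs(a - a.mean())
    db = np.abs(b - b.mean())
    return np.where(da >= db, a, b)

# Toy inputs: a has one bright detail pixel, b is uniform
a = np.array([[0.0, 10.0], [0.0, 0.0]])
b = np.array([[5.0, 5.0], [5.0, 5.0]])
avg = fuse_weighted(a, b)
sel = fuse_max_detail(a, b)
```

Averaging preserves radiometry but blurs detail, while selection rules preserve detail at the risk of artifacts, a trade-off the survey's algorithm comparisons revolve around.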
Camera sensor arrangement for crop/weed detection accuracy in agronomic images.
Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo
2013-04-02
In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images: (a) extrinsic parameters, related to the sensor's positioning on the tractor; and (b) intrinsic parameters, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination of outdoor environments is also an important factor affecting image accuracy. This paper focuses on two main issues, always with the goal of achieving the highest image accuracy in Precision Agriculture applications, and makes two main contributions: (a) a camera sensor arrangement to adjust the extrinsic parameters, and (b) the design of strategies for controlling adverse illumination effects.
Commercial CMOS image sensors as X-ray imagers and particle beam monitors
NASA Astrophysics Data System (ADS)
Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.
2015-01-01
CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with a 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels with a 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify data transfer and image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified Linux kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.
Sensor Detects Overheating Of Perishable Material
NASA Technical Reports Server (NTRS)
Dordick, Jonathan S.; Klibanov, Alexander
1990-01-01
Experimental temperature sensor changes color rapidly and irreversibly when temperature rises above predetermined level. Based on reactions of enzymes in paraffins, blended so mixture melts at temperature considered maximum safe value. Similar devices used to detect temperature abuse: whether refrigerated foods or medicines were exposed to excessive temperatures during shipment and storage. By viewing sensor, receiving clerk tells immediately whether product was maintained at safe temperatures and is acceptable.
Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors
Chaiken, Alison
2000-01-01
A portable, low-power, metallic object detector and method for providing an image of a detected metallic object. In one embodiment, the present portable low-power metallic object detector comprises an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of, and compiling image data of, a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors and is adapted to receive and process the image data compiled by the array. The embodiment also includes a display unit, coupled to the electronics portion, adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.
Acquisition and analysis of accelerometer data
NASA Astrophysics Data System (ADS)
Verges, Keith R.
1990-08-01
Acceleration data reduction must be undertaken with a complete understanding of the physical process, the means by which the data are acquired, and finally, the calculations necessary to put the data into a meaningful format. Discussed here are the acceleration sensor requirements dictated by the measurements desired. Sensor noise, dynamic range, and linearity will be determined from the physical parameters of the experiment. The digitizer requirements are discussed. Here the system from sensor to digital storage medium will be integrated, and rules of thumb for experiment duration, filter response, and number of bits are explained. Data reduction techniques after storage are also discussed. Time domain operations including decimating, digital filtering, and averaging are covered, as well as frequency domain methods, including windowing and the difference between power and amplitude spectra, and simple noise determination via coherence analysis. Finally, an example experiment using the Teledyne Geotech Model 44000 Seismometer to measure from 1 Hz down to 10^-6 Hz is discussed. The sensor, data acquisition system, and example spectra are presented.
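The time-domain and frequency-domain operations listed above can be illustrated with a minimal sketch; the sample rate, tone frequency and boxcar anti-alias average are invented for the demo, and a real pipeline would use an FFT and a proper decimation filter:

```python
import cmath
import math

def decimate(x, q):
    """Reduce the sample rate by q after a crude boxcar anti-alias average."""
    return [sum(x[i:i + q]) / q for i in range(0, len(x) - q + 1, q)]

def amplitude_spectrum(x):
    """Single-sided amplitude spectrum via a direct DFT (O(n^2); fine for a demo)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) * 2 / n
            for k in range(n // 2)]

fs = 64.0                                                      # assumed sample rate, Hz
x = [math.sin(2 * math.pi * 4.0 * t / fs) for t in range(64)]  # a 4 Hz tone
spec = amplitude_spectrum(x)
peak_bin = max(range(len(spec)), key=spec.__getitem__)         # strongest frequency bin
```

With one spectral bin per hertz here, the tone shows up as a unit-amplitude peak in bin 4.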
NASA Astrophysics Data System (ADS)
Kang, Ning
Nanomaterials have shown increasing application in the design and fabrication of functional devices such as energy storage devices and sensors. A key challenge is the ability to harness the nanostructures in terms of size, shape, composition and structure so that their unique nanoscale functional properties can be exploited. This dissertation describes our findings in the design, synthesis, and characterization of nanoparticles for applications on two important fronts. The first involves the investigation of nanoalloy catalysts and functional nanoparticles for energy storage devices, including Li-air and Li-ion batteries, aiming at increasing capacity and cycle performance. Part of this effort focuses on the design of bifunctional nanocatalysts, alloying a noble metal with a non-noble transition metal to improve the ORR and OER activity of Li-air batteries. By manipulating the composition and alloying structure of the catalysts, a synergistic effect has been demonstrated, substantiated by both experimental results and theoretical calculations of the charge/discharge process. The other part of the effort focuses on the modification of Si nanoparticles toward high-capacity anode materials. The modification involved dopant elements, carbon coating, and graphene composite formation to improve the ability of the nanoparticles to accommodate volume expansion. The second part focuses on the design, preparation and characterization of metal nanoparticles and nanocomposite materials for flexible sensing devices. The investigation focuses on the fabrication of a novel class of nanoparticle-nanofibrous membranes consisting of gold nanoparticles embedded in a multi-layered fibrous membrane as a tunable interfacial scaffold for flexible sweat sensors.
Sensing responses to different ionic species in aqueous solutions and to relative humidity changes in the environment were demonstrated, showing promising potential for flexible sensing devices in wearable sweat sensors. Moreover, printing techniques were also applied to fabricate conductive patterns as sensing electrodes. The results shed new light on the structural tuning of nanomaterials for applications in advanced energy storage devices and chemical sensor devices.
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how image deconvolution techniques can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
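The abstract does not name the deconvolution algorithm; one standard choice for astronomical imagery is Richardson-Lucy, which can be sketched in 1-D as follows (flat starting estimate, toy point source and blur kernel; purely illustrative, not the authors' pipeline):

```python
def convolve(x, kernel):
    """'Same'-size 1-D convolution with a centered kernel; edges are zero-padded."""
    n, m = len(x), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            t = i + j - half
            if 0 <= t < n:
                acc += x[t] * kernel[j]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=25):
    """Richardson-Lucy deconvolution (psf assumed normalized to sum to 1)."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)          # flat starting estimate
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]                       # toy blur kernel
truth = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]  # a faint point source
observed = convolve(truth, psf)               # blurred measurement
restored = richardson_lucy(observed, psf)     # sharpens back toward the point
```

The iteration concentrates the blurred flux back into the central pixel, which is how deconvolution raises the peak signal of a faint source above the noise.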
Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing
2012-01-01
In previous attempts to identify aquatic vegetation from remotely sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as the images used in model development, greatly limiting the application of CT. We have developed an effective normalization method that improves the robustness of CT models applied to images from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from the 2009 ground-truth data and images from the Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for images from different sensors differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using images normalized by 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from models developed for images from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent, floating-leaf and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies.
Our results suggest that 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
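A plausible reading of the 0.1% index-scaling idea, normalizing a spectral-index image by clipping at a small fraction of extreme pixel values, can be sketched as follows. The function name, parameters and toy data are hypothetical, not the authors' code:

```python
def index_scale(values, p=0.001):
    """Normalize a spectral-index image to [0, 1], clipping at the lower and
    upper p fraction of pixel values (p=0.001 corresponds to the '0.1%' rule).
    A hypothetical re-creation of the idea, not the authors' implementation."""
    ordered = sorted(values)
    lo = ordered[int(p * (len(ordered) - 1))]
    hi = ordered[int((1 - p) * (len(ordered) - 1))]
    span = (hi - lo) or 1.0       # avoid dividing by zero on flat images
    return [min(max((v - lo) / span, 0.0), 1.0) for v in values]

# an index image with one extreme outlier; clipping keeps the stretch sensible
pixels = list(range(100)) + [10_000]
scaled = index_scale(pixels, p=0.01)
```

Because the stretch anchors ignore the extreme tails, the same normalized range is obtained for images whose raw index values differ by sensor-specific gains and offsets, which is what lets thresholds transfer between sensors.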
A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.
Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah
2011-01-01
This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials embedded in walls or floors. The operation of the system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D Hall sensor array. Magnetic imaging of the flux distribution is performed by a signal processing unit, which displays real-time images on a netbook. Signal processing software was developed for 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are then used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens, such as square, round and triangular shapes, are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, magnetic images of actual ferromagnetic objects are also presented to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging to identify various ferromagnetic materials.
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors, which are difficult to fit into small camera systems because of their high cost and bulk. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for phase-difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw-format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using hierarchical block matching for disparity measurement, and (iv) image matting to fill holes and generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
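Step (iii) can be illustrated with plain (non-hierarchical) block matching on a single scanline; the scanlines, block size and disparity range below are toy values, not the authors' implementation:

```python
def disparity_scanline(left, right, block=3, max_disp=4):
    """Per-pixel disparity along one scanline by minimizing the sum of absolute
    differences (SAD) over a small block. Plain block matching; a hierarchical
    variant would repeat this coarse-to-fine."""
    half = block // 2
    n = len(left)
    disp = [0] * n
    for i in range(half, n - half):
        best_sad, best_d = float("inf"), 0
        for d in range(max_disp + 1):
            if i - half - d < 0:
                break                        # candidate window leaves the image
            sad = sum(abs(left[i + k] - right[i + k - d])
                      for k in range(-half, half + 1))
            if sad < best_sad:
                best_sad, best_d = sad, d
        disp[i] = best_d
    return disp

# toy scanlines: the right view is the left one shifted by 2 pixels
left = [0, 0, 9, 5, 7, 0, 0, 0]
right = [9, 5, 7, 0, 0, 0, 0, 0]
disp = disparity_scanline(left, right)
```

Pixels covering the textured patch recover the true shift of 2; depth then follows from disparity via the baseline and focal length.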
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.
2010-04-01
EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real time from visible near-infrared (VNIR) and long-wave infrared (LWIR) imaging sensors and was developed under the ONR-funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL); its goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad-area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager, providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted on a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details are presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.
A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
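The control idea, driving exposure time until an image statistic reaches a target, can be sketched with a toy feedback loop. The derivative term is omitted, and the gains, setpoint and linear sensor model are invented for illustration; the paper's adaptive PID acts on saturation level and entropy, not on this toy plant:

```python
class PIController:
    """Toy PI controller (derivative term omitted) nudging exposure time
    until the mean image level reaches a setpoint. Gains are tuned for the
    invented linear plant below, not taken from the paper."""
    def __init__(self, kp=20.0, ki=20.0, setpoint=0.5):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured):
        error = self.setpoint - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral  # exposure increment

def mean_level(exposure_ms):
    """Invented sensor model: mean image level grows linearly with exposure."""
    return min(1.0, 0.05 * exposure_ms)

exposure = 2.0              # start under-exposed
ctrl = PIController()
for _ in range(10):
    exposure += ctrl.update(mean_level(exposure))
# exposure settles near 10 ms, where mean_level hits the 0.5 setpoint
```

The handful of frames needed for the loop to settle mirrors the "less than eight frames" adaptation the abstract reports.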
A Study of NetCDF as an Approach for High Performance Medical Image Storage
NASA Astrophysics Data System (ADS)
Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.
2012-02-01
The spread of telemedicine systems increases every day, and systems and PACS based on DICOM images have become common. This rise reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article presents a study of the NetCDF data format as a basic platform for the storage of DICOM images. The case study compares an ordinary database, HDF5 and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that retrieving large images from NetCDF has higher latency than the other two methods. In addition, the latency is proportional to file size, which is a drawback for a telemedicine system characterized by a large number of large image files.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 (520-element) array of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signal into an electrical current. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16-element) array of oxygen (O2) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores ([Ru(dpp)3]2+ and [Ru(bpy)3]2+) in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype demonstrates a low-cost, miniaturized luminescence multisensor system.
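The mixed-luminophore trick can be modeled with the standard Stern-Volmer quenching relation for oxygen-sensitive dyes; the Ksv constants and mix fractions below are invented for illustration, not the paper's calibration values:

```python
def quenched_intensity(o2, ksv):
    """Stern-Volmer quenching: I/I0 = 1 / (1 + Ksv * [O2])."""
    return 1.0 / (1.0 + ksv * o2)

def element_response(o2, frac_a, ksv_a=0.6, ksv_b=0.1):
    """Normalized intensity of one sensor element holding a mix of two
    luminophores: a fraction frac_a of strongly quenched dye A and the rest
    weakly quenched dye B. The Ksv values here are invented."""
    return (frac_a * quenched_intensity(o2, ksv_a)
            + (1.0 - frac_a) * quenched_intensity(o2, ksv_b))

# four rows with different dye mixes give four distinct O2 sensitivities
rows = [element_response(o2=1.0, frac_a=f) for f in (0.0, 0.25, 0.5, 1.0)]
```

At a fixed O2 level, rows richer in the strongly quenched dye are dimmer, which is exactly the graded per-row sensitivity the prototype exploits.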
Fingerprint enhancement using a multispectral sensor
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Nixon, Kristin A.
2005-03-01
The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-846] Certain CMOS Image Sensors and..., the sale for importation, and the sale within the United States after importation of certain CMOS image sensors and products containing same by reason of infringement of certain claims of U.S. Patent No...
NASA Astrophysics Data System (ADS)
Masuzawa, Tomoaki; Neo, Yoichiro; Mimura, Hidenori; Okamoto, Tamotsu; Nagao, Masayoshi; Akiyoshi, Masafumi; Sato, Nobuhiro; Takagi, Ikuji; Tsuji, Hiroshi; Gotoh, Yasuhito
2016-10-01
A growing demand for incident detection has been recognized since the Great East Japan Earthquake and the subsequent accidents at the Fukushima nuclear power plant in 2011. Radiation-tolerant image sensors are powerful tools for collecting crucial information in the initial stages of such incidents. However, semiconductor-based image sensors such as CMOS and CCD devices have limited tolerance to radiation exposure, while the image sensors used in nuclear facilities are conventional vacuum tubes with thermal cathodes, which are large and have high power consumption. In this study, we propose a compact image sensor composed of a CdTe-based photodiode and a matrix-driven Spindt-type electron beam source called a field emitter array (FEA). The basic principle of FEA-based image sensors is similar to that of conventional Vidicon-type camera tubes, but the thermal-cathode electron source is replaced by an FEA. The use of a field emitter as the electron source should enable significant size reduction while maintaining high radiation tolerance. Current research on radiation-tolerant FEAs and the development of CdTe-based photoconductive films are presented.
Holographic leaky-wave metasurfaces for dual-sensor imaging.
Li, Yun Bo; Li, Lian Lin; Cai, Ben Geng; Cheng, Qiang; Cui, Tie Jun
2015-12-10
Metasurfaces have great potential for developing new types of imaging systems owing to their ability to control electromagnetic waves. Here, we propose a new method for dual-sensor imaging based on cross-like holographic leaky-wave metasurfaces composed of hybrid isotropic and anisotropic surface-impedance textures. The holographic leaky-wave radiation is generated by impedance modulation of the surface waves excited at the sensor ports. For one sensor, the main leaky-wave beam is scanned by frequency in one spatial dimension, while frequency scanning in the orthogonal dimension is accomplished by the other sensor. Thus, for a probed object, the imaging plane can be illuminated adequately to obtain the two-dimensional backward scattered fields at the dual sensors for reconstructing the object. The correlation between beams at different frequencies is very low, because the beams are scanned deterministically by frequency rather than radiated randomly, and such multiple illuminations with low correlation are well suited to multi-mode imaging with high resolution and anti-noise performance. Good reconstruction results are given to validate the proposed imaging method.
A bio-image sensor for simultaneous detection of multi-neurotransmitters.
Lee, You-Na; Okumura, Koichi; Horio, Tomoko; Iwata, Tatsuya; Takahashi, Kazuhiro; Hattori, Toshiaki; Sawada, Kazuaki
2018-03-01
We report here a new bio-image sensor for simultaneous detection of the spatial and temporal distribution of multiple neurotransmitters. It consists of multiple enzyme-immobilized membranes on a 128 × 128 pixel array with read-out circuitry. Apyrase and acetylcholinesterase (AChE), as selective elements, are used to recognize adenosine 5'-triphosphate (ATP) and acetylcholine (ACh), respectively. To enhance the spatial resolution, hydrogen ion (H+) diffusion barrier layers are deposited on top of the bio-image sensor, and their blocking capability is demonstrated. The results are used to design the spacing between enzyme-immobilized pixels and the null H+ sensor to minimize undesired signal overlap caused by H+ diffusion. Using this bio-image sensor, we can obtain H+-diffusion-independent imaging of concentration gradients of ATP and ACh in real time. The sensing characteristics, such as sensitivity and limit of detection, are determined experimentally. With the proposed bio-image sensor, the activities of various neurochemicals could be monitored in a customizable way by using different proton-consuming or proton-generating enzymes.
MTF evaluation of white pixel sensors
NASA Astrophysics Data System (ADS)
Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio
2015-01-01
We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focused on the sensors' resolving powers, which we measured in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of results. We demonstrate the approach using two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in the chroma channels, when compared to the traditional Bayer sensor.
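The comparison above rests on estimating a modulation transfer function from measured sensor data. As a minimal illustrative sketch (not the authors' measurement pipeline, which uses a custom test chart), an MTF can be computed as the normalized Fourier magnitude of a line spread function:

```python
import numpy as np

def mtf_from_lsf(lsf, sample_pitch_mm=1.0):
    """Normalized MTF as the Fourier magnitude of a line spread function (LSF)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                 # normalize to unit area
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                    # MTF(0) = 1 by definition
    freqs = np.fft.rfftfreq(len(lsf), d=sample_pitch_mm)  # cycles per mm
    return freqs, mtf

# Example: a wider (more blurred) LSF gives a lower MTF at every nonzero frequency
x = np.arange(-32, 32)
_, mtf_sharp = mtf_from_lsf(np.exp(-x**2 / (2 * 1.5**2)))
_, mtf_blurry = mtf_from_lsf(np.exp(-x**2 / (2 * 3.0**2)))
```

In practice the LSF would be derived from a slanted-edge or chart measurement; the Gaussian profiles here are only synthetic stand-ins.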
Optical flows method for lightweight agile remote sensor design and instrumentation
NASA Astrophysics Data System (ADS)
Wang, Chong; Xing, Fei; Wang, Hongjian; You, Zheng
2013-08-01
Lightweight agile remote sensors have become one of the most important classes of payloads and are widely used in space reconnaissance and resource surveys. These imaging sensors are designed to obtain imagery of high spatial, temporal, and spectral resolution. Key techniques in their instrumentation include flexible maneuvering, advanced imaging-control algorithms, and integrated measurement techniques, which are closely coupled and can even act as bottlenecks for one another. These mutually constraining problems must therefore be solved and optimized together. Optical flow is the critical model for fully representing both information transfer and radiant energy flow during dynamic imaging. For agile sensors, especially those with a wide field of view, the imaging optical flow may distort and deviate severely during large-angle attitude-maneuver imaging. This phenomenon is mainly attributed to the geometry of the three-dimensional Earth surface and to coupling effects from the complicated relative motion between sensor and scene. Under these circumstances the velocity field is distributed nonlinearly, and the imagery may be badly smeared, or its geometric structure altered, if the image-velocity matching errors are not eliminated. In this paper, a precise imaging optical flow model is established for agile remote sensors, in which the evolution of the optical flow is factored into two terms: one due to translational motion and one due to image shape change. On this basis, agile remote sensor instrumentation is investigated. The main techniques concerning optical flow modeling include integrated design with lightweight star sensors and micro inertial measurement units with corresponding data fusion, focal-plane layout and control, and post-processing of agile remote sensor imagery.
Experiments show that the optical flow analysis method effectively removes the limitations on the performance indexes and has been successfully applied to integrated system design. Finally, a principle prototype of an agile remote sensor designed by this method is discussed.
Analysis of simulated image sequences from sensors for restricted-visibility operations
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar
1991-01-01
A real-time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft approaching for landing was simulated using this model. Thirty frames from this sequence of 200 x 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.
Fully wireless pressure sensor based on endoscopy images
NASA Astrophysics Data System (ADS)
Maeda, Yusaku; Mori, Hirohito; Nakagawa, Tomoaki; Takao, Hidekuni
2018-04-01
In this paper, the result of developing a fully wireless pressure sensor based on endoscopy images for endoscopic surgery is reported for the first time. The sensor device has a structural color produced by a nm-scale narrow gap, and the gap changes with air pressure. The structural color of the sensor is read from camera images, so pressure detection can be realized with existing endoscope configurations only. The sensor allows the internal air pressure of the human body to be measured during flexible-endoscope operation. Air pressure monitoring has two important purposes. The first is to quantitatively measure tumor size under constant air pressure for treatment selection. The second is to prevent endangering the patient through excessive air delivery. The developed sensor was evaluated, and the detection principle based only on endoscopy images has been successfully demonstrated.
CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.
Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V
2011-04-01
In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024-element) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and pixel address/digital control/signal integration circuitry are also implemented on-chip. The CMOS imager data are read out as a serially coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9-element) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different, specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and 1.8-V dc power supply.
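The mixed-luminophore sensitivity tuning described above can be illustrated with the Stern-Volmer quenching model. This is a simplified sketch of the underlying photophysics, not the authors' calibration code; the function names, the two-site mixing form, and all parameter values are illustrative assumptions:

```python
def stern_volmer_intensity(i0, ksv, o2):
    """Luminescence under oxygen quenching (Stern-Volmer): I = I0 / (1 + Ksv*[O2])."""
    return i0 / (1.0 + ksv * o2)

def mixed_element_intensity(i0, frac_a, ksv_a, ksv_b, o2):
    """Intensity of an element doped with a mix of two luminophores A and B;
    the mixing fraction frac_a tunes the element's overall O2 sensitivity
    (simplified two-site model; parameter values are hypothetical)."""
    return i0 * (frac_a / (1.0 + ksv_a * o2)
                 + (1.0 - frac_a) / (1.0 + ksv_b * o2))

# A pure-A element reduces to the single-luminophore Stern-Volmer response
i_pure = stern_volmer_intensity(100.0, 0.5, 1.0)
i_mix = mixed_element_intensity(100.0, 1.0, 0.5, 2.0, 1.0)
```

Varying `frac_a` across the three elements of a column yields three distinct calibration curves against O2 concentration, which is the multiple-sensitivity property the abstract describes.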
Satellite Testbed for Evaluating Cryogenic-Liquid Behavior in Microgravity
NASA Technical Reports Server (NTRS)
Putman, Philip Travis (Inventor)
2017-01-01
Provided is a testbed for conducting an experiment on a substance in a cryogenic liquid state in a microgravity environment. The testbed includes a frame with rectangular nominal dimensions, and a source section including a supply of the substance to be evaluated in the cryogenic liquid state. An experiment section includes an experiment vessel in fluid communication with the storage section to receive the substance from the storage section and condense the substance into the cryogenic liquid state. A sensor is adapted to sense a property of the substance in the cryogenic liquid state in the experiment vessel as part of the experiment. A bus section includes a controller configured to control delivery of the substance from the storage section to the experiment vessel, and receive property data indicative of the property sensed by the sensor for subsequent evaluation on Earth.
Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun
2016-01-01
The programmable and digital metamaterials or metasurfaces presented recently have huge potential for designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single-frequency imaging at microwave frequencies. Compared with existing single-sensor imagers composed of active spatial modulators whose units are controlled independently, we introduce a randomly programmable metasurface to generate the modulator masks, in which rows and columns are controlled simultaneously so that the complexity and cost of the imaging system are reduced drastically. Unlike single-sensor approaches that rely on frequency agility, the proposed imaging system uses variable modulation at a single frequency, which avoids object dispersion. To realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which makes it convenient to change the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate random codes by computer to achieve different transmission patterns, which support enough measurement modes to solve the inverse-scattering problem in single-sensor imaging. Simple experimental results at microwave frequencies validate the new single-sensor, single-frequency imaging system. PMID:27025907
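The reconstruction step described above amounts to inverting a linear measurement model in which each random coding pattern contributes one row of the sensing matrix. A minimal noise-free sketch (hypothetical sizes and a plain least-squares solver; the paper's actual inverse-scattering reconstruction is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64    # 8x8 scene, flattened (hypothetical size)
n_masks = 128    # number of random 2-bit coding patterns (> n_pixels)

# Each measurement mode: one random 4-level (2-bit) transmission mask
masks = rng.integers(0, 4, size=(n_masks, n_pixels)).astype(float)

# Hypothetical sparse scene and the simulated single-sensor measurements
scene = np.zeros(n_pixels)
scene[[10, 27, 45]] = [1.0, 0.5, 0.8]
measurements = masks @ scene

# With enough diverse masks, least squares recovers the scene (noise-free case)
recovered, *_ = np.linalg.lstsq(masks, measurements, rcond=None)
```

The diversity of the random masks is what makes the system well conditioned; with correlated masks the inversion would fail, which is why the low correlation between modes matters.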
Developing a structural health monitoring system for nuclear dry cask storage canister
NASA Astrophysics Data System (ADS)
Sun, Xiaoyi; Lin, Bin; Bao, Jingjing; Giurgiutiu, Victor; Knight, Travis; Lam, Poh-Sang; Yu, Lingyu
2015-03-01
Interim storage of spent nuclear fuel from reactor sites has gained additional importance and urgency for resolving waste-management-related technical issues. In total, more than 1,482 dry cask storage systems (DCSS) are in use at US plants, storing 57,807 fuel assemblies. Nondestructive material condition monitoring is urgently needed and must be integrated into the fuel cycle to quantify the "state of health" and, more importantly, to guarantee the safe operation of radioactive waste storage systems (RWSS) during their extended usage period. A state-of-the-art nuclear structural health monitoring (N-SHM) system is being developed based on in-situ sensing technologies that monitor material degradation and aging for nuclear spent fuel DCSS and similar structures. The N-SHM technology uses permanently installed low-profile piezoelectric wafer sensors to perform long-term health monitoring through a combined electromechanical impedance (EMIS), acoustic emission (AE), and guided ultrasonic wave (GUW) approach, called "multimode sensing", conducted by the same network of installed sensors activated in different ways. The system will detect AE events resulting from cracking (the case studied in this project) and evaluate the damage evolution; when significant AE is detected, the sensor network will switch to the GUW mode to perform damage localization and quantification, and probe damage-prone "hot spots" for material degradation evaluation using the EMIS approach. The N-SHM is expected to eventually provide a systematic methodology for assessing and monitoring nuclear waste storage systems without incurring human radiation exposure.
Design of multi-function sensor detection system in coal mine based on ARM
NASA Astrophysics Data System (ADS)
Ge, Yan-Xiang; Zhang, Quan-Zhu; Deng, Yong-Hong
2017-06-01
In a traditional coal mine sensor installation, the number and types of channels at specific measurement points may exceed or fall short of the number of monitoring points, resulting in wasted resources or unmet application requirements. To allow the sensor to adapt to the needs of different settings and to reduce cost, a multi-functional intelligent sensor was designed and realized around multiple sensors and an ARM11 S3C6410 processor, integrating dust, gas, temperature, and humidity sensing together with storage, display, voice, picture, data query, alarm, and other functions.
CMOS image sensor-based immunodetection by refractive-index change.
Devadhasan, Jasmine P; Kim, Sanghyo
2012-01-01
A complementary metal oxide semiconductor (CMOS) image sensor is an intriguing technology for the development of novel biosensors. Indeed, the mechanism by which a CMOS image sensor detects antigen-antibody (Ag-Ab) interactions at the nanoscale has so far been ambiguous, and understanding it requires more extensive research on the way to point-of-care diagnostic devices. This work demonstrates CMOS image sensor-based analysis of Ag-Ab interactions of cardiovascular disease markers, such as C-reactive protein (CRP) and troponin I, on indium nanoparticle (InNP) substrates through simple photon-count variation. The developed sensor can detect proteins at concentrations as low as fg/mL under ordinary room light. Possible mechanisms, such as dielectric-constant and refractive-index changes, have been studied and proposed. A dramatic change in refractive index after protein adsorption on the InNP substrate was observed to be the predominant factor in CMOS image sensor-based immunoassay.
Liu, Xingjian; Liang, Junbin; Li, Ran; Ma, Wenpeng; Qi, Chuanda
2018-01-01
A novel network paradigm of mobile edge computing, namely TMWSNs (two-tiered mobile wireless sensor networks), has recently been proposed for its high scalability and robustness. However, only a few works have considered the security of TMWSNs. In fact, the storage nodes, located at the upper layer of TMWSNs, are prone to attack by adversaries because they play a key role in bridging the sensor nodes and the sink, which may lead to the disclosure of all data stored on them as well as other potentially devastating results. In this paper, we make a comparative study of two typical schemes, EVTopk and VTMSN, which have recently been proposed for securing Top-k queries in TMWSNs, through both theoretical analysis and extensive simulations, aiming to identify their weaknesses and possible improvements. We find that both schemes raise communication costs unsatisfactorily. Specifically, the extra communication cost of transmitting the proof information uses up more than 40% of the total communication cost between the sensor nodes and the storage nodes, and 80% of that between the storage nodes and the sink. We discuss the underlying reasons and present our suggestions, hoping to inspire further research on this subject. PMID:29543745
High rate science data handling on Space Station Freedom
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Masline, Richard C.
1990-01-01
A study by NASA's User Information System Working Group for Space Station Freedom (SSF) has determined that the proposed onboard Data Management System, as initially configured, will be incapable of handling the data-generation rates typical of numerous scientific sensor payloads; many of these generate data at rates in excess of 10 Mbps, and there are at least four cases of rates in excess of 300 Mbps. The SSF Working Group has accordingly suggested an alternative conceptual architecture based on technology expected to achieve space-qualified status by 1995. The architecture encompasses recorders with rapid data-ingest capabilities and massive storage capabilities, optical delay lines allowing the recording of only the phenomena of interest, and data flow-compressing image processors.
Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications
Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun
2010-01-01
In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities. PMID:28879978
Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
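The software conversion of circular omnidirectional images mentioned above can be sketched as a polar-to-panorama unwrapping. The following is a simplified nearest-neighbor version under the equidistant-mirror assumption (elevation angle proportional to image radius); the function name and parameters are illustrative, not the authors' implementation:

```python
import numpy as np

def unwrap_omni(circular_img, center, r_min, r_max, out_w=360, out_h=60):
    """Unwrap a circular omnidirectional image into a panoramic strip by
    nearest-neighbor sampling along concentric circles."""
    cy, cx = center
    h, w = circular_img.shape
    out = np.zeros((out_h, out_w), dtype=circular_img.dtype)
    for row in range(out_h):
        # Each output row samples one radius between r_min and r_max
        r = r_min + (r_max - r_min) * row / (out_h - 1)
        for col in range(out_w):
            theta = 2.0 * np.pi * col / out_w  # azimuth, one column per degree
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[row, col] = circular_img[y, x]
    return out

# Example on a synthetic image (hypothetical geometry)
img = np.arange(100 * 100, dtype=float).reshape(100, 100)
pano = unwrap_omni(img, (50, 50), 10, 40)
```

A real-time implementation would precompute the (y, x) lookup table once and use bilinear interpolation, but the geometric mapping is the same.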
Principles and Applications of Imaging Radar, Manual of Remote Sensing, 3rd Edition, Volume 2
NASA Astrophysics Data System (ADS)
Moran, M. Susan
Aerial photographs and digital images from orbiting optical scanners are a daily source of information for the general public through newspapers, television, magazines, and posters. Such images are just as prevalent in scientific journal literature. In the last 6 months, more than half of the weekly issues of Eos published an image acquired by a remote digital sensor. As a result, most geoscientists are familiar with the characteristics and even the acronyms of the current satellites and their optical sensors, common detector filters, and image presentation. In many cases, this familiarity has bred contempt. This is so because the limitations of optical sensors (imaging in the visible and infrared portions of the electromagnetic spectrum) can be quite formidable. Images of the surface cannot be acquired through clouds, and image quality is impaired with low-light conditions (such as at polar regions), atmospheric scattering and absorption, and variations in sun/sensor/surface geometry.
Positron emission imaging device and method of using the same
Bingham, Philip R.; Mullens, James Allen
2013-01-15
An imaging system and method of imaging are disclosed. The imaging system can include an external radiation source producing pairs of substantially simultaneous radiation emissions, a picturization emission and a verification emission, at an emission angle. The imaging system can also include a plurality of picturization sensors and at least one verification sensor for detecting the picturization and verification emissions, respectively. The imaging system also includes an object stage arranged such that a picturization emission can pass through an object supported on the object stage before being detected by one of the plurality of picturization sensors. A coincidence system and a reconstruction system can also be included. The coincidence system can receive information from the picturization and verification sensors and determine whether a detected picturization emission is direct radiation or scattered radiation. The reconstruction system can produce a multi-dimensional representation of an object imaged with the imaging system.
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PEs transfer, store, and process raw image data in a SIMD fashion with their own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. Together, the PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We use a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.
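The pixel-parallel SIMD style of the PE array can be illustrated in software: every PE executes the same instruction on its local data, so a neighborhood filter becomes a short sequence of whole-array shift-and-add steps. A hypothetical sketch (this models the programming style, not the sensor's actual instruction set):

```python
import numpy as np

def simd_box_blur(frame):
    """3x3 box filter expressed as whole-array shifts and adds, mimicking a
    SIMD PE array where every pixel's PE runs the same broadcast instruction."""
    padded = np.pad(frame.astype(float), 1, mode='edge')
    acc = np.zeros_like(frame, dtype=float)
    # Each (dy, dx) step corresponds to one instruction broadcast to all PEs:
    # "add the neighbor at offset (dy, dx) into the local accumulator"
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += padded[1 + dy : 1 + dy + frame.shape[0],
                          1 + dx : 1 + dx + frame.shape[1]]
    return acc / 9.0

# A constant frame is unchanged by a box filter with edge padding
flat = simd_box_blur(np.full((8, 8), 5.0))
```

Nine broadcast instructions filter the entire frame regardless of resolution, which is why such arrays reach high frame rates with few instruction cycles.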
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of the cluster and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, many computing nodes process image storage blocks and pyramids in the background to improve the efficiency of image reading and application, meeting the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images, through building an actual Hadoop service system.
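The tile-and-pyramid organization described above can be sketched as follows. This is an illustrative in-memory version with hypothetical tile sizes; the actual system would store tiles as HDFS blocks and build pyramid levels with MapReduce jobs:

```python
import numpy as np

def halve(img):
    """Downsample one pyramid level by 2x2 block averaging
    (drops a trailing odd row/column if present)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(image, tile=256):
    """Cut each resolution level into fixed-size tiles (e.g., one tile per
    distributed-storage block), halving until the image fits in one tile."""
    levels = []
    img = np.asarray(image, dtype=float)
    while True:
        h, w = img.shape
        tiles = {(ty // tile, tx // tile): img[ty:ty + tile, tx:tx + tile]
                 for ty in range(0, h, tile) for tx in range(0, w, tile)}
        levels.append(tiles)
        if max(h, w) <= tile:
            break
        img = halve(img)
    return levels

levels = build_pyramid(np.zeros((1024, 1024)))
```

Tiling makes reads independent of image size: a viewer requesting one region at one zoom level fetches only the few tiles covering it, which is what allows many users to read the same scene concurrently.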
NASA Technical Reports Server (NTRS)
2012-01-01
Topics covered include: Instrument Suite for Vertical Characterization of the Ionosphere-Thermosphere System; Terahertz Radiation Heterodyne Detector Using Two-Dimensional Electron Gas in a GaN Heterostructure; Pattern Recognition Algorithm for High-Sensitivity Odorant Detection in Unknown Environments; Determining Performance Acceptability of Electrochemical Oxygen Sensors; Versatile Controller for Infrared Lamp and Heater Arrays; High-Speed Scanning Interferometer Using CMOS Image Sensor and FPGA Based on Multifrequency Phase-Tracking Detection; Ultra-Low-Power MEMS Selective Gas Sensors; Compact Receiver Front Ends for Submillimeter-Wave Applications; Dynamically Reconfigurable Systolic Array Accelerator; Blocking Losses With a Photon Counter; Motion-Capture-Enabled Software for Gestural Control of 3D Mod; Orbit Software Suite; CoNNeCT Baseband Processor Module Boot Code SoftWare (BCSW); Trajectory Software With Upper Atmosphere Model; ALSSAT Version 6.0; Employing a Grinding Technology to Assess the Microbial Density for Encapsulated Organisms; Demonstration of Minimally Machined Honeycomb Silicon Carbide Mirrors; Polyimide Aerogel Thin Films; Nanoengineered Thermal Materials Based on Carbon Nanotube Array Composites; Composite Laminate With Coefficient of Thermal Expansion Matching D263 Glass; Robust Tensioned Kevlar Suspension Design; Focal Plane Alignment Utilizing Optical CMM; Purifying, Separating, and Concentrating Cells From a Sample Low in Biomass; Virtual Ultrasound Guidance for Inexperienced Operators; Beat-to-Beat Blood Pressure Monitor; Non-Contact Conductivity Measurement for Automated Sample Processing Systems; An MSK Radar Waveform; Telescope Alignment From Sparsely Sampled Wavefront Measurements Over Pupil Subapertures; Method to Remove Particulate Matter from Dusty Gases at Low Pressures; Terahertz Quantum Cascade Laser With Efficient Coupling and Beam Profile; Measurement Via Optical Near-Nulling and Subaperture Stitching; 885-nm Pumped 
Ceramic Nd:YAG Master Oscillator Power Amplifier Laser System; Airborne Hyperspectral Imaging System; Heat Shield Employing Cured Thermal Protection Material Blocks Bonded in a Large-Cell Honeycomb Matrix; and Asymmetric Supercapacitor for Long-Duration Power Storage.
NASA Tech Briefs, January 2003
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Optoelectronic Tool Adds Scale Marks to Photographic Images; Compact Interconnection Networks Based on Quantum Dots; Laterally Coupled Quantum-Dot Distributed-Feedback Lasers; Bit-Serial Adder Based on Quantum Dots; Stabilized Fiber-Optic Distribution of Reference Frequency; Delay/Doppler-Mapping GPS-Reflection Remote-Sensing System; Ladar System Identifies Obstacles Partly Hidden by Grass; Survivable Failure Data Recorders for Spacecraft; Fiber-Optic Ammonia Sensors; Silicon Membrane Mirrors with Electrostatic Shape Actuators; Nanoscale Hot-Wire Probes for Boundary-Layer Flows; Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing; Efficient Coupling of Lasers to Telescopes with Obscuration; Aligning Three Off-Axis Mirrors with Help of a DOE; Calibrating Laser Gas Measurements by Use of Natural CO2; Laser Ranging Simulation Program; Micro-Ball-Lens Optical Switch Driven by SMA Actuator; Evaluation of Charge Storage and Decay in Spacecraft Insulators; Alkaline Capacitors Based on Nitride Nanoparticles; Low-EC-Content Electrolytes for Low-Temperature Li-Ion Cells; Software for a GPS-Reflection Remote-Sensing System; Software for Building Models of 3D Objects via the Internet; "Virtual Cockpit Window" for a Windowless Aerospacecraft; CLARAty Functional-Layer Software; Java Library for Input and Output of Image Data and Metadata; Software for Estimating Costs of Testing Rocket Engines; Energy-Absorbing, Lightweight Wheels; Viscoelastic Vibration Dampers for Turbomachine Blades; Soft Landing of Spacecraft on Energy-Absorbing Self-Deployable Cushions; Pneumatically Actuated Miniature Peristaltic Vacuum Pumps; Miniature Gas-Turbine Power Generator; Pressure-Sensor Assembly Technique; Wafer-Level Membrane-Transfer Process for Fabricating MEMS; A Reactive-Ion Etch for Patterning Piezoelectric Thin Film; Wavelet-Based Real-Time Diagnosis of Complex Systems; Quantum Search in Hilbert Space; Analytic Method for Computing Instrument 
Pointing Jitter; and Semiselective Optoelectronic Sensors for Monitoring Microbes.
Automated assembly of camera modules using active alignment with up to six degrees of freedom
NASA Astrophysics Data System (ADS)
Bräuniger, K.; Stickler, D.; Winters, D.; Volmer, C.; Jahn, M.; Krey, S.
2014-03-01
With the upcoming Ultra High Definition (UHD) cameras, accurate alignment of the optical system with respect to the UHD image sensor becomes increasingly important. Even with a perfect objective lens, image quality deteriorates when the lens is poorly aligned to the sensor. The Modulation Transfer Function (MTF) is the most widely accepted test for evaluating imaging quality. The first part describes how the alignment errors that lead to low imaging quality can be measured. Collimators with crosshairs at defined field positions, or a test chart, are used as object generators for infinite-finite or finite-finite conjugation, respectively. The process of accurately aligning the image sensor to the optical system is then described. The focus position, shift, tilt, and rotation of the image sensor are automatically corrected to obtain an optimized MTF for all field positions, including the center. The software algorithm that grabs images, calculates the MTF, and adjusts the image sensor in six degrees of freedom within less than 30 seconds per UHD camera module is described. The resulting accuracy of the image sensor rotation is better than 2 arcmin, and the positional alignment accuracy in x, y, z is better than 2 μm. Finally, the process of gluing and UV curing, and how it is managed within the integrated process, is described.
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is very important because it is useful for search and rescue, detection of illegal immigration, harbor security, and so on. Furthermore, range estimation is critical for precisely detecting a target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night without moonlight. Generally, before detecting the target it is necessary to sweep the gate delay time until the target is captured. The range-gated imaging sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, its coarse range is obtained from the imaging geometry and projective transform. Then, the sensor switches to gate-viewing mode: applying microsecond laser pulses and a matched sensor gate width, the range of targets is obtained from at least two consecutive images with trapezoid-shaped range-intensity profiles. Based on the first step, the rough range is calculated and the delay time at which the target is detected is fixed quickly. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with a reduction of imaging data processing. With these two steps, the distance between object and sensor can be obtained quickly.
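The two-step ranging described above combines a coarse range from the gate delay (round-trip time of light) with sub-gate refinement from the overlapping trapezoid-shaped range-intensity profiles of two consecutive gate images. A simplified sketch, assuming a linear intensity-versus-depth model in the overlap zone (the paper's actual profile handling may differ):

```python
C = 3.0e8  # speed of light in m/s

def gate_range(delay_s):
    """Coarse target range from the sensor's gate delay:
    light makes a round trip, so R = c * t / 2."""
    return C * delay_s / 2.0

def refined_range(r_near, r_far, i1, i2):
    """Sub-gate refinement from two consecutive gate images whose
    trapezoid-shaped range-intensity profiles overlap: within the overlap
    zone the intensity ratio is assumed to vary linearly with depth."""
    frac = i2 / (i1 + i2)  # 0 at the near edge, 1 at the far edge
    return r_near + frac * (r_far - r_near)

# Example: a 1 microsecond gate delay corresponds to a 150 m coarse range
coarse = gate_range(1e-6)
# Equal intensities in both gate images place the target mid-zone
fine = refined_range(100.0, 160.0, 1.0, 1.0)
```

The intensity ratio is what delivers depth resolution finer than the gate width itself, which is the super-resolution property the abstract refers to.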
NASA Technical Reports Server (NTRS)
Vanderspiegel, Jan
1994-01-01
This report surveys different technologies and approaches for realizing sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries that implements certain transformations for image warping. This work was inspired by the research of Dr. Juday at NASA Johnson Space Center on image warping. The study looked into different solid-state technologies for fabricating image sensors. Among the available technologies, CMOS is preferred over CCD technology: CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower-cost solution. By using an architecture with row and column decoders, one has the added flexibility of addressing the pixels at random, or reading out only part of the image.
Konduru, Tharun; Rains, Glen C; Li, Changying
2015-01-12
A gas sensor array consisting of seven metal oxide semiconductor (MOS) sensors, sensitive to a wide range of volatile organic compounds, was developed to detect rotten onions during storage. The MOS sensors were enclosed in a specially designed Teflon chamber equipped with a gas delivery system to pump volatiles from the onion samples into the chamber. The electronic circuit mainly comprised a microcontroller, a non-volatile memory chip, a trickle-charge real-time clock chip, a serial communication chip, and a parallel LCD panel. User preferences are communicated to the on-board microcontroller through a graphical user interface developed using LabVIEW. The gas sensor array was characterized, and its discrimination potential tested, by exposing it to three concentrations each of acetone (a ketone), acetonitrile (a nitrile), ethyl acetate (an ester), and ethanol (an alcohol). The array could differentiate the four chemicals at the same concentration, and different concentrations of each chemical, with significant differences. Experimental results also showed that the system was able to discriminate two concentrations (196 and 1964 ppm) of methylpropyl sulfide and two concentrations (145 and 1452 ppm) of 2-nonanone, two key volatile compounds emitted by rotten onions. As a proof of concept, the gas sensor array achieved 89% correct classification of onions infected with sour skin. The customized low-cost gas sensor array could be a useful tool for detecting onion postharvest diseases in storage.
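The discrimination idea behind such an array is that each sample is a multi-dimensional response vector, one entry per MOS sensor, which can be assigned to the closest trained class. A hedged sketch using a simple nearest-centroid rule (the paper's actual statistics differ; dimensions and values here are toy data):

```python
# Nearest-centroid discrimination of volatiles from a MOS sensor array.
import math

def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labelled):
    """labelled: dict class_name -> list of sensor-response vectors."""
    return {name: centroid(vecs) for name, vecs in labelled.items()}

def classify(model, sample):
    """Assign a new response vector to the class with the nearest centroid."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sample, c)))
    return min(model, key=lambda name: dist(model[name]))
```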
Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa
2016-08-08
We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors by co-design of an on-chip beam splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor captures a horizontally one-dimensional light field: the on-chip beam splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light-field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light-field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light-field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable-baseline stereo vision to reduce the 3D fatigue of viewers.
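The tunable-baseline idea, adding the one-dimensional light-field sub-images with different weights to shift the effective viewpoint within the main-lens aperture, can be sketched as a weighted image sum. This is only an illustration of the blending arithmetic, not the sensor's signal chain; images are plain nested lists.

```python
# Weighted sum of equally sized light-field sub-images; weights sum to 1,
# and the weight distribution sets the effective viewpoint (baseline).
def blend_views(views, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    h, w = len(views[0]), len(views[0][0])
    out = [[0.0] * w for _ in range(h)]
    for img, wt in zip(views, weights):
        for r in range(h):
            for c in range(w):
                out[r][c] += wt * img[r][c]
    return out
```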
Plant stress analysis technology deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebadian, M.A.
1998-01-01
Monitoring vegetation is an active area of laser-induced fluorescence imaging (LIFI) research. The Hemispheric Center for Environmental Technology (HCET) at Florida International University (FIU) is assisting in the transfer of LIFI technology to the agricultural private sector through a market survey. The market survey will help identify the key eco-agricultural issues of the nations that could benefit from the use of sensor technologies developed by the Office of Science and Technology (OST). The principal region of interest is the Western Hemisphere, particularly the rapidly growing countries of Latin America and the Caribbean. The analysis of needs will assure that the focus of present and future research centers on economically important issues facing both hemispheres. The technology will be useful to the agriculture industry for airborne crop analysis, as well as for the detection and characterization of contaminated sites by monitoring vegetation. LIFI airborne and close-proximity systems will be evaluated both as stand-alone technologies and as additions to existing sensor technologies that have been used to monitor crops in the field and in storage.
Miniaturized Airborne Imaging Central Server System
NASA Technical Reports Server (NTRS)
Sun, Xiuhong
2011-01-01
In recent years, some remote-sensing applications have required advanced airborne multi-sensor systems that rapidly provide high-performance reflective and emissive spectral imaging measurements over large areas. The key challenge is the black-box back-end system that operates a suite of cutting-edge imaging sensors to simultaneously collect high-throughput reflective and emissive spectral imaging data with precision georeference. This back-end system needs to be portable, easy to use, and reliable, with advanced onboard processing. The innovation is a miniaturized airborne imaging central server system (MAICSS). MAICSS integrates a complex embedded system of systems, with dedicated power and signal electronics inside, to serve a suite of configurable cutting-edge electro-optical (EO), long-wave infrared (LWIR), and medium-wave infrared (MWIR) cameras, a hyperspectral imaging scanner, and a GPS and inertial measurement unit (IMU) for atmospheric and surface remote sensing. Its compatible sensor packages include NASA's 1,024 × 1,024-pixel LWIR quantum well infrared photodetector (QWIP) imager; a 60.5-megapixel BuckEye EO camera; and a fast (200+ scanlines/s), wide-swath (1,920+ pixels) CCD/InGaAs imager-based visible/near-infrared (VNIR) and shortwave infrared (SWIR) imaging spectrometer. MAICSS records continuous precision-georeferenced and time-tagged multisensor throughput to mass storage devices at a high aggregate rate, typically 60 MB/s for its LWIR/EO payload. MAICSS is a complete stand-alone imaging server instrument with an easy-to-use software package for either autonomous data collection or interactive airborne operation. Advanced multisensor data acquisition and onboard processing software features have been implemented for MAICSS.
With onboard processing for real-time image development, correction, histogram equalization, compression, georeference, and data organization, fast aerial imaging applications, including a real-time LWIR image mosaic for Google Earth, have been realized for NASA's LWIR QWIP instrument. MAICSS is a significant improvement on, and miniaturization of, current multisensor technologies. Structurally, it has a completely modular, solid-state design. Without rotating hard drives or other moving parts, it is operational at high altitudes and survivable in high-vibration environments. It is assembled from a suite of miniaturized, precision-machined, standardized, stackable, interchangeable embedded instrument modules. These stackable modules can be bolted together with the interconnection wires inside for maximal simplicity and portability; multiple modules are electronically interconnected as stacked. Alternatively, the dedicated modules can be distributed flexibly to fit the space constraints of a flying vehicle. As a flexibly configurable system, MAICSS can be tailored to interface with a variety of multisensor packages. For example, with a 1,024 × 1,024-pixel LWIR and an 8,984 × 6,732-pixel EO payload, the complete MAICSS volume is approximately 7 × 9 × 11 in. (≈18 × 23 × 28 cm), with a weight of 25 lb (≈11.4 kg).
Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data
2007-09-01
[…] spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the […] multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.
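The record above concerns sharpening low-spatial-resolution spectral data with a high-spatial-resolution image. As a hedged illustration of the general idea only (a simple ratio-based intensity substitution, not the actual Hyperion/Ikonos method):

```python
# Scale each low-resolution spectral band by the ratio of a high-resolution
# intensity image to the (upsampled) low-resolution intensity: the band keeps
# its spectral character while inheriting the sharper spatial detail.
def sharpen_band(band_lo, intensity_lo, intensity_hi):
    """Element-wise band * (I_hi / I_lo); all inputs same size, co-registered."""
    return [[b * (ih / il) if il else b
             for b, il, ih in zip(rb, ri, rh)]
            for rb, ri, rh in zip(band_lo, intensity_lo, intensity_hi)]
```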
Early results from NASA's SnowEx campaign
NASA Astrophysics Data System (ADS)
Kim, Edward; Gatebe, Charles; Hall, Dorothy; Misakonis, Amy; Elder, Kelly; Marshall, Hans Peter; Hiemstra, Chris; Brucker, Ludovic; Crawford, Chris; Kang, Do Hyuk; De Marco, Eugenia; Beckley, Matt; Entin, Jared
2017-04-01
SnowEx is a multi-year airborne snow campaign with the primary goal of addressing the question: how much water is stored in Earth's terrestrial snow-covered regions? Year 1 (2016-17) focuses on the distribution of snow-water equivalent (SWE) and the snow energy balance in a forested environment. The year-1 primary site is Grand Mesa and the secondary site is Senator Beck Basin, both in western Colorado, USA. Ten core sensors on four core aircraft will make observations using a broad suite of airborne active and passive microwave and active and passive optical/infrared sensing techniques, to determine the sensitivity and accuracy of these potential satellite remote sensing techniques, along with models, for measuring snow under a range of forest conditions. SnowEx also includes an extensive range of ground-truth measurements: in-situ samples, snow pits, ground-based remote sensing measurements, and sophisticated new techniques. A detailed description of the data collected will be given and some early results will be presented. Seasonal snow cover is the largest single component of the cryosphere in areal extent, covering an average of 46M km2 of Earth's surface (31% of land areas) each year. This seasonal snow has major societal impacts in the areas of water resources, natural hazards (floods and droughts), water security, and weather and climate. The only practical way to estimate the quantity of snow on a consistent global basis is through satellites. Yet current space-based techniques underestimate SWE storage by as much as 50%, and model-based estimates can differ greatly from estimates based on remotely sensed observations. At peak coverage, as much as half of snow-covered terrestrial areas involve forests, so quantifying the challenge represented by forests is important for planning any future snow mission. Single-sensor approaches may work for certain snow types and certain conditions, but not for others.
Snow simply varies too much. Thus, the snow-community consensus is that a multi-sensor approach, combined with modeling and data assimilation, is needed to adequately address global snow. What remains at issue is how best to combine and use the various sensors in an optimal way; that requires field measurements, and NASA's SnowEx airborne campaign is designed to provide exactly that. The core sensors (all NASA's unless otherwise noted) are:
• Radar (volume scattering): European Space Agency's SnowSAR, operated by MetaSensing
• Lidar and hyperspectral imager: Airborne Snow Observatory (ASO)
• Passive microwave: Airborne Earth Science Microwave Imaging Radiometer (AESMIR)
• Bi-directional reflectance distribution function (BRDF): the Cloud Absorption Radiometer (CAR)
• Thermal infrared imager
• Thermal infrared non-imager, from U. Washington
• Video camera
The ASO suite flew on a King Air, and the other sensors flew on a Navy P-3. In addition, two NASA radars flew on G-III aircraft to test more experimental retrieval techniques:
• InSAR altimetry: Glacier and Ice Surface Topography Interferometer (GLISTIN-A)
• Radar phase delay: Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR)
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
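The prototype's logarithmic active pixels compress illuminance as V ≈ a + b·ln(I), which is what gives the higher dynamic range reported above. A small sketch (the constants a and b are arbitrary placeholders, not the paper's measured values):

```python
# Logarithmic pixel response: six decades of illuminance map into a small
# voltage swing, which is why logarithmic sensors achieve high dynamic range.
import math

def log_pixel_response(illuminance, a=0.1, b=0.05):
    """Logarithmic pixel output voltage for illuminance I > 0."""
    return a + b * math.log(illuminance)

# Swing over six decades is only b * ln(1e6) ~= 0.69 V with these constants.
```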
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still places a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (i.e., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity, and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high-dynamic-range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
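The √N gain from N uncorrelated speckle realizations is the standard-error effect of averaging independent measurements. A quick simulation with synthetic Gaussian centroid noise (not real speckle statistics; all parameters are illustrative):

```python
# Averaging N independent noisy estimates shrinks the spread ~ 1/sqrt(N).
import random
import statistics

def averaged_estimates(n_profiles, trials=2000, sigma=1.0, seed=1):
    """For each trial, average n_profiles independent noisy range estimates."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_profiles))
            for _ in range(trials)]

spread_1 = statistics.stdev(averaged_estimates(1))    # single profile
spread_16 = statistics.stdev(averaged_estimates(16))  # ~ spread_1 / 4
```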
Illumination adaptation with rapid-response color sensors
NASA Astrophysics Data System (ADS)
Zhang, Xinchi; Wang, Quan; Boyer, Kim L.
2014-09-01
Smart lighting solutions based on imaging sensors such as webcams or time-of-flight sensors suffer from rising privacy concerns. In this work, we use low-cost non-imaging color sensors to measure the local luminous flux of different colors in an indoor space. These sensors have a much higher data acquisition rate and are much cheaper than many off-the-shelf commercial products. We have developed several applications with these sensors, including illumination feedback control and occupancy-driven lighting.
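Illumination feedback control with such a sensor can be sketched as a proportional loop that drives the LED level toward a target flux reading. The gain, sensor model, and function names below are illustrative assumptions, not the authors' controller.

```python
# One proportional-feedback update of the LED drive level, clamped to [0, 1].
def control_step(level, measured, target, k_p=0.5):
    level += k_p * (target - measured)
    return min(1.0, max(0.0, level))

def run_loop(target, gain=1.0, steps=50):
    """Iterate the loop with an assumed linear sensor (measured = gain * level)."""
    level = 0.0
    for _ in range(steps):
        level = control_step(level, gain * level, target)
    return level
```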
Transmission and storage of medical images with patient information.
Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo
2003-07-01
Digital watermarking is a technique of hiding specific identification data in a signal for copyright authentication. The technique is adapted here to interleave patient information with medical images, reducing storage and transmission overheads. The text data is encrypted before interleaving with the images to ensure greater security, and graphical signals are interleaved with the image as well. Two error-control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error-control coding, and a qualitative as well as quantitative interpretation is provided of the reliability enhancement resulting from commonly used error-control codes such as repetition codes and the (7,4) Hamming code.
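The (7,4) Hamming code mentioned above corrects any single-bit error per 7-bit codeword. A compact sketch of the encode/correct cycle (bit positions 1..7, parity bits at positions 1, 2 and 4):

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword with single-error correction.
def hamming74_encode(d):
    """d: list of 4 data bits [d1, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """c: 7-bit codeword; corrects any single-bit error, returns the data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```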
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand-new communication concept, called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques; a cognitive communication scheme therefore becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal, such as visible, infrared, and ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and to calibrate the distorted image snapshots obtained by the image sensor. By matching the optical signaling rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
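Decoding one image snapshot of the LED-array transmitter reduces to thresholding each LED cell, checking the sync cells, then reading the information bits. The flat cell layout and the fixed sync pattern below are illustrative assumptions, not the paper's packet format.

```python
# Decode a single snapshot: threshold per-LED intensities, verify sync, return data.
def decode_snapshot(cells, sync_pattern, threshold=0.5):
    """cells: flat list of per-LED intensities; first len(sync_pattern) are sync."""
    bits = [1 if v > threshold else 0 for v in cells]
    n = len(sync_pattern)
    if bits[:n] != sync_pattern:
        return None   # transmitter not found / frame not aligned
    return bits[n:]
```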
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015, Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD to CMOS sensor technology for the development of this new aerial mapping camera. In 2002, the first-generation DMC was developed by Z/I Imaging; it was the first large-format digital frame sensor designed for mapping applications. In 2009, Z/I Imaging designed the DMC II, the first digital aerial mapping camera to use a single ultra-large CCD sensor, avoiding the stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a large 391-megapixel CMOS sensor was used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
Clegg, G; Roebuck, S; Steedman, D
2001-01-01
Objectives—To develop a computer based storage system for clinical images—radiographs, photographs, ECGs, text—for use in teaching, training, reference and research within an accident and emergency (A&E) department. Exploration of methods to access and utilise the data stored in the archive. Methods—Implementation of a digital image archive using flatbed scanner and digital camera as capture devices. A sophisticated coding system based on ICD 10. Storage via an "intelligent" custom interface. Results—A practical solution to the problems of clinical image storage for teaching purposes. Conclusions—We have successfully developed a digital image capture and storage system, which provides an excellent teaching facility for a busy A&E department. We have revolutionised the practice of the "hand-over meeting". PMID:11435357
Prior-Based Quantization Bin Matching for Cloud Storage of JPEG Images.
Liu, Xianming; Cheung, Gene; Lin, Chia-Wen; Zhao, Debin; Gao, Wen
2018-07-01
Millions of user-generated images are uploaded to social media sites like Facebook daily, which translates to a large storage cost. However, there exists an asymmetry between upload and download data: only a fraction of the uploaded images are subsequently retrieved for viewing. In this paper, we propose a cloud storage system that reduces the storage cost of all uploaded JPEG photos, at the expense of a controlled increase in computation, mainly during download of the requested image subset. Specifically, the system first selectively re-encodes code blocks of uploaded JPEG images using coarser quantization parameters for smaller storage sizes. Then, during download, the system exploits known signal priors (a sparsity prior and a graph-signal smoothness prior) for reverse mapping to recover the original fine quantization bin indices, with either a deterministic guarantee (lossless mode) or a statistical guarantee (near-lossless mode). For fast reverse mapping, we use small dictionaries and sparse graphs tailored to specific clusters of similar blocks, classified via a tree-structured vector quantizer. During image upload, cluster indices identifying the appropriate dictionaries and graphs for the re-quantized blocks are encoded as side information, using a differential distributed source coding scheme, to facilitate reverse mapping during image download. Experimental results show that our system can reap significant storage savings (up to 12.05%) at roughly the same image PSNR (within 0.18 dB).
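The core quantization-bin relationship can be sketched per coefficient: storage keeps only the coarse bin, and download must pick the original fine bin from the small set of candidates consistent with it (which is where the paper's priors come in). A toy sketch; boundary ties are ignored and all names are illustrative.

```python
# Map a fine-quantized DCT coefficient to a coarser grid for storage, and list
# the fine bins a download-side prior must choose among to reverse the mapping.
import math

def requantize(fine_idx, q_fine, q_coarse):
    """Coarse bin index stored in place of the fine index (smaller entropy)."""
    value = fine_idx * q_fine
    return round(value / q_coarse)

def candidate_fine_bins(coarse_idx, q_fine, q_coarse):
    """Fine bins whose reconstruction falls inside the stored coarse bin."""
    lo = (coarse_idx - 0.5) * q_coarse
    hi = (coarse_idx + 0.5) * q_coarse
    first = math.ceil(lo / q_fine)
    last = math.floor(hi / q_fine)
    return list(range(first, last + 1))
```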
A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation
1992-01-30
L. Rudin, S. Osher, G. Koepfler, J. M. Morel
Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown great practical potential.
Single stage AC-DC converter for Galfenol-based micro-power energy harvesters
NASA Astrophysics Data System (ADS)
Cavaroc, Peyton; Curtis, Chandra; Naik, Suketu; Cooper, James
2014-06-01
Military sensor systems are often hindered in operational deployment and/or other capabilities by limitations in their energy storage elements. Typically operating from lithium-based batteries, there is a finite amount of stored energy that the sensor can use to collect and transmit data; as a result, the sensors have reduced sensing and transmission rates. Coupled with the latest advancements in energy harvesting, however, these sensors could potentially operate at standard sensing and transmission rates as well as achieve dramatically extended lifetimes. Working with the magnetostrictive material Galfenol, we demonstrate the production of enough energy to supplement and recharge a solid-state battery, thereby overcoming the deficiencies faced by unattended sensors. As with any vibration-based energy harvester, this solution produces an alternating current that must be rectified and boosted to a level conducive to recharging the storage element. This paper presents a power converter capable of efficiently converting an ultra-low AC voltage to a solid-state charging voltage of 4.1 VDC. While we are working with Galfenol transducers as our energy source, the converter may also be applied to any AC-producing energy harvester, particularly at operating levels below 2 mW and 200 mVAC.
Active pixel sensors with substantially planarized color filtering elements
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor)
1999-01-01
A semiconductor imaging system preferably having an active pixel sensor array compatible with a CMOS fabrication process. Color-filtering elements such as polymer filters and wavelength-converting phosphors can be integrated with the image sensor.
NASA Astrophysics Data System (ADS)
Ninsawat, Sarawut; Yamamoto, Hirokazu; Kamei, Akihide; Nakamura, Ryosuke; Tsuchida, Satoshi; Maeda, Takahisa
2010-05-01
With the availability of network-enabled sensing devices, the volume of information collected by networked sensors has increased dramatically in recent years. Over 100 physical, chemical, and biological properties can be sensed using in-situ or remote sensing technology. A collection of these sensor nodes forms a sensor network, which is easily deployable to provide a high degree of visibility into real-world physical processes as events unfold. A sensor observation network allows diverse types of data to be gathered at greater spatial and temporal resolution through wired or wireless network infrastructure, so real-time or near-real-time data from the network let researchers and decision-makers respond speedily to events. In environmental monitoring, however, the capability to acquire in-situ data periodically is not sufficient by itself; the management and proper utilization of the data also need careful consideration. This requires database and IT solutions that are robust, scalable, and able to interoperate among different, distributed stakeholders to provide lucid, timely, and accurate updates to researchers, planners, and citizens. The GEO (Global Earth Observation) Grid primarily aims at providing an e-Science infrastructure for the earth science community. The GEO Grid is designed to integrate various kinds of earth observation data using grid technology, which was developed for sharing data, storage, and the computational power of high-performance computing, and is accessible as a set of services. A comprehensive web-based system for integrating field sensor data and satellite images, based on various open standards of the OGC (Open Geospatial Consortium) specifications, has been developed.
Web Processing Service (WPS), most likely the future direction of Web-GIS, performs the computation of spatial data from distributed data sources and returns the outcome in a standard format. The interoperability capabilities and Service-Oriented Architecture (SOA) of web services allow sensor-network measurements available from a Sensor Observation Service (SOS) and satellite remote sensing data from a Web Mapping Service (WMS) to be combined as distributed data sources for WPS. Various applications have been developed to demonstrate the efficacy of integrating heterogeneous data sources: for example, the validation of the MODIS aerosol products (MOD08_D3, the Level-3 MODIS Atmosphere Daily Global Product) by ground-based measurements using the sunphotometer (skyradiometer, Prede POM-02) installed at Phenological Eyes Network (PEN) sites in Japan. Furthermore, a web-based framework for studying the relationship between a vegetation index calculated from MODIS surface reflectance (MOD09GA, the Surface Reflectance Daily L2G Global 1 km and 500 m Product) and Gross Primary Production (GPP) field measurements at flux tower sites in Thailand and Japan has also been developed. The success of both applications will help maximize data utilization and improve the accuracy of information by validating MODIS satellite products against highly accurate, temporally dense field measurements.
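The vegetation-index application above starts from red and NIR surface reflectance; as a hedged sketch, NDVI is the usual index computed from those bands (the abstract does not name which index was used):

```python
# Normalized Difference Vegetation Index from red and NIR surface reflectance.
def ndvi(red, nir):
    if red + nir == 0:
        return 0.0
    return (nir - red) / (nir + red)
```

Healthy vegetation reflects strongly in the NIR and weakly in the red, pushing NDVI toward 1.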
A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing
Lloret, Jaime; Bosch, Ignacio; Sendra, Sandra; Serrano, Arturo
2011-01-01
The first step in detecting when a vineyard has any type of deficiency, pest or disease is to observe its stems, its grapes and/or its leaves. To place a sensor on each leaf of every vineyard is obviously not feasible in terms of cost and deployment, so we should look for new methods to detect these symptoms precisely and economically. In this paper, we present a wireless sensor network where each sensor node takes images from the field and internally uses image processing techniques to detect any unusual status in the leaves. This symptom could be caused by a deficiency, pest, disease or other harmful agent. When it is detected, the sensor node sends a message to a sink node through the wireless sensor network in order to notify the farmer of the problem. The wireless sensor uses the IEEE 802.11 a/b/g/n standard, which allows connections over large distances in open air. This paper describes the wireless sensor network design, the wireless sensor deployment, how the node processes the images in order to monitor the vineyard, and the sensor network traffic obtained from a test bed performed in a flat vineyard in Spain. Although the system is not able to distinguish between deficiencies, pests, diseases or other harmful agents, a symptom image database and a neural network could be added in order to learn from experience and provide an accurate problem diagnosis. PMID:22163948
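The in-node check can be sketched as comparing the fraction of "unusual" leaf pixels against a threshold before notifying the sink node. The green-dominance test and threshold below are illustrative assumptions, not the deployed algorithm.

```python
# Per-node image check: count non-green-dominant pixels and decide whether to
# send a notification message toward the sink node.
def unusual_fraction(pixels):
    """pixels: list of (r, g, b); a leaf pixel is 'usual' if green dominates."""
    bad = sum(1 for r, g, b in pixels if not (g > r and g > b))
    return bad / len(pixels)

def should_notify(pixels, threshold=0.2):
    """True -> send a message to the sink node so the farmer is notified."""
    return unusual_fraction(pixels) > threshold
```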
Engineering workstation: Sensor modeling
NASA Technical Reports Server (NTRS)
Pavel, M.; Sweet, B.
1993-01-01
The purpose of the engineering workstation is to provide an environment for rapid prototyping and evaluation of fusion and image processing algorithms. Ideally, the algorithms are designed to optimize the extraction of information that is useful to a pilot for all phases of flight operations. Successful design of effective fusion algorithms depends on the ability to characterize both the information available from the sensors and the information useful to a pilot. The workstation is comprised of subsystems for simulation of sensor-generated images, image processing, image enhancement, and fusion algorithms. As such, the workstation can be used to implement and evaluate both short-term solutions and long-term solutions. The short-term solutions are being developed to enhance a pilot's situational awareness by providing information in addition to his direct vision. The long term solutions are aimed at the development of complete synthetic vision systems. One of the important functions of the engineering workstation is to simulate the images that would be generated by the sensors. The simulation system is designed to use the graphics modeling and rendering capabilities of various workstations manufactured by Silicon Graphics Inc. The workstation simulates various aspects of the sensor-generated images arising from phenomenology of the sensors. In addition, the workstation can be used to simulate a variety of impairments due to mechanical limitations of the sensor placement and due to the motion of the airplane. Although the simulation is currently not performed in real-time, sequences of individual frames can be processed, stored, and recorded in a video format. In that way, it is possible to examine the appearance of different dynamic sensor-generated and fused images.
Tang, Kea-Tiong; Li, Cheng-Han; Chiu, Shih-Wen
2011-01-01
This study developed an electronic-nose sensor node based on a polymer-coated surface acoustic wave (SAW) sensor array. The sensor node comprised an SAW sensor array, a frequency readout circuit, and an Octopus II wireless module. The sensor array was fabricated on a 128° YX LiNbO3 sensing substrate with a large electromechanical coupling coefficient (K²). On the surface of this substrate, an interdigital transducer (IDT) was produced with a Cr/Au film as its metallic structure. A mixed-mode frequency readout application specific integrated circuit (ASIC) was fabricated using a TSMC 0.18 μm process. The ASIC output was connected to a wireless module to transmit sensor data to a base station for data storage and analysis. This sensor node is applicable for wireless sensor network (WSN) applications. PMID:22163865
Autonomous chemical and biological miniature wireless-sensor
NASA Astrophysics Data System (ADS)
Goldberg, Bar-Giora
2005-05-01
The presentation discusses a new concept and a paradigm shift in biological, chemical and explosive sensor system design and deployment: from large, heavy, centralized and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas enables these devices to be close to the targeted threats, improving detection efficiency and enabling rapid counter-responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, automotive, environmental and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical and/or explosive sensor utilizing a digital imager, analyze the images and distribute alarm or image data wirelessly through the network.
All the imaging, processing and communications would take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to the combination and convergence of the required new technologies mentioned above. Passive biological and chemical sensors with very high sensitivity, which require no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine if an alarm condition is present, an on-board DSP processor, using specialized image processing algorithms and statistical analysis, determines whether color gradient changes have occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.
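The alarm decision described above, detecting color changes on the sensor array from camera frames, can be sketched as a per-patch comparison against a reference image. This is a hypothetical sketch, not Avaak's actual algorithm; the channel-delta metric and the threshold value are assumptions.

```python
# Hypothetical colorimetric-alarm sketch: compare the current camera view of
# one sensor patch against its pre-exposure reference, channel by channel.

def patch_color_delta(ref_patch, cur_patch):
    """Mean absolute per-channel (R, G, B) change between two pixel lists."""
    n = len(ref_patch)
    return [sum(abs(c[i] - r[i]) for r, c in zip(ref_patch, cur_patch)) / n
            for i in range(3)]

def alarm(ref_patch, cur_patch, threshold=25.0):
    """Raise an alarm if any colour channel shifted beyond the threshold."""
    return any(d > threshold for d in patch_color_delta(ref_patch, cur_patch))
```

A real system would add the statistical analysis the abstract mentions (e.g., spatial gradients and noise rejection) rather than a single fixed threshold.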
Contact CMOS imaging of gaseous oxygen sensor array
Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.
2014-01-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909
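Luminescence-quenching O2 sensors of this kind are commonly read out via the Stern-Volmer relation, I0/I = 1 + Ksv·[O2], where I0 is the unquenched emission intensity. A minimal sketch assuming that standard model; the abstract does not give a Ksv value, so the one below is purely illustrative.

```python
# Stern-Volmer readout sketch for a luminescent O2 sensor (assumed model;
# the Ksv value here is illustrative, not from the paper).

def o2_from_intensity(I, I0, Ksv=0.02):
    """Invert the Stern-Volmer relation I0/I = 1 + Ksv*[O2] for [O2]."""
    return (I0 / I - 1.0) / Ksv
```

In practice, I0 and Ksv would be obtained by calibrating each xerogel spot against known O2 concentrations, since film-to-film variation changes both.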
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatially and spectrally correlated errors. FPN correction is therefore critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance-to-Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is divided into four components: an FPN row component, an FPN column component, a defects component and the effective photo response signal component. The conversion gain is then corrected by averaging out the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image is enhanced to seven times that of the raw image; the larger the image DC value within the dynamic range, the greater the enhancement.
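The pixel-wise linear model DC = G_ij·L + Z_ij implies a two-step correction: invert the model per pixel to estimate radiance, then re-render with a uniform gain. A minimal numpy sketch of that idea, using the mean gain as the uniform target; this is an assumption for illustration, whereas the paper averages only the FPN and defect components of the gain.

```python
# Sketch of pixel-wise linear FPN correction under the model DC = G*L + Z.
import numpy as np

def correct_fpn(dc, gain, dsnu):
    """Invert DC = G_ij * L + Z_ij per pixel, then re-render the estimated
    radiance with the mean (spatially uniform) gain."""
    radiance = (dc - dsnu) / gain   # per-pixel incident radiance estimate
    return radiance * gain.mean()   # uniform-gain corrected image
```

Calibrating G_ij and Z_ij requires flat-field and dark frames at several exposure levels; the sketch assumes those are already available.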
High-speed imaging using CMOS image sensor with quasi pixel-wise exposure
NASA Astrophysics Data System (ADS)
Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.
2017-02-01
Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the amount of exposure by row for each 8x8 pixel block. This CMOS sensor is not fully controllable pixel by pixel and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
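The sensor's constraint, exposure start resettable per column and exposure amount fixed per row within each 8x8 block, can be illustrated by generating a line-dependent pseudo-random exposure mask. This is a sketch under that interpretation of the constraint; the function and its parameters are hypothetical, not the prototype's actual control interface.

```python
# Sketch of a line-dependent pseudo-random exposure pattern for one 8x8
# block: start time chosen per column, duration per row (assumed reading
# of the "reset by columns, fix amount by rows" constraint).
import random

def block_exposure_mask(n=8, frame_len=8, seed=0):
    """Return mask[t][y][x] = 1 if pixel (y, x) integrates at time t."""
    rng = random.Random(seed)
    start = [rng.randrange(frame_len) for _ in range(n)]            # per column
    duration = [rng.randrange(1, frame_len + 1) for _ in range(n)]  # per row
    return [[[1 if start[x] <= t < start[x] + duration[y] else 0
              for x in range(n)] for y in range(n)]
            for t in range(frame_len)]
```

Each pixel's exposure is a single contiguous interval, which is exactly why the sampling is only quasi pixel-wise: full per-pixel randomness is not achievable, but the row/column freedom still yields a nonuniform space-time pattern usable for compressive reconstruction.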
Generating Artificial Reference Images for Open Loop Correlation Wavefront Sensors
NASA Astrophysics Data System (ADS)
Townson, M. J.; Love, G. D.; Saunter, C. D.
2018-05-01
Shack-Hartmann wavefront sensors for both solar and laser guide star adaptive optics (with elongated spots) need to observe extended objects. Correlation techniques have been successfully employed to measure the wavefront gradient in solar adaptive optics systems and have been proposed for laser guide star systems. In this paper we describe a method for synthesising reference images for correlation Shack-Hartmann wavefront sensors with a larger field of view than individual sub-apertures. We then show how these supersized reference images can increase the performance of correlation wavefront sensors in regimes where large relative shifts are induced between sub-apertures, such as those observed in open-loop wavefront sensors. The technique we describe requires no external knowledge outside of the wavefront-sensor images, making it available as an entirely "software" upgrade to an existing adaptive optics system. For solar adaptive optics we show the supersized reference images extend the magnitude of shifts which can be accurately measured from 12% to 50% of the field of view of a sub-aperture and in laser guide star wavefront sensors the magnitude of centroids that can be accurately measured is increased from 12% to 25% of the total field of view of the sub-aperture.
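Correlation Shack-Hartmann wavefront sensing measures the shift of each sub-aperture image against a reference via cross-correlation. Below is a minimal numpy sketch of the integer-pixel version, with the sub-aperture zero-padded into a (possibly supersized) reference frame; real systems refine the result with sub-pixel interpolation of the correlation peak, which is omitted here.

```python
# Integer-pixel correlation wavefront-sensing sketch (FFT cross-correlation).
import numpy as np

def correlation_shift(sub_ap, reference):
    """Estimate the (dy, dx) shift of sub_ap relative to reference.
    sub_ap may be smaller than reference; it is mean-subtracted and
    zero-padded into a reference-sized frame before correlating."""
    ry, rx = reference.shape
    padded = np.zeros((ry, rx))
    sy, sx = sub_ap.shape
    padded[:sy, :sx] = sub_ap - sub_ap.mean()
    ref0 = reference - reference.mean()
    corr = np.fft.ifft2(np.fft.fft2(padded) * np.conj(np.fft.fft2(ref0))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap-around lags in the upper half map to negative shifts.
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, (ry, rx)))
```

Using a reference larger than the sub-aperture, as the paper proposes, extends the range of shifts the peak search can recover before the correlated region falls off the edge of the reference.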
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-03-13
With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
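The simple triangulation scheme underlying such range finders reduces, in the ideal thin-lens case, to d = f·b/x, where b is the baseline between light source and sensor, f the focal length expressed in pixels, and x the imaged spot's offset in pixels. A sketch of that relation; the exact geometry and calibration used in the paper may differ.

```python
# Idealized active-triangulation range sketch: a laser spot imaged x pixels
# off-axis corresponds to range d = f * b / x (thin-lens approximation).

def triangulation_distance(baseline_m, focal_px, offset_px):
    """Range in metres from baseline (m), focal length (px) and spot offset (px)."""
    if offset_px <= 0:
        raise ValueError("spot offset must be positive (target at finite range)")
    return focal_px * baseline_m / offset_px
```

The inverse dependence on x explains the resolution figures quoted as percentages: a one-pixel centroiding error costs proportionally more range accuracy at far distances, where x is small.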
Organic-on-silicon complementary metal-oxide-semiconductor colour image sensors.
Lim, Seon-Jeong; Leem, Dong-Seok; Park, Kyung-Bae; Kim, Kyu-Sik; Sul, Sangchul; Na, Kyoungwon; Lee, Gae Hwang; Heo, Chul-Joon; Lee, Kwang-Hee; Bulliard, Xavier; Satoh, Ryu-Ichi; Yagi, Tadao; Ro, Takkyun; Im, Dongmo; Jung, Jungkyu; Lee, Myungwon; Lee, Tae-Yon; Han, Moon Gyu; Jin, Yong Wan; Lee, Sangyoon
2015-01-12
Complementary metal-oxide-semiconductor (CMOS) colour image sensors are representative examples of light-detection devices. To achieve extremely high resolutions, the pixel sizes of the CMOS image sensors must be reduced to less than a micron, which in turn significantly limits the number of photons that can be captured by each pixel using silicon (Si)-based technology (i.e., this reduction in pixel size results in a loss of sensitivity). Here, we demonstrate a novel and efficient method of increasing the sensitivity and resolution of the CMOS image sensors by superposing an organic photodiode (OPD) onto a CMOS circuit with Si photodiodes, which consequently doubles the light-input surface area of each pixel. To realise this concept, we developed organic semiconductor materials with absorption properties selective to green light and successfully fabricated highly efficient green-light-sensitive OPDs without colour filters. We found that such a top light-receiving OPD, which is selective to specific green wavelengths, demonstrates great potential when combined with a newly designed Si-based CMOS circuit containing only blue and red colour filters. To demonstrate the effectiveness of this state-of-the-art hybrid colour image sensor, we acquired a real full-colour image using a camera that contained the organic-on-Si hybrid CMOS colour image sensor.
NASA Technical Reports Server (NTRS)
Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)
2002-01-01
A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.
Thermal luminescence spectroscopy chemical imaging sensor.
Carrieri, Arthur H; Buican, Tudor N; Roese, Erik S; Sutter, James; Samuels, Alan C
2012-10-01
The authors present a pseudo-active chemical imaging sensor model embodying irradiative transient heating, temperature nonequilibrium thermal luminescence spectroscopy, differential hyperspectral imaging, and artificial neural network technologies integrated together. We elaborate on various optimizations, simulations, and animations of the integrated sensor design and apply it to the terrestrial chemical contamination problem, where the interstitial contaminant compounds of detection interest (analytes) comprise liquid chemical warfare agents, their various derivative condensed phase compounds, and other material of a life-threatening nature. The sensor must measure and process a dynamic pattern of absorptive-emissive middle infrared molecular signature spectra of subject analytes to perform its chemical imaging and standoff detection functions successfully.
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the atmosphere; it then simulates the infrared imaging of such targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.
Development of a 750x750 pixels CMOS imager sensor for tracking applications
NASA Astrophysics Data System (ADS)
Larnaudie, Franck; Guardiola, Nicolas; Saint-Pé, Olivier; Vignon, Bruno; Tulet, Michel; Davancens, Robert; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Estribeau, Magali
2017-11-01
Solid-state optical sensors are now commonly used in space applications (navigation cameras, astronomy imagers, tracking sensors...). Although charge-coupled devices are still widely used, the CMOS image sensor (CIS), whose performance is continuously improving, is a strong challenger for Guidance, Navigation and Control (GNC) systems. This paper describes a 750x750 pixel CMOS image sensor that has been specially designed and developed for star tracker and tracking sensor applications. The detector, which features a smart architecture enabling very simple and powerful operation, is built using the AMIS 0.5 μm CMOS technology. It contains 750x750 rectangular pixels with a 20 μm pitch. The geometry of the pixel sensitive zone is optimized for applications based on centroiding measurements. The main feature of this device is the on-chip control and timing function, which makes the device easier to operate by drastically reducing the number of clocks to be applied. This powerful function allows the user to operate the sensor with high flexibility: measurement of the dark level from masked lines, direct access to windows of interest… A temperature probe is also integrated within the CMOS chip, allowing a very precise measurement through the video stream. A complete electro-optical characterization of the sensor has been performed, and the major parameters have been evaluated: dark current and its uniformity, read-out noise, conversion gain, Fixed Pattern Noise, Photo Response Non-Uniformity, quantum efficiency, Modulation Transfer Function, and intra-pixel scanning. The characterization tests are detailed in the paper. Co-60 and proton irradiation tests have also been carried out on the image sensor and the results are presented.
The specific features of the 750x750 image sensor, such as its low-power CMOS design (3.3 V, power consumption < 100 mW), natural windowing (which allows efficient and robust tracking algorithms), and simple proximity electronics (thanks to the on-chip control and timing function) enabling a high-flexibility architecture, make this imager a good candidate for high-performance tracking applications.
Obstacles using amorphous materials for volume applications
NASA Astrophysics Data System (ADS)
Kiessling, Albert; Reininger, Thomas
2012-10-01
This contribution focusses especially on the attempt to use amorphous or nanocrystalline metals in position sensor applications, and describes the difficulties and obstacles encountered in the development of appropriate industrial high-volume series products and the related quality requirements. The main motivation for these investigations was to beat the generally known sensors, especially silicon-based Hall sensors as well as AMR and GMR sensors (well known from mobile phones and electronic storage devices such as hard discs), in terms of cost-effectiveness and functionality.
Metal oxide gas sensors on the nanoscale
NASA Astrophysics Data System (ADS)
Plecenik, A.; Haidry, A. A.; Plecenik, T.; Durina, P.; Truchly, M.; Mosko, M.; Grancic, B.; Gregor, M.; Roch, T.; Satrapinskyy, L.; Moskova, A.; Mikula, M.; Kus, P.
2014-06-01
Low-cost, low-power and highly sensitive gas sensors operating at room temperature are very important devices for controlled hydrogen gas production and storage. One of the disadvantages of chemosensors is their high operating temperature (usually 200-400 °C), which excludes such sensors from use in explosive environments. In this report, a new concept of gas chemosensors based on TiO2 thin films and operating at room temperature is discussed. Integration of such a sensor is fully compatible with sub-100 nm semiconductor technology and could be transferred directly from the laboratory to the commercial sphere.
Comparison of the performance of intraoral X-ray sensors using objective image quality assessment.
Hellén-Halme, Kristina; Johansson, Curt; Nilsson, Mats
2016-05-01
The main aim of this study was to evaluate the performance of 10 individual sensors of the same make, using objective measures of key image quality parameters. A further aim was to compare 8 brands of sensors. Ten new sensors of 8 different models from 6 manufacturers (i.e., 80 sensors) were included in the study. All sensors were exposed in a standardized way using an X-ray tube voltage of 60 kVp and different exposure times. Sensor response, noise, low-contrast resolution, spatial resolution and uniformity were measured. Individual differences between sensors of the same brand were surprisingly large in some cases. There were clear differences in the characteristics of the different brands of sensors. The largest variations were found for individual sensor response for some of the brands studied. Also, noise level and low-contrast resolution showed large variations between brands. Sensors, even of the same brand, vary significantly in their quality. It is thus valuable to establish action levels for the acceptance of newly delivered sensors and to use objective image quality control for commissioning purposes and periodic checks to ensure high performance of individual digital sensors.
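The objective quality parameters used above (sensor response, noise, uniformity) can be estimated from a stack of identically exposed flat-field frames. Below is a hedged sketch of plausible definitions; the study's exact metric definitions and test objects may differ.

```python
# Flat-field image-quality sketch: response, temporal noise and spatial
# uniformity from repeated identically exposed frames (assumed definitions).
import numpy as np

def flat_field_metrics(frames):
    """Return (response, noise, uniformity) from a stack of flat frames:
    response  = mean signal level,
    noise     = per-pixel temporal std, averaged over the sensor,
    uniformity= spatial std of the temporal-mean image / response."""
    stack = np.asarray(frames, dtype=float)
    mean_img = stack.mean(axis=0)
    response = mean_img.mean()
    noise = stack.std(axis=0).mean()
    uniformity = mean_img.std() / response
    return response, noise, uniformity
```

Metrics like these make acceptance testing reproducible: an action level is simply a bound on one of the returned values for a newly delivered sensor.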
Extending and expanding the life of older current meters
Strahle, W.J.; Martini, Marinna A.
1995-01-01
The EG&G Model 610 VACM and Model 630 VMCM are standards for ocean current measurements. It is simple to add peripheral sensors to the data stream of the VACM by use of add-on CMOS circuitry. The firmware control of the VMCM makes it virtually impossible to add sampling of additional sensors. Most of the electronic components used in the VACM are obsolete or difficult to replace and the VMCM will soon follow suit. As a result, the USGS joined WHOI in the development of a PCMCIA data storage system to replace the cassette recording system in the VACM. Using the same PCMCIA recording package as the controller and recorder for the VMCM, a user-friendly VMCM is being designed. PCMCIA cards are rapidly becoming an industry standard with a wide range of storage capacities. By upgrading the VACM and VMCM to PCMCIA storage systems with a flexible microprocessor, they will continue to be viable instruments.
The rapid detection of methyl tert-butyl ether (MtBE) in water using a prototype gas sensor system.
de Lacy Costello, B P J; Sivanand, P S; Ratcliffe, N M; Reynolds, D M
2005-01-01
The gasoline additive methyl tertiary-butyl ether (MtBE) is the second most common contaminant of groundwater in the USA and represents an important soil contaminant. This compound has been detected in the groundwater of at least 27 states as a result of leaking underground storage facilities (gasoline storage tanks and pipelines). Since the health effects of MtBE are unclear, the potential threat to drinking water supplies is serious. Therefore, the ability to detect MtBE at low levels (ppb) and on-line at high-risk groundwater sites would be highly desirable. This paper reports the use of 'commercial' and metal oxide sensor arrays for the detection of MtBE in drinking and surface waters at low ppb levels (µg L-1 range). The output responses of some of the sensors were found to correlate well with MtBE concentrations under laboratory conditions.
NASA Astrophysics Data System (ADS)
Kazakova, L. I.; Sirota, N. P.; Sirota, T. V.; Shabarchina, L. I.
2017-09-01
A fluorescent biosensor is synthesized and described. The biosensor consists of polyelectrolyte microcapsules with glucose oxidase (GOx) entrapped in their cavities and an oxygen-sensitive fluorescent indicator, Ru(dpp), immobilized in their shells, where Ru(dpp) is tris(4,7-diphenyl-1,10-phenanthroline)ruthenium(II) dichloride. The theoretical activity of the encapsulated GOx and the effect of storage time and medium composition on the stability of the sensor microcapsules are determined from polarographic measurements. Over the test period, no change in the activity of the encapsulated enzyme, and no loss of it to the storage medium, is detected. The dispersion medium (water or a phosphate buffer) is shown to have no effect on the activity of microcapsules with immobilized GOx. The described optical sensor could be used as an alternative to electrochemical sensors for in vitro determination of glucose in the clinically important range of concentrations (up to 10 mmol/L).
Digital imaging technology assessment: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.
Hand Held Device for Wireless Powering and Interrogation of Biomems Sensors and Actuators
NASA Technical Reports Server (NTRS)
Simons, Rainee N (Inventor); Miranda, Felix Antonio (Inventor)
2007-01-01
A compact, hand-held device for wireless powering, interrogation and data retrieval from at least one implanted sensor. The hand-held device includes an antenna for powering an implanted sensor and for receiving data from the implanted sensor for at least one of storage, display or analysis. The hand-held device establishes electromagnetic coupling with a low-radiating radio frequency power inductor in the implanted sensor at a predefined separation, and the antenna geometry allows the antenna to power, interrogate and retrieve data from the implanted sensor without strapping the hand-held device to the human body housing the implanted sensor. The hand-held device optionally allows for activation of the implanted sensor only during interrogation and data retrieval.
Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.
2007-01-01
A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing EO-1 Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN), test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. Using these reflectances, a set of cross-calibration equations were developed giving a relative gain and bias between the sensor pair.
Toward CMOS image sensor based glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2012-09-07
Complementary metal oxide semiconductor (CMOS) image sensors are a powerful tool for biosensing applications. In this study, a CMOS image sensor has been exploited for detecting glucose levels through simple photon count variation with high sensitivity. Various concentrations of glucose (100 mg dL⁻¹ to 1000 mg dL⁻¹) were added onto a simple poly-dimethylsiloxane (PDMS) chip, and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with the glucose concentration. Photons pass through the PDMS chip with varying color density and hit the sensor surface. The photon count registered by the CMOS image sensor depends on the color density, and hence on the glucose concentration, and is converted into digital form. By correlating the obtained digital results with glucose concentration, it is possible to measure a wide range of blood glucose levels with good linearity based on the CMOS image sensor, so this technique could enable convenient point-of-care diagnosis.
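The correlation step described above, mapping sensor photon counts to glucose concentration, amounts to fitting a calibration curve. A minimal sketch using a linear fit follows; the count values and the linear model are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Hypothetical calibration data: mean photon counts registered by the
# sensor for known glucose standards (values are illustrative only).
glucose_mg_dl = np.array([100, 250, 500, 750, 1000], dtype=float)
photon_counts = np.array([9500, 8200, 6100, 4000, 1900], dtype=float)

# Fit a linear calibration curve: counts fall as the brown chromogen
# darkens with increasing glucose concentration.
slope, intercept = np.polyfit(glucose_mg_dl, photon_counts, 1)

def counts_to_glucose(counts):
    """Invert the calibration line to estimate concentration (mg/dL)."""
    return (counts - intercept) / slope
```

In practice the calibration would be validated against reference blood-glucose measurements over the clinically relevant range.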
Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.
Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun
2016-06-01
Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging using a spatio-temporal dispersive metasurface and an image reconstruction algorithm. The metasurface's spatio-temporal dispersive property makes the single-shot, single-sensor imager feasible for super- and high-resolution imaging, since it can efficiently convert the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface. When the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means of performing real-time data acquisition and high-/super-resolution imaging without employing expensive hardware (e.g. mechanical scanners, antenna arrays, etc.). We expect that this methodology could enable breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging.
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship, which is established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images is achieved well and that image distortion caused by satellite jitter is also corrected efficiently.
Large Scale Production of Densified Hydrogen Using Integrated Refrigeration and Storage
NASA Technical Reports Server (NTRS)
Notardonato, William U.; Swanger, Adam Michael; Jumper, Kevin M.; Fesmire, James E.; Tomsik, Thomas M.; Johnson, Wesley L.
2017-01-01
Recent demonstration of advanced liquid hydrogen storage techniques using Integrated Refrigeration and Storage (IRAS) technology at NASA Kennedy Space Center led to the production of large quantities of solid, densified liquid, and slush hydrogen in a 125,000 L tank. Production of densified hydrogen was performed at three different liquid levels, and LH2 temperatures were measured by twenty silicon diode temperature sensors recorded over approximately one month of testing. System energy balances and solid mass fractions are calculated. Experimental data reveal that hydrogen temperatures dropped well below the triple point during testing (up to 1 K below) and were continuing to trend downward prior to system shutdown. Sub-triple-point temperatures were seen to evolve in a time-dependent manner along the length of the horizontal, cylindrical vessel. The phenomenon, observed at two different fill levels (33% and 67%), is described and explained in detail herein. The implications of using IRAS for energy storage, propellant densification, and future cryofuel systems are discussed.
Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation
NASA Astrophysics Data System (ADS)
Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo
2016-05-01
In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard 0.35-μm CMOS process, is higher than that of a p-n junction photodiode, because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector: lower power consumption and higher operating speeds are possible compared to conventional image sensors using multi-bit analog-to-digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, which is higher than those of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level; this comparison determines the 1-bit output data of the binary process. To obtain a video signal, the 1-bit output data are stored in memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be selected between binary mode and multi-bit mode.
Nölte, I; Gorbey, S; Boll, H; Figueiredo, G; Groden, C; Lemmer, B; Brockmann, M A
2011-12-01
Radiotelemetric sensors for in vivo assessment of blood pressure and heart rate are widely used in animal research. MRI with implanted sensors is regarded as contraindicated, as transmitter malfunction and injury of the animal may be caused. Moreover, artefacts are expected to compromise image evaluation. In vitro, the function of a radiotelemetric sensor (TA11PA-C10, Data Sciences International) after exposure to MRI up to 9.4 T was assessed. The magnetic force of the electromagnetic field on the sensor as well as radiofrequency (RF)-induced sensor heating was analysed. Finally, MRI with an implanted sensor was performed in a rat. Imaging artefacts were analysed at 3.0 and 9.4 T ex vivo and in vivo. Transmitted 24 h blood pressure and heart rate were compared before and after MRI to verify the integrity of the telemetric sensor. The function of the sensor was not altered by MRI up to 9.4 T. The maximum force exerted on the sensor was 273 ± 50 mN. RF-induced heating was ruled out. Artefacts impeded the assessment of the abdomen and thorax in a dead rat, but not of the head and neck. MRI with implanted radiotelemetric sensors is feasible in principle. The tested sensor maintains functionality up to 9.4 T. Artefacts hampered abdominal and thoracic imaging in rats, while assessment of the head and neck is possible.
A comparative study of wireless sensor networks and their routing protocols.
Bhattacharyya, Debnath; Kim, Tai-hoon; Pal, Subhajit
2010-01-01
Recent developments in the area of micro-sensor devices have accelerated advances in the sensor networks field leading to many new protocols specifically designed for wireless sensor networks (WSNs). Wireless sensor networks with hundreds to thousands of sensor nodes can gather information from an unattended location and transmit the gathered data to a particular user, depending on the application. These sensor nodes have some constraints due to their limited energy, storage capacity and computing power. Data are routed from one node to other using different routing protocols. There are a number of routing protocols for wireless sensor networks. In this review article, we discuss the architecture of wireless sensor networks. Further, we categorize the routing protocols according to some key factors and summarize their mode of operation. Finally, we provide a comparative study on these various protocols.
Palaniyandi, P; Rangarajan, Govindan
2017-08-21
We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to storage/recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronization and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
Low-cost compact thermal imaging sensors for body temperature measurement
NASA Astrophysics Data System (ADS)
Han, Myung-Soo; Han, Seok Man; Kim, Hyo Jin; Shin, Jae Chul; Ahn, Mi Sook; Kim, Hyung Won; Han, Yong Hee
2013-06-01
This paper presents a 32×32 microbolometer thermal imaging sensor for human body temperature measurement. Wafer-level vacuum packaging technology yields a low-cost, compact imaging sensor chip. The microbolometer uses a V-W-O film as the sensing material, and the ROIC was designed in a 0.35-μm CMOS process at UMC. Thermal images of a human face and a hand taken with an f/1 lens demonstrate the sensor's potential for commercial body temperature measurement.
Validation Test Report for the Automated Optical Processing System (AOPS) Version 4.12
2015-09-03
NPP) with the VIIRS sensor package as well as data from the Geostationary Ocean Color Imager (GOCI) sensor, aboard the Communication Ocean and...capability • Prepare the NRT Geostationary Ocean Color Imager (GOCI) data stream for integration into operations. • Improvements in sensor...Navy (DON) Environmental Data Records (EDRs) Expeditionary Warfare (EXW) Geostationary Ocean Color Imager (GOCI) Gulf of Mexico (GOM) Hierarchical
NASA Astrophysics Data System (ADS)
Champagne, C.; Wang, S.; Liu, J.; Hadwen, T. A.
2017-12-01
Drought is a complex natural disaster, which often emerges slowly, but can occur at various time scales and have impacts that are not well understood. Long term observations of drought intensity and frequency are often quantified from precipitation and temperature based indices or modelled estimates of soil water storage. The maturity of satellite based observations has created the potential to enhance the understanding of drought and drought impacts, particularly in regions where traditional data sets are limited by remoteness or inaccessibility, and where drought processes are not well-quantified by models. Long term global satellite data records now provide observations of key hydrological variables, including evaporation modelled from thermal sensors, soil moisture from microwave sensors, ground water from gravity sensors and vegetation condition that can be modelled from optical sensors. This study examined trends in drought frequency, intensity and duration over diverse ecoregions in Canada, including agricultural, grassland, forested and wetland areas. Trends in drought were obtained from the Canadian Drought Monitor as well as meteorological based indices from weather stations, and evaluated against satellite derived information on evaporative stress (Anderson et al. 2011), soil moisture (Champagne et al. 2015), terrestrial water storage (Wang and Li 2016) and vegetation condition (Davidson et al. 2009). Data sets were evaluated to determine differences in how different sensors characterize the hydrology and impacts of drought events from 2003 to 2016. Preliminary results show how different hydrological observations can provide unique information that can tie causes of drought (water shortages resulting from precipitation, lack of moisture storage or evaporative stress) to impacts (vegetation condition) that hold the potential to improve the understanding and classification of drought events.
Uncooled microbolometer sensors for unattended applications
NASA Astrophysics Data System (ADS)
Kohin, Margaret; Miller, James E.; Leary, Arthur R.; Backer, Brian S.; Swift, William; Aston, Peter
2003-09-01
BAE SYSTEMS has been developing and producing uncooled microbolometer sensors since 1995. Recently, uncooled sensors have been used on Pointer Unattended Aerial Vehicles and considered for several unattended sensor applications including DARPA Micro-Internetted Unattended Ground Sensors (MIUGS), Army Modular Acoustic Imaging Sensors (MAIS), and Redeployable Unattended Ground Sensors (R-UGS). This paper describes recent breakthrough uncooled sensor performance at BAE SYSTEMS and how this improved performance has been applied to a new Standard Camera Core (SCC) that is ideal for these unattended applications. Video imagery from a BAE SYSTEMS 640x480 imaging camera flown in a Pointer UAV is provided. Recent performance results are also provided.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated from the differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame, based at least in part on spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate the effects of camera jitter between the current image frame and the reference image frame.
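A minimal sketch of the change-detection idea described above: difference the frames, then scale the decision threshold by a gradient-based pixel error estimate so that jitter-induced shifts in high-gradient regions do not trigger false alarms. The noise floor and threshold values here are assumptions for illustration, not parameters from the patent:

```python
import numpy as np

def change_detect(reference, current, threshold=3.0, noise_floor=1.0):
    """Flag pixels whose frame difference exceeds a per-pixel error bound."""
    # Raw difference frame between current and reference images
    diff = current.astype(float) - reference.astype(float)
    # Spatial error estimate: local intensity gradient magnitude, which
    # bounds how much a small sub-pixel (jitter) shift could change a pixel
    gy, gx = np.gradient(reference.astype(float))
    spatial_err = np.hypot(gx, gy)
    # Combine spatial error with an assumed sensor noise floor
    pixel_err = np.sqrt(spatial_err**2 + noise_floor**2)
    # Declare a change where the difference exceeds the scaled error estimate
    return np.abs(diff) > threshold * pixel_err
```

On a flat reference scene the error estimate collapses to the noise floor, so a single bright new pixel is detected; near edges the gradient term raises the bar, suppressing jitter artifacts.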
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Carl, C.; Goss, W.; Hansen, G. R.; Olsasky, M. J.; Johnston, A. R.
1978-01-01
An integrated sensor for traffic surveillance on mainline sections of urban freeways is described. Applicable imaging and processor technology is surveyed and the functional requirements for the sensors and the conceptual design of the breadboard sensors are given. Parameters measured by the sensors include lane density, speed, and volume. The freeway image is also used for incident diagnosis.
Advanced scanners and imaging systems for earth observations. [conferences
NASA Technical Reports Server (NTRS)
1973-01-01
Assessments of present and future sensors and sensor related technology are reported along with a description of user needs and applications. Five areas are outlined: (1) electromechanical scanners, (2) self-scanned solid state sensors, (3) electron beam imagers, (4) sensor related technology, and (5) user applications. Recommendations, charts, system designs, technical approaches, and bibliographies are included for each area.
Landsat and Thermal Infrared Imaging
NASA Technical Reports Server (NTRS)
Arvidson, Terry; Barsi, Julia; Jhabvala, Murzy; Reuter, Dennis
2012-01-01
The purpose of this chapter is to describe the collection of thermal images by Landsat sensors already on orbit and to introduce the new thermal sensor to be launched in 2013. The chapter describes the thematic mapper (TM) and enhanced thematic mapper plus (ETM+) sensors, the calibration of their thermal bands, and the design and prelaunch calibration of the new thermal infrared sensor (TIRS).
High-Sensitivity Fiber-Optic Ultrasound Sensors for Medical Imaging Applications
Wen, H.; Wiesler, D.G.; Tveten, A.; Danver, B.; Dandridge, A.
2010-01-01
This paper presents several designs of high-sensitivity, compact fiber-optic ultrasound sensors that may be used for medical imaging applications. These sensors translate ultrasonic pulses into strains in single-mode optical fibers, which are measured with fiber-based laser interferometers at high precision. The sensors are simpler and less expensive to make than piezoelectric sensors, and are not susceptible to electromagnetic interference. It is possible to make focal sensors with these designs, and several schemes are discussed. Because of the minimum bending radius of optical fibers, the designs are suitable for single element sensors rather than for arrays. PMID:9691368
Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.
Kuzuu, K; Hasegawa, S
2015-11-01
A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of an acoustic field in a resonance tube is important for the development of the thermoacoustic engine, and can be conducted employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care needs to be taken with the location of the pressure sensors when conducting pressure measurements. In the present study, particle image velocimetry (PIV) is employed instead of a pressure measurement by a sensor, and two-dimensional velocity vector images are extracted as sequential data from only a one-time recording made by the PIV video camera. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from velocity amplitudes at two points by extending the equations derived for the two-sensor method. By means of this method, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments are conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV). Results of the proposed method are then compared with those obtained with the two-sensor method and LDV.
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these tests sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Multispectral image fusion for detecting land mines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.
1995-04-01
This report details a system which fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400 nm, 500 nm, 600 nm, 700 nm, 800 nm and 900 nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts.
Matsuba, Sota; Kato, Ryo; Okumura, Koichi; Sawada, Kazuaki; Hattori, Toshiaki
2018-01-01
In biochemistry, Ca²⁺ and K⁺ play essential roles in controlling signal transduction. Much interest has been focused on ion imaging, which facilitates understanding of ion flux dynamics. In this paper, we report a calcium and potassium multi-ion image sensor and its application to living cells (PC12). The multi-ion sensor had two selective plasticized poly(vinyl chloride) membranes containing ionophores. Each region on the sensor responded only to the corresponding ion. The multi-ion sensor has many advantages, including not only label-free and real-time measurement but also simultaneous detection of Ca²⁺ and K⁺. Cultured PC12 cells treated with nerve growth factor were prepared, and a practical observation of the cells was conducted with the sensor. After the PC12 cells were stimulated by acetylcholine, only the extracellular Ca²⁺ concentration increased, while there was no increase in the extracellular K⁺ concentration. Through this practical observation, we demonstrated that the sensor is helpful for analyzing cell events involving changing Ca²⁺ and/or K⁺ concentrations.
Image compression using singular value decomposition
NASA Astrophysics Data System (ADS)
Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.
2017-11-01
We often need to transmit and store images in many applications. The smaller the image, the lower the cost associated with transmission and storage, so data compression techniques are often applied to reduce the storage space consumed by an image. One approach is to apply Singular Value Decomposition (SVD) to the image matrix. In this method, SVD factors the digital image into three matrices, and the largest singular values are used to reconstruct an approximation of the image; at the end of this process the image is represented with a smaller set of values, reducing the required storage space. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, whether square or rectangular, invertible or not. Compression ratio and Mean Square Error are used as performance metrics.
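A minimal sketch of the rank-k SVD truncation described above, using NumPy; the function name and the choice of test matrix are illustrative:

```python
import numpy as np

def svd_compress(img, k):
    """Approximate a 2-D image matrix by its rank-k SVD truncation."""
    # Factor the image matrix: img = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    # Keep only the k largest singular values and their vectors
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    m, n = img.shape
    # Rank-k storage needs k*(m + n + 1) values instead of m*n
    ratio = (m * n) / (k * (m + n + 1))
    # Mean Square Error between original and reconstruction
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse
```

Increasing k lowers the Mean Square Error but shrinks the compression ratio; k is chosen to balance the two metrics.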
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
Alaska SAR Facility mass storage, current system
NASA Technical Reports Server (NTRS)
Cuddy, David; Chu, Eugene; Bicknell, Tom
1993-01-01
This paper examines the mass storage systems currently in place at the Alaska SAR Facility (ASF). The architecture of the facility is presented, including specifications of the mass storage media currently used and the performance realized from the various media. The distribution formats and media are also discussed. Because the facility is expected to service future sensors, the new requirements and possible solutions to these requirements are also discussed.
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald
2012-01-01
An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge coupled devices, in which silicon chips convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still frame video and optionally live video.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zujun, E-mail: wangzujun@nint.ac.cn; Huang, Shaoyan; Liu, Minbo
The experiments of displacement damage effects on CMOS APS image sensors induced by neutron irradiation from a nuclear reactor are presented. The CMOS APS image sensors are manufactured in a standard 0.35 μm CMOS technology. The flux of the neutron beam was about 1.33 × 10⁸ n/cm²·s. Three samples were exposed to 1 MeV neutron equivalent fluences of 1 × 10¹¹, 5 × 10¹¹, and 1 × 10¹² n/cm², respectively. The mean dark signal (K_D), dark signal spike, dark signal non-uniformity (DSNU), noise (V_N), saturation output signal voltage (V_S), and dynamic range (DR) versus neutron fluence are investigated. The degradation mechanisms of CMOS APS image sensors are analyzed. The mean dark signal increase due to neutron displacement damage appears to be proportional to displacement damage dose. The dark images from CMOS APS image sensors irradiated by neutrons are presented to investigate the generation of dark signal spikes.
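The reported proportionality between mean dark-signal increase and displacement damage dose can be illustrated with a zero-intercept least-squares fit. The fluence and dark-signal values below are hypothetical, not the paper's measured data:

```python
import numpy as np

# Hypothetical 1 MeV equivalent fluences (n/cm^2) and mean dark-signal
# increases (mV); the numbers are illustrative only, not measurements.
fluence = np.array([1e11, 5e11, 1e12])
dark_increase = np.array([2.1, 10.4, 20.9])

# Zero-intercept least-squares fit of dark_increase = k * fluence,
# expressing the claimed linear dependence on displacement damage dose
k = float(np.sum(fluence * dark_increase) / np.sum(fluence ** 2))
```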
Double-image storage optimized by cross-phase modulation in a cold atomic system
NASA Astrophysics Data System (ADS)
Qiu, Tianhui; Xie, Min
2017-09-01
A tripod-type cold atomic system driven by double-probe fields and a coupling field is explored to store double images based on electromagnetically induced transparency (EIT). During the storage time, an intensity-dependent signal field is applied to extend the system to a fifth level, and cross-phase modulation is thereby introduced for coherently manipulating the stored images. Both analytical analysis and numerical simulation clearly demonstrate that a tunable phase shift with low nonlinear absorption can be imprinted on the stored images, which can effectively improve the visibility of the reconstructed images. The phase shift and the energy retrieval rate of the probe fields are immune to the coupling intensity and the atomic optical density. The proposed scheme can easily be extended to the simultaneous storage of multiple images. This work may be exploited toward EIT-based multiple-image storage devices for all-optical classical and quantum information processing.
NASA Astrophysics Data System (ADS)
Schmidt-Hattenberger, C.; Weiner, M.; Liebscher, A.; Spangenberg, E.
2009-04-01
A fiber optic refractive index sensor is tested for continuous monitoring of fluid-fluid and fluid-gas interactions within the frame of laboratory investigations of CO2 storage, monitoring and safety technology research (COSMOS project, "Geotechnologien" program). The sensor is based on a Fabry-Perot white-light interferometer technique, in which the refractive index (RI) of the solution under investigation is measured by variation of the liquid-filled Fabry-Perot optical cavity length. Such a sensor system is typically used for measuring and controlling oil composition and fluid quality. The aim of this study is to test the application of the fiber optic refractive index sensor for monitoring CO2 dissolution in the formation fluids (brine, oil, gas) of CO2 storage sites. Monitoring and knowledge of the quantity, and especially the rate, of CO2 dissolution in the formation fluid is important for any assessment of the long-term risks of CO2 storage sites. It is also a prerequisite for any precise reservoir modelling. As a first step we performed laboratory experiments in standard autoclaves on a variety of different fluids and fluid mixtures (technical alcohols, pure water, CO2, synthetic brines, natural formation brine from the Ketzin test site). The RI measurements are partly combined with standard electrical conductivity and sonic velocity measurements. The fiber optic refractive index sensor system allows for RI measurements within the range 1.0000 to 1.7000 with a resolution of approximately 0.0001 RI. For simple binary fluid mixtures, first results indicate linear relationships between refractive index and fluid composition. Within the pressure range investigated (up to 60 bar), the data suggest only minor changes of RI with pressure. Further planned experiments will focus on determining (i) the temperature dependency of RI, (ii) the combined effects of pressure and temperature on RI, and finally (iii) the kinetics of CO2 dissolution in realistic formation fluids.
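The reported linear relationship between refractive index and binary fluid composition can be sketched by inverting a linear mixing rule. The end-member indices below are illustrative assumptions, not values from the study:

```python
def composition_from_ri(n_measured, n_a, n_b):
    """Invert the linear (illustrative) mixing rule n = x*n_a + (1-x)*n_b
    to estimate the fraction x of component A in a binary fluid mixture."""
    return (n_measured - n_b) / (n_a - n_b)

# Hypothetical end members: pure water (n = 1.3330) and a brine (n = 1.3450);
# a measured RI of 1.3390 then implies a 50/50 mixture under this rule.
x = composition_from_ri(1.3390, 1.3450, 1.3330)
```

With a resolution of 0.0001 RI, such a sensor would resolve composition steps of roughly 1% in this hypothetical pair.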
Commercial Sensor Survey Radiation Testing Progress Report
NASA Technical Reports Server (NTRS)
Becker, Heidi N.; Dolphic, Michael D.; Thorbourn, Dennis O.; Alexander, James W.; Salomon, Phil M.
2008-01-01
The NASA Electronic Parts and Packaging (NEPP) Program Sensor Technology Commercial Sensor Survey task is geared toward benefiting future NASA space missions with low-cost, short-duty-cycle, visible imaging needs. Such applications could include imaging for educational outreach purposes or short surveys of spacecraft, planetary, or lunar surfaces. Under the task, inexpensive commercial grade CMOS sensors were surveyed in fiscal year 2007 (FY07) and three sensors were selected for total ionizing dose (TID) and displacement damage dose (DDD) tolerance testing. The selected sensors had to meet selection criteria chosen to support small, low-mass cameras that produce good resolution color images. These criteria are discussed in detail in [1]. This document discusses the progress of radiation testing on the Micron and OmniVision sensors selected in FY07 for radiation tolerance testing.
A Recommender System in the Cyber Defense Domain
2014-03-27
monitoring software is a Java-based program sending updates to the database on the sensor machine. The host monitoring program gathers information about... 3.2.2 Database. A MySQL database located on the sensor machine acts as the storage for the sensors on the network. Snort, Nmap, vulnerability scores, and... machine with the IDS and the recommender is labeled "sensor". The recommender system code is written in Java and compiled using Java version 1.6.024
A Sensitive Measurement for Estimating Impressions of Image-Contents
NASA Astrophysics Data System (ADS)
Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao
We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei (sensibility). The SD (semantic differential) method is a very good way to evaluate the subjective impression of image-contents. However, because the SD method is performed after subjects view the image-contents, it is difficult to examine the impression of detailed scenes of the image-contents in real time. To measure the viewer's impression of image-contents in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate relations among the image-contents, grip strength and body temperature. We also explore the interface of the Taikan sensor to make it easy to use. In our experiment, a horror movie was used that strongly affects the emotions of the subjects. Our results show that the grip strength may increase when the subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base originally installed with it.
Low noise WDR ROIC for InGaAs SWIR image sensor
NASA Astrophysics Data System (ADS)
Ni, Yang
2017-11-01
Hybridized image sensors are at present the only solution for image sensing beyond the spectral response of silicon devices. By hybridization, we can combine the best sensing material and photo-detector design with high-performance CMOS readout circuitry. In the infrared band, we typically face two configurations: high-background and low-background situations. The performance of high-background sensors is conditioned mainly by the integration capacity of each pixel, which is the case for mid-wave and long-wave infrared detectors. In low-background situations, the detector's performance is limited mainly by the pixel's noise, which is conditioned by dark signal and readout noise. Under reflection-based imaging conditions, the pixel's dynamic range is also an important parameter. This is the case for SWIR-band imaging. We are particularly interested in InGaAs-based SWIR image sensors.
NASA Astrophysics Data System (ADS)
Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-01-01
A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed-signal CMOS process technology. Two photodiodes were stacked at the same position in each pixel of the CMOS image sensor. The stacked photodiodes consist of a shallow high-concentration N-type layer (N+), a P-type well (PW), a deep N-type well (DNW), and the P-type substrate (P-sub). The PW and P-sub were shorted to ground. By monitoring the voltages of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system for digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values of the two photodiodes.
Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.
Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander
2012-01-01
Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 µm CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.
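The 96 dB dynamic range figure follows the usual voltage-ratio definition. A minimal sketch, where the 63096:1 ratio is simply the value implied by 96 dB rather than a number from the paper:

```python
import math

def dynamic_range_db(v_max, v_noise):
    """Dynamic range in dB for voltage quantities: 20 * log10(Vmax/Vnoise)."""
    return 20.0 * math.log10(v_max / v_noise)

# 96 dB corresponds to roughly a 63000:1 ratio between the largest
# non-saturating signal and the noise floor (10 ** (96/20) ~ 63096)
dr = dynamic_range_db(63096.0, 1.0)
```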
Yield variability prediction by remote sensing sensors with different spatial resolution
NASA Astrophysics Data System (ADS)
Kumhálová, Jitka; Matějková, Štěpánka
2017-04-01
Currently, remote sensing sensors are very popular for crop monitoring and yield prediction. This paper describes how satellite images with moderate (Landsat) and very high (QuickBird and WorldView-2) spatial resolution, together with a GreenSeeker hand-held crop sensor, can be used to estimate yield and crop growth variability. Winter barley (2007 and 2015) and winter wheat (2009 and 2011) were chosen because cloud-free data were available for the experimental field in the same time period from Landsat and from QuickBird or WorldView-2. The very high spatial resolution images were resampled to the coarser spatial resolution. The normalised difference vegetation index was derived from each satellite image data set, and for 2015 it was also measured with the GreenSeeker hand-held crop sensor. Results showed that each satellite image data set can be used for estimating yield and plant variability. Nevertheless, better results in comparison with crop yield were obtained for images acquired in later phenological phases, e.g. in 2007 (BBCH 59, average correlation coefficient 0.856) and in 2011 (BBCH 59, 0.784). The GreenSeeker hand-held crop sensor was not suitable for yield estimation due to its different measuring method.
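The vegetation index derived from each image set is the standard normalised difference of near-infrared and red reflectance. A minimal sketch with illustrative pixel values (not from the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative reflectances: dense green canopy reflects strongly in the
# NIR and absorbs red, giving an NDVI near the top of the [-1, 1] range
v = float(ndvi(0.45, 0.05))
```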
Electric Potential and Electric Field Imaging with Applications
NASA Technical Reports Server (NTRS)
Generazio, Ed
2016-01-01
The technology and techniques for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for "illuminating" volumes to be inspected with EFI. The baseline sensor technology, the electric field sensor (e-sensor), and its construction, optional electric field generation (quasistatic generator), and current e-sensor enhancements (ephemeral e-sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to reveal characterization of electric charge distribution, creating a new field of study that embraces areas of interest including electrostatic discharge mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, inspection of containers, inspection for hidden objects, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.
CMOS image sensor for detection of interferon gamma protein interaction as a point-of-care approach.
Marimuthu, Mohana; Kandasamy, Karthikeyan; Ahn, Chang Geun; Sung, Gun Yong; Kim, Min-Gon; Kim, Sanghyo
2011-09-01
Complementary metal oxide semiconductor (CMOS)-based image sensors have received increased attention owing to the possibility of incorporating them into portable diagnostic devices. The present research examined the efficiency and sensitivity of a CMOS image sensor for the detection of antigen-antibody interactions involving interferon gamma protein, without the aid of expensive instruments. The highest detection sensitivity, of about 1 fg/ml primary antibody, was achieved simply by a transmission mechanism. When photons are prevented from reaching the sensor surface, the digital output is reduced; the number of photons hitting the sensor surface is approximately proportional to the digital number. Nanoscale variation in substrate thickness after protein binding can thus be detected with high sensitivity by the CMOS image sensor. Therefore, this technique can be easily applied to smartphones or any clinical diagnostic device for the detection of several biological entities, with high impact on the development of point-of-care applications.
Effective Fingerprint Quality Estimation for Diverse Capture Sensors
Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun
2010-01-01
Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features, then generates next-level features that are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Meanwhile, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-01-01
With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor, to enhance the performance of triangulation-based range finders, was also developed. An extensive series of experiments was conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applying the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well suited for distance measurements in this field. PMID:27879789
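The simple triangulation behind an active range finder reduces to d = f·b/x, where f is the lens focal length, b the source-to-sensor baseline, and x the laser-spot displacement on the image sensor. A minimal sketch; the parameters below are hypothetical, not the authors' hardware:

```python
def triangulation_distance(focal_mm, baseline_mm, pixel_offset, pitch_mm):
    """Active range finder geometry: d = f * b / x, where x is the
    laser-spot displacement on the sensor (pixels * pixel pitch)."""
    x = pixel_offset * pitch_mm
    return focal_mm * baseline_mm / x

# Hypothetical setup: 8 mm lens, 100 mm baseline, spot shifted 100 pixels
# on a sensor with 5 um pixel pitch; result is the target distance in mm
d = triangulation_distance(8.0, 100.0, 100, 0.005)
```

Because x shrinks as distance grows, resolution degrades with range, which is consistent with the percent-of-range resolution figures quoted in the abstract.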
NASA Technical Reports Server (NTRS)
Ong, Cindy; Mueller, Andreas; Thome, Kurtis; Pierce, Leland E.; Malthus, Timothy
2016-01-01
Calibration is the process of quantitatively defining a system's responses to known, controlled signal inputs, and validation is the process of assessing, by independent means, the quality of the data products derived from those system outputs [1]. Similar to other Earth observation (EO) sensors, the calibration and validation of spaceborne imaging spectroscopy sensors is a fundamental underpinning activity. Calibration and validation determine the quality and integrity of the data provided by spaceborne imaging spectroscopy sensors and have enormous downstream impacts on the accuracy and reliability of products generated from these sensors. At least five imaging spectroscopy satellites are planned to be launched within the next five years, with the two most advanced scheduled to be launched in the next two years [2]. The launch of these sensors requires the establishment of suitable, standardized, and harmonized calibration and validation strategies to ensure that high-quality data are acquired and comparable between these sensor systems. Such activities are extremely important for the community of imaging spectroscopy users. Recognizing the need to focus on this underpinning topic, the Geoscience Spaceborne Imaging Spectroscopy (previously, the International Spaceborne Imaging Spectroscopy) Technical Committee launched a calibration and validation initiative at the 2013 International Geoscience and Remote Sensing Symposium (IGARSS) in Melbourne, Australia, and a post-conference activity of a vicarious calibration field trip at Lake Lefroy in Western Australia.
CMOS image sensors as an efficient platform for glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo
2013-10-07
Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL⁻¹) based on photon count variation. Measurement of the concentration of oxidized glucose depended on changes in color intensity; color intensity increased with increasing glucose concentration. The high color density strongly prevented photons from passing through the polydimethylsiloxane (PDMS) chip, which suggests that the photon count was altered by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis proved the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.
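The described readout, where the digital number tracks the transmitted photon count and falls as glucose colour intensity rises, suggests a simple linear calibration. The standards and ADC values below are invented for illustration and are not data from the paper:

```python
import numpy as np

# Hypothetical calibration standards: known glucose concentrations (mg/dL)
# and the mean ADC digital numbers recorded for each (illustrative values;
# darker colour at higher concentration means fewer transmitted photons)
conc = np.array([86.0, 150.0, 250.0, 322.0])
digital = np.array([900.0, 780.0, 590.0, 455.0])

# Linear calibration digital = a * conc + b, inverted for unknown samples
a, b = np.polyfit(conc, digital, 1)
unknown_conc = float((640.0 - b) / a)  # concentration for a reading of 640
```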
Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin
2015-03-24
A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulations such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strain (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and exceptional small standard deviation of photoluminescence. High-quality tactile/pressure sensing distribution can be real-time recorded by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable way for smart sensing, micro/nano-opto-electromechanical systems.
Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.
He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P
2013-09-18
The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
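Conventional LDBF processing, which the on-chip pixel electronics replace, estimates flow from the first moment of the AC power spectrum normalised by the DC level. A software sketch of that standard scheme; the chip's actual per-pixel analog pipeline differs in implementation:

```python
import numpy as np

def perfusion_index(signal, fs):
    """Laser Doppler flow estimate: first moment of the AC power
    spectrum, normalised by the squared DC level (a standard LDBF
    processing scheme, shown here in software for illustration)."""
    signal = np.asarray(signal, dtype=float)
    dc = signal.mean()
    spectrum = np.abs(np.fft.rfft(signal - dc)) ** 2  # AC power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float((freqs * spectrum).sum() / dc ** 2)
```

The DC normalisation is the signal-normalisation step that the abstract notes is performed in the analog pixel electronics.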
Avrin, D E; Andriole, K P; Yin, L; Gould, R G; Arenson, R L
2001-03-01
A hierarchical storage management (HSM) scheme for cost-effective on-line archival of image data using lossy compression is described. This HSM scheme also provides an off-site tape backup mechanism and disaster recovery. The full-resolution image data are viewed originally for primary diagnosis, then losslessly compressed and sent off site to a tape backup archive. In addition, the original data are wavelet lossy compressed (at approximately 25:1 for computed radiography, 10:1 for computed tomography, and 5:1 for magnetic resonance) and stored on a large RAID device for maximum cost-effective, on-line storage and immediate retrieval of images for review and comparison. This HSM scheme provides a solution to 4 problems in image archiving, namely cost-effective on-line storage, disaster recovery of data, off-site tape backup for the legal record, and maximum intermediate storage and retrieval through the use of on-site lossy compression.
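The quoted compression ratios translate directly into on-line storage savings. A back-of-envelope sketch with assumed daily acquisition volumes per modality (the GB figures are illustrative, not from the paper):

```python
def compressed_gb(original_gb, ratio):
    """On-line size after lossy compression at the given ratio (25 = 25:1)."""
    return original_gb / ratio

# Assumed daily acquisition volumes (GB) per modality -- illustrative only
cr_gb, ct_gb, mr_gb = 10.0, 20.0, 15.0

# Applying the paper's ratios (CR 25:1, CT 10:1, MR 5:1) shrinks a
# hypothetical 45 GB/day load to a few GB of on-line RAID storage
online_gb = (compressed_gb(cr_gb, 25) + compressed_gb(ct_gb, 10)
             + compressed_gb(mr_gb, 5))
```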
Highly curved image sensors: a practical approach for improved optical performance
NASA Astrophysics Data System (ADS)
Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff
2017-06-01
The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied, yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure-membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in spherical curvature over prior approaches, whose mechanical constraints resist deformation and create a high-stress, stretch-dominated state. Our process creates a bridge between the high-precision, low-cost but planar CMOS process and ideal non-planar component shapes, such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state-of-the-art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high-performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.
Advancing the capabilities of reservoir remote sensing by leveraging multi-source satellite data
NASA Astrophysics Data System (ADS)
Gao, H.; Zhang, S.; Zhao, G.; Li, Y.
2017-12-01
With a total global capacity of more than 6,000 km³, reservoirs play a key role in the hydrological cycle and in water resources management. However, essential reservoir data (e.g., elevation, storage, and evaporation loss) are usually not shared at a large scale. While satellite remote sensing offers a unique opportunity for monitoring large reservoirs from space, the commonly used radar altimeters can only detect storage variations of about 15% of global lakes at a repeat period of 10 days or longer. To advance the capabilities of reservoir sensing, we developed a series of algorithms geared toward generating long-term reservoir records at improved spatial coverage and temporal resolution. Toward this goal, observations are leveraged from multiple satellite sensors, including radar/laser altimeters, imagers, and passive microwave radiometers. In South Asia, we demonstrate that reservoir storage can be estimated under all-weather conditions at a 4-day time step, with the total capacity of monitored reservoirs increased to 45%. Within the contiguous United States, a first Landsat-based evaporation loss dataset was developed (containing 204 reservoirs) from 1984 to 2011. The evaporation trends of these reservoirs are identified and the causes analyzed. All of these algorithms and products were validated with gauge observations. Future satellite missions, which will make significant contributions to monitoring global reservoirs, are also discussed.
Image Processing for Cameras with Fiber Bundle Image Relay
length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems... coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image... vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
NASA Astrophysics Data System (ADS)
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide a large multiplex, positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle, and the impact of its structure on wavefront measurement accuracy statistics, are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, accuracy is instead influenced by the read noise of the detector rather than the bundle's structure defects. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
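The first-moment centroid method used in the study is the standard centre-of-mass estimate over a sub-aperture's intensity distribution. A minimal sketch on a synthetic Gaussian spot (bundle sampling and structure defects are not modelled here):

```python
import numpy as np

def first_moment_centroid(spot):
    """Centre-of-mass (first-moment) centroid of a 2D intensity spot."""
    spot = np.asarray(spot, dtype=float)
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# Synthetic Gaussian spot centred on pixel (2, 2) of a 5x5 window
y, x = np.mgrid[0:5, 0:5]
spot = np.exp(-((x - 2.0) ** 2 + (y - 2.0) ** 2) / 2.0)
cy, cx = first_moment_centroid(spot)
```

In the paper's setting, the spot is sampled through the fibre bundle, so tile angle and gap spacing perturb these sums and hence the recovered slope.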
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Wick, Gary; Bosilovich, Michael G.
2005-01-01
Remote sensing methodologies for turbulent heat fluxes over oceans depend on driving bulk formulations of fluxes with measured surface winds and estimated near-surface thermodynamics from microwave sensors of the Special Sensor Microwave Imager (SSM/I) heritage. We review recent work with a number of SSM/I-based algorithms and investigate the ability of current data sets to document global, tropical ocean-averaged evaporation changes in association with El Nino and La Nina SST changes. We show that in addition to interannual signals, latent heat flux increases over the period since late 1987, ranging from approximately 0.1 to 0.6 mm/day, are present; these represent trends 2 to 3 times larger than the NCEP Reanalysis. Since atmospheric storage cannot account for the difference, and since compensating evapotranspiration changes over land are highly unlikely to be this large, these evaporation estimates cannot be reconciled with ocean precipitation records such as those produced by the Global Precipitation Climatology Project (GPCP). The reasons for the disagreement include less-than-adequate intercalibration between the SSM/I sensors providing winds and water vapor for driving the algorithms, and biases due to the assumption that column-integrated water vapor mirrors near-surface water vapor variations, among other factors. The reanalyses have their own problems with spin-up during assimilation, lack of constraining input data at the ocean surface, and amplitude of synoptic transients.
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-01-01
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human errors. In recent years, research therefore has focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity and to reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects is gathered using the classification results obtaining the quantity of grape bunches, berries and the berry diameter.
Design Considerations For Imaging Charge-Coupled Device (ICCD) Star Sensors
NASA Astrophysics Data System (ADS)
McAloon, K. J.
1981-04-01
A development program is currently underway to produce a precision star sensor using imaging charge coupled device (ICCD) technology. The effort is the critical component development phase for the Air Force Multi-Mission Attitude Determination and Autonomous Navigation System (MADAN). A number of unique considerations have evolved in designing an arcsecond accuracy sensor around an ICCD detector. Three tiers of performance criteria are involved: at the spacecraft attitude determination system level, at the star sensor level, and at the detector level. Optimum attitude determination system performance involves a tradeoff between Kalman filter iteration time and sensor ICCD integration time. The ICCD star sensor lends itself to the use of a new approach in the functional interface between the attitude determination system and the sensor. At the sensor level image data processing tradeoffs are important for optimum sensor performance. These tradeoffs involve the sensor optic configuration, the optical point spread function (PSF) size and shape, the PSF position locator, and the microprocessor locator algorithm. Performance modelling of the sensor mandates the use of computer simulation programs. Five key performance parameters at the ICCD detector level are defined. ICCD error characteristics have also been isolated to five key parameters.
An automated miniature robotic vehicle inspection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobie, Gordon; Summan, Rahul; MacLeod, Charles
2014-02-18
A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm, with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large-diameter pipe work. The robots carry one of a number of payloads, including a two-channel MFL sensor, a 5 MHz dry-coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure, overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.
Future opportunities for advancing glucose test device electronics.
Young, Brian R; Young, Teresa L; Joyce, Margaret K; Kennedy, Spencer I; Atashbar, Massood Z
2011-09-01
Advancements in the field of printed electronics can be applied to the field of diabetes testing. A brief history and some new developments in printed electronics components applicable to personal test devices, including circuitry, batteries, transmission devices, displays, and sensors, are presented. Low-cost, thin, and lightweight materials containing printed circuits with energy storage or harvest capability and reactive/display centers, made using new printing/imaging technologies, are ideal for incorporation into personal-use medical devices such as glucose test meters. Semicontinuous rotogravure printing, which utilizes flexible substrates and polymeric, metallic, and/or nano "ink" composite materials to effect rapidly produced, lower-cost printed electronics, is showing promise. Continuing research advancing substrate, "ink," and continuous processing development presents the opportunity for research collaboration with medical device designers. © 2011 Diabetes Technology Society.
Thermal Characterization of Carbon Nanotubes by Photothermal Techniques
NASA Astrophysics Data System (ADS)
Leahu, G.; Li Voti, R.; Larciprete, M. C.; Sibilia, C.; Bertolotti, M.; Nefedov, I.; Anoshkin, I. V.
2015-06-01
Carbon nanotubes (CNTs) are multifunctional materials commonly used in a large number of applications in electronics, sensors, nanocomposites, thermal management, actuators, energy storage and conversion, and drug delivery. Despite recent important advances in the development of CNT purity assessment tools and atomic resolution imaging of individual nanotubes by scanning tunnelling microscopy and high-resolution transmission electron microscopy, the macroscale assessment of the overall surface qualities of commercial CNT materials remains a great challenge. The lack of quantitative measurement technology to characterize and compare the surface qualities of bulk manufactured and engineered CNT materials has negative impacts on the reliable and consistent nanomanufacturing of CNT products. In this paper it is shown how photoacoustic spectroscopy and photothermal radiometry represent useful non-destructive tools to study the optothermal properties of carbon nanotube thin films.
Mousty, Christine; Leroux, Fabrice
2012-11-01
Based on an exhaustive overview of the applied academic literature and the patent domain, the relevance of Layered Double Hydroxides (LDHs) as electrode materials for the electrochemical detection of organic molecules with environmental or health impact, and for energy storage, is evaluated. Specifically, the focus is on their application as supercapacitors, alkaline or lithium batteries and (bio)sensors. Owing to the high versatility of their chemical composition, charge density and anion exchange capability, LDH-based materials are extensively studied, and their performances for such applications are reported. The analytical characteristics (sensitivity and detection limit) of LDH-based electrodes are scrutinized, and their specific capacity or capacitance as battery or supercapacitor electrode materials is detailed.
NASA Technical Reports Server (NTRS)
Oneil, William F.
1993-01-01
The fusion of radar and electro-optic (E-O) sensor images presents unique challenges. The two sensors measure different properties of the real three-dimensional (3-D) world. Forming the sensor outputs into a common format does not mask these differences. In this paper, the conditions under which fusion of the two sensor signals is possible are explored. The program currently planned to investigate this problem is briefly discussed.
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept related to the combination of sensor and synthetic images in order to enhance the situation awareness of a pilot during aircraft landing. A real-time algorithm for the fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels along different directions of the intensity gradient. The performed experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
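The projection step at the heart of the Projective Hough Transform, mapping edge pixels onto the runway plane, amounts to applying a plane-to-plane homography. A minimal sketch follows; the matrix H below is a made-up example, not the paper's camera calibration:

```python
import numpy as np

def project_to_ground(points_px, H):
    """Map pixel coordinates onto the ground (runway) plane via homography H."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous coords
    g = pts @ H.T
    return g[:, :2] / g[:, 2:3]  # divide out the projective scale

# Hypothetical homography: identity plus a perspective term (assumption)
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.002, 1.0]])

edges = np.array([[100.0, 200.0], [150.0, 220.0]])  # detected edge pixels
ground = project_to_ground(edges, H)
```

Once the edge map lives on the runway plane, accumulating projections of edge pixels along gradient directions turns the runway borders into sharp peaks that are easy to detect.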
NASA Astrophysics Data System (ADS)
Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun
2014-09-01
Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to address this problem. The first method applies the latest sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. After such connections are built, at the test stage it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the EER is decreased by 39.4% relative by the proposed method.
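The correspondence-learning idea can be sketched with a plain least-squares mapping between paired patch spaces, a simplified stand-in for the paper's sparse-representation and PCA-based methods (all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired training patches (rows) from sensor A and sensor B; here B is an
# exact linear transform of A so the learned mapping can be checked
X_a = rng.normal(size=(200, 16))          # flattened 4x4 patches, sensor A
W_true = rng.normal(size=(16, 16))        # hidden cross-sensor relation
X_b = X_a @ W_true                        # corresponding sensor-B patches

# Learn the cross-sensor mapping by least squares
W, *_ = np.linalg.lstsq(X_a, X_b, rcond=None)

# "Hallucinate" sensor-B patches from previously unseen sensor-A patches
X_new = rng.normal(size=(5, 16))
X_hall = X_new @ W
```

In the actual methods the relation between sensors is learned per dictionary atom rather than as one global matrix, but the principle, synthesizing test-sensor patches from enrollment-sensor patches, is the same.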
Imaging optical sensor arrays.
Walt, David R
2002-10-01
Imaging optical fibres have been etched to prepare microwell arrays. These microwells have been loaded with sensing materials such as bead-based sensors and living cells to create high-density sensor arrays. The extremely small sizes and volumes of the wells enable high sensitivity and high information content sensing capabilities.
Payload Configurations for Efficient Image Acquisition - Indian Perspective
NASA Astrophysics Data System (ADS)
Samudraiah, D. R. M.; Saxena, M.; Paul, S.; Narayanababu, P.; Kuriakose, S.; Kiran Kumar, A. S.
2014-11-01
The world is increasingly depending on remotely sensed data. The data is regularly used for monitoring earth resources and also for solving problems of the world like disasters, climate degradation, etc. Remotely sensed data has changed our perspective of understanding of other planets. With innovative approaches in data utilization, the demands on remote sensing data are ever increasing. More and more research and development is taken up for data utilization. Satellite resources are scarce and each launch costs heavily. Each launch is also associated with a large effort for developing the hardware prior to launch, as well as a large number of software elements and mathematical algorithms post-launch. The proliferation of low-earth and geostationary satellites has led to increased scarcity in the available orbital slots for newer satellites. The Indian Space Research Organisation has always tried to maximize the utility of satellites. Multiple sensors are flown on each satellite. In each of the satellites, sensors are designed to cater to various spectral bands/frequencies, spatial and temporal resolutions. Bhaskara-1, the first experimental satellite, started with 2 bands in the electro-optical spectrum and 3 bands in the microwave spectrum. The recent Resourcesat-2 incorporates a very efficient image acquisition approach with multi-resolution (3 types of spatial resolution), multi-band (4 spectral bands) electro-optical sensors (LISS-4, LISS-3* and AWiFS). The system has been designed to provide data globally with various data reception stations and onboard data storage capabilities. The Oceansat-2 satellite has a unique sensor combination, with an 8-band electro-optical high-sensitivity ocean colour monitor (catering to ocean and land) along with a Ku-band scatterometer to acquire information on ocean winds. INSAT-3D, launched recently, provides high-resolution 6-band image data in the visible, short-wave, mid-wave and long-wave infrared spectrum.
It also has a 19-band sounder for providing vertical profiles of water vapour, temperature, etc. The same system has data relay transponders for acquiring data from weather stations. Payload configurations have gone through significant changes over the years to increase the data rate per kilogram of payload. Future Indian remote sensing systems are planned with very efficient ways of image acquisition. This paper analyses the strides taken by ISRO (Indian Space Research Organisation) in achieving high efficiency in remote sensing image data acquisition. Parameters related to the efficiency of image data acquisition are defined and a methodology is worked out to compute them. Some of the Indian payloads are analysed with respect to the system/subsystem parameters that decide the configuration of a payload. Based on the analysis, possible configuration approaches that can provide high efficiency are identified. A case study is carried out with an improved configuration, and the results of the efficiency improvements are reported. This methodology may be used for assessing other electro-optical payloads or missions and can be extended to other types of payloads and missions.
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage at the maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
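The XOR step can be mimicked in software: binarize each pixel to a single bit, then XOR neighbouring bits so that only transitions (edges) survive. The threshold and toy image below are illustrative, not the sensor's actual parameters:

```python
import numpy as np

def xor_edges(img, threshold=128):
    """Binarize to 1 bit, then XOR each pixel with its right-hand neighbour:
    an edge is flagged wherever adjacent single-bit values differ."""
    bits = (np.asarray(img) >= threshold).astype(np.uint8)
    return bits[:, :-1] ^ bits[:, 1:]

# Toy 8-bit image: dark left half, bright right half -> one vertical edge
img = np.zeros((4, 8), dtype=np.uint8)
img[:, 4:] = 255
edges = xor_edges(img)
```

The payoff mirrored in the paper: the edge map needs only a 1-bit comparison per pixel pair instead of a full multi-bit ADC conversion, which is what saves conversion time and power.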
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats each sensor as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow it to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Convolution Operation of Optical Information via Quantum Storage
NASA Astrophysics Data System (ADS)
Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan
2017-06-01
We propose a novel method to achieve the optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of a 4f-imaging system, the optical convolution of the two input images can be achieved in the image plane.
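Numerically, the Fourier-plane scheme corresponds to the convolution theorem: multiply the spectra of the two input images and transform back. The sketch below is a digital analogue of the 4f geometry, not a model of the EIT physics:

```python
import numpy as np

def fourier_convolve(a, b):
    """Circularly convolve two images by multiplying their 2D Fourier
    transforms, the digital analogue of combining fields in the confocal
    Fourier plane of a 4f imaging system."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

a = np.zeros((8, 8)); a[2, 3] = 1.0              # a single bright point
b = np.zeros((8, 8)); b[0, 0] = 1.0; b[1, 1] = 0.5
c = fourier_convolve(a, b)                        # b shifted to the point in a
```

Convolving with a delta simply translates the other image, which makes the result easy to check and illustrates why a multiplicative operation in the Fourier plane yields a convolution in the image plane.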
Evaluation of an Interferometric Sensor for In-Space Detection of Gas Leaks
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Korman, Valentin; Sinko, John; Hendrickson, Adam
2009-01-01
Space mission planning often involves long-term storage of volatile liquids or high-pressure gases. These may include cryogenic fuels and oxidizers, high-pressure gases, and life-support-critical consumables. The risk associated with the storage of fluids and gases in space systems has long been an issue, and the ability to retain these fluids is often tied to mission success. A leak in the storage or distribution system can cause many different problems, including a simple, but mission-endangering, loss of inventory or, in severe cases, unbalanced thrust loads on a flight vehicle. Cryogenic propellants are particularly difficult to store over long durations. The propellant can boil off and be lost through the insulating walls of the tank, or simple thermal cycling of the fittings, valves, and propellant feed lines may unseat seals, allowing the fluid to escape. Current NASA missions call for long-duration in-space storage of propellants, oxidizers, and life support supplies. Leaks of a scale detectable through a pressure drop in the storage tank are often catastrophic and have long been the focus of ground-based mitigation efforts, where redundant systems are often employed. However, there is presently no technology available for detecting and monitoring low-level, but still mission-endangering, gas leaks in space. Standard in-space gas detection methods either have a very limited pressure range over which they operate effectively or are limited to certain gases. Mass spectrometer systems are able to perform the detection tasks, but their size, mass and use of high voltage, which could potentially lead to an arc that ignites a combustible propellant, severely limit their usefulness in a space system. In this paper, we present results from testing of the light-based interferometric gas monitoring and leak detection sensor shown in Fig. 1.
The output of the sensor is an interference fringe pattern that is a function of the gas density, and commensurate index of refraction, in the sample region. Changes in the density of gas cause the interference fringes to move across a photodiode detector, providing a temporal history of the leak. The sensor is fiber coupled and constructed from solid optics, allowing for placement almost anywhere on the spacecraft. It is also advantageous in that it consumes very little power and does not introduce an ignition source. Data are presented demonstrating the capability of the sensor to measure density variations in different gas species. In addition, the transient response of the sensor in vacuum is demonstrated. These data extend and improve upon the results previously presented by the authors in Ref. [1].
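The fringe count relates to gas density through the refractive index: over a sample path of length L, roughly (n - 1)L/lambda fringes cross the detector as the path fills with gas. With illustrative numbers (air at standard conditions, a 10 cm path, a 633 nm laser; these are assumptions, not the sensor's actual parameters):

```python
def fringe_shift(n_gas, path_m, wavelength_m):
    """Number of interference fringes moved when a sample arm of length
    path_m fills with gas of refractive index n_gas (vacuum reference)."""
    return (n_gas - 1.0) * path_m / wavelength_m

# Air (n ~ 1.000293) over a 10 cm path at 633 nm
shift = fringe_shift(1.000293, 0.10, 633e-9)
```

Because (n - 1) is proportional to gas density for dilute gases, counting fringes over time directly yields the density history of a leak, which is the quantity the photodiode records.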
Hierarchical storage of large volumes of multidetector CT data using distributed servers
NASA Astrophysics Data System (ADS)
Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David
2006-03-01
Multidetector scanners and hybrid multimodality scanners have the ability to generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D-rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple Computer with a capacity of 5 TBytes. We implemented peer-to-peer data access software based on our open-source image management software, OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called "Bonjour". This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Mitra, Sunanda
2002-05-01
Due to the huge volumes of radiographic images to be managed in hospitals, efficient compression techniques yielding no perceptual loss in the reconstructed images are becoming a requirement in the storage and management of such datasets. A wavelet-based multi-scale vector quantization scheme that generates a global codebook for efficient storage and transmission of medical images is presented in this paper. The results obtained show that, even at low bit rates, one is able to obtain reconstructed images with perceptual quality higher than that of the state-of-the-art scalar quantization method, set partitioning in hierarchical trees.
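The two ingredients, a wavelet decomposition and codebook-based vector quantization, can be sketched as follows; the one-level Haar transform and the two-entry codebook are toy stand-ins for the paper's multi-scale transform and trained global codebook:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 4, (a - b + c - d) / 4, \
           (a + b - c - d) / 4, (a - b - c + d) / 4

def vq_encode(vectors, codebook):
    """Map each coefficient vector to the index of its nearest codeword."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

img = np.arange(64, dtype=float).reshape(8, 8)   # toy "radiograph"
LL, LH, HL, HH = haar2d(img)

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])  # toy global codebook
idx = vq_encode(np.array([[0.2, -0.1], [9.0, 11.0]]), codebook)
```

Only the codeword indices (plus the shared codebook) need to be stored or transmitted, which is where the rate saving over per-coefficient scalar quantization comes from.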
High-speed imaging optical techniques for shockwave and droplets atomization analysis
NASA Astrophysics Data System (ADS)
Slangen, Pierre R.; Lauret, Pierre; Heymes, Frederic; Aprin, Laurent; Lecysyn, Nicolas
2016-12-01
Droplet atomization by a shockwave can occur as a consequence of domino effects in an industrial facility: aggression of a storage tank (e.g. by a projectile from a previous event) can cause leakage of hazardous material (toxic and flammable). As the accident evolves, a secondary event can generate a blast, impacting the droplets and resulting in their atomization. The resulting increase in exchange surface raises the evaporation rate, which can be an issue in case of dispersion of such a cloud. The experiments conducted in the lab generate a shockwave with an open-ended shock tube to break up liquid droplets. As the expected shockwave speed is about 400 m/s (~Mach 1.2), the interaction with falling drops is very short. High-speed imaging is performed at about 20,000 fps. The shockwave is measured using overpressure sensors together with optical techniques: particle image velocimetry and pure in-line shadowgraphy. The size of the fragmented droplets is optically measured by direct shadowgraphy simultaneously in different directions. In these experiments, secondary breakups of a droplet into a large number of smaller droplets by the shockwave-induced flow are shown. The results of the optical characterizations are discussed in terms of shape, velocity, and size.
Solid-state image sensor with focal-plane digital photon-counting pixel array
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
1995-01-01
A photosensitive layer such as a-Si for a UV/visible wavelength band is provided for low light level imaging with at least a separate CMOS amplifier directly connected to each PIN photodetector diode to provide a focal-plane array of NxN pixels, and preferably a separate photon-counting CMOS circuit directly connected to each CMOS amplifier, although one row of counters may be time shared for reading out the photon flux rate of each diode in the array, together with a buffer memory for storing all rows of the NxN image frame before transfer to suitable storage. All CMOS circuitry is preferably fabricated in the same silicon layer as the PIN photodetector diode for a monolithic structure, but when the wavelength band of interest requires photosensitive material different from silicon, the focal-plane array may be fabricated separately on a different semiconductor layer bump-bonded or otherwise bonded for a virtually monolithic structure with one free terminal of each diode directly connected to the input terminal of its CMOS amplifier and digital counter for integration of the photon flux rate at each photodetector of the array.
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation-invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights of output units affected by the loss.
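The compensation idea can be sketched as learning least-squares weights that predict a lost sensor's reading from the surviving sensors, which succeeds exactly when the inputs remain adequately sampled. The band-limited signals and the eight-sensor array below are illustrative assumptions, not the paper's network formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8, endpoint=False)        # 8 sensor positions

# Training "images": band-limited signals (frequencies 1-3, random phases)
train = np.array([np.cos(2 * np.pi * (f * t + rng.random()))
                  for f in (1, 2, 3) for _ in range(40)])

lost = 3                                         # index of the failed sensor
keep = [i for i in range(8) if i != lost]

# Learn linear weights that reconstruct the lost sensor from the survivors
w, *_ = np.linalg.lstsq(train[:, keep], train[:, lost], rcond=None)

x = np.cos(2 * np.pi * (2 * t + 0.123))          # a fresh band-limited input
estimate = x[keep] @ w                           # compensated reading
```

Because the seven remaining samples still uniquely determine any signal in this band-limited family, the learned weights recover the missing reading exactly; with undersampled inputs the same fit only approximates it, which is the regime where the paper restricts the weight adjustment.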
Future Directions for Selected Topics in Physics and Materials Science
2012-07-12
Topics covered include: lightides (e.g. borides, nitrides, phosphides); materials for energy conversion, energy storage, energy transport and energy production; distributed nanosystems and sensors; a strategy for multilayered combinatorics; and precision control.
Compressive Sensing Image Sensors-Hardware Implementation
Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram
2013-01-01
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.
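The core CS acquisition step, taking far fewer coded measurements than there are pixels, can be sketched as follows. The ±1 measurement patterns and the 1-sparse scene are illustrative; practical recovery uses sparse solvers, and the simple correlation shown suffices only for this toy case:

```python
import numpy as np

rng = np.random.default_rng(7)
N, M = 64, 16                     # 64-pixel scene, 16 compressive measurements

# Random +/-1 measurement patterns, normalized so columns have unit energy
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

x = np.zeros(N); x[42] = 1.0      # a 1-sparse scene (single bright pixel)
y = Phi @ x                       # simultaneous sensing and compression

# For a 1-sparse scene, the strongest correlation identifies the pixel
# (with high probability for random patterns)
scores = np.abs(Phi.T @ y)
recovered = scores.argmax()
```

Each measurement mixes all pixels at once, so a low-resolution detector (here, 16 readings) carries enough information to reconstruct a higher-resolution sparse scene, which is the advantage the hardware implementations in the paper exploit.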
Imaging through water turbulence with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2016-09-01
A plenoptic sensor can be used to improve the image formation process in a conventional camera. The conventional image is mapped to an image array that represents the image's photon paths along different angular directions, so the sensor can be used to resolve imaging problems where severe distortion occurs. In particular, for objects observed at moderate range (10 m to 200 m) through turbulent water, the image can be distorted beyond recognition, and correction algorithms need to be applied. In this paper, we show how to use a plenoptic sensor to recover an unknown object in the line of sight through significant water turbulence distortion. In general, our approach can be applied to both atmospheric turbulence and water turbulence conditions.
Dual light field and polarization imaging using CMOS diffractive image sensors.
Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha
2015-05-15
In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters to reduce dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites gives the image sensor very high sensitivity, but also makes it prone to noise effects such as hot-pixels. As in earlier research with older smartphone models, no appreciable temperature effects were observed in the overall average pixel values for images taken at ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant across multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the uniformity with temperature masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. This research provides a methodology for characterising the dark noise behaviour of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
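The thresholding-plus-median-filter removal described above can be sketched as follows. The 9 DN threshold and 7 × 7 window come from the abstract; the synthetic dark frame and the filter implementation are illustrative assumptions:

```python
import numpy as np

def remove_hot_pixels(img, threshold=9, win=7):
    """Replace pixels above `threshold` DN with the median of their
    win x win neighbourhood (edges handled by reflective padding)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    # Only pixels flagged as hot are touched; the rest of the frame is kept.
    for r, c in zip(*np.nonzero(img > threshold)):
        out[r, c] = np.median(padded[r:r + win, c:c + win])
    return out

# A synthetic dark frame: mostly 0-3 DN with two simulated hot-pixels.
rng = np.random.default_rng(1)
dark = rng.integers(0, 4, size=(32, 32)).astype(float)
dark[5, 7] = 120.0
dark[20, 15] = 55.0

clean = remove_hot_pixels(dark)
```

Because only above-threshold pixels are replaced, genuine low-level dark structure survives, which is why the thresholded variant is preferable to blindly median-filtering the whole frame.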
Phase aided 3D imaging and modeling: dedicated systems and case studies
NASA Astrophysics Data System (ADS)
Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang
2014-05-01
Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which has been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple node 3D sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, the generation of high-quality color models of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.
UTOFIA: an underwater time-of-flight image acquisition system
NASA Astrophysics Data System (ADS)
Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris
2017-10-01
In this article the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, the pixel structure that best suits the requirements has been selected. In an extensive underwater characterization, the capability of distance measurement in turbid environments is demonstrated.
Electric potential and electric field imaging
NASA Astrophysics Data System (ADS)
Generazio, E. R.
2017-02-01
The technology and methods for remote quantitative imaging of electrostatic potentials and electrostatic fields in and around objects and in free space are presented. Electric field imaging (EFI) technology may be applied to characterize intrinsic or existing electric potentials and electric fields, or an externally generated electrostatic field may be used for "illuminating" volumes to be inspected with EFI. The baseline sensor technology (e-Sensor) and its construction, optional electric field generation (quasi-static generator), and current e-Sensor enhancements (ephemeral e-Sensor) are discussed. Demonstrations for structural, electronic, human, and memory applications are shown. This new EFI capability is demonstrated to characterize electric charge distribution, creating a new field of study that embraces areas of interest including electrostatic discharge (ESD) mitigation, crime scene forensics, design and materials selection for advanced sensors, dielectric morphology of structures, tether integrity, organic molecular memory, and medical diagnostic and treatment efficacy applications such as cardiac polarization wave propagation and electromyography imaging.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Protection performance evaluation regarding imaging sensors hardened against laser dazzling
NASA Astrophysics Data System (ADS)
Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd
2015-05-01
Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist now, but none of them satisfies the operational requirements without any constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches based on triangle orientation discrimination on the one hand and structural similarity on the other hand. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns which is superimposed by dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
A micro-vibration generated method for testing the imaging quality on ground of space remote sensing
NASA Astrophysics Data System (ADS)
Gu, Yingying; Wang, Li; Wu, Qingwen
2018-03-01
In this paper, a novel method is proposed that can simulate satellite platform micro-vibration and test the impact of satellite micro-vibration on the imaging quality of a space optical remote sensor on the ground. The method can reproduce the in-orbit micro-vibration of the satellite platform in terms of vibrational degrees of freedom, spectrum, magnitude, and coupling path. Experimental results show that the relative error of acceleration control is within 7% at frequencies from 7 Hz to 40 Hz. Using this method, a system-level test of the impact of micro-vibration on the imaging quality of a space optical remote sensor can be realized. The method has important applications in testing the micro-vibration tolerance margin of optical remote sensors, verifying their vibration isolation and suppression performance, and exploring the principles of how micro-vibration affects their imaging quality.
Usefulness of chemical-shift MRI in discriminating increased liver echogenicity in glycogenosis.
Pozzato, C; Dall'asta, C; Radaelli, G; Torcoletti, M; Formenti, A; Riva, E; Cornalba, G; Pontiroli, A E
2007-11-01
Glycogen storage diseases are inherited defects which cause accumulation of glycogen in the tissues. Hepatic steatosis is defined as accumulation of fat within hepatocytes. On sonography, the liver shows increased echogenicity both in glycogen storage diseases and in steatosis. Liver hyperechogenicity in glycogen storage diseases may depend on accumulation of glycogen and/or fat. Chemical-shift magnetic resonance imaging can discriminate tissues containing only water from those containing both fat and water. The primary aim of the present study was to evaluate the usefulness of liver chemical-shift magnetic resonance imaging for detecting liver steatosis in patients with metabolic impairment due to glycogen storage diseases. Twelve patients with type I (n=8) or type III (n=4) glycogen storage disease were studied and compared to 12 obese or overweight subjects with known liver steatosis. As a control group, 12 lean healthy volunteers were recruited. The liver was evaluated by sonography and by chemical-shift magnetic resonance imaging to calculate the hepatic fat fraction. A significant difference in echogenicity between patients with glycogen storage diseases and normal subjects was observed (p<0.05), while no such difference was present between overweight-obese subjects and glycogen storage disease patients. In contrast, the fat fraction was similar between glycogen storage disease patients and normal subjects, and differed between glycogen storage disease patients and overweight-obese subjects (p<0.05). The present data suggest that chemical-shift magnetic resonance imaging may exclude fat deposition as a cause of liver hyperechogenicity in subjects with glycogen storage diseases.
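The hepatic fat fraction used above is conventionally computed from the in-phase and opposed-phase signal intensities. This is the standard two-point Dixon formula, assumed here rather than quoted from the paper, with illustrative signal values:

```python
def hepatic_fat_fraction(in_phase, out_of_phase):
    """Chemical-shift fat fraction: FF = (IP - OP) / (2 * IP).

    `in_phase` / `out_of_phase` are mean liver signal intensities measured
    on the in-phase and opposed-phase gradient-echo images.
    """
    return (in_phase - out_of_phase) / (2.0 * in_phase)

# Steatotic liver: opposed-phase signal drops markedly, so FF is clearly positive.
ff_steatosis = hepatic_fat_fraction(220.0, 150.0)
# Glycogen accumulation without fat: little signal drop, FF stays near zero.
ff_glycogen = hepatic_fat_fraction(220.0, 214.0)
```

The study's finding maps directly onto this quantity: glycogenosis patients are hyperechoic on ultrasound but show a near-zero fat fraction, like the second case above.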
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used for forecasting the imaging quality of a new hyperspectral imaging spectrometer, and for generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to compromise between the direction error and the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But because of the limited radiant power, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. Simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface
Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun
2016-01-01
Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can convert efficiently the detailed spatial information of the probed object into one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the desired imaging resolution is related to the distance between the object and metasurface. When the object is placed in the vicinity of the metasurface, the super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition, high-/super-resolution images without employing expensive hardware (e.g. mechanical scanner, antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z;
1994-01-01
JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.
Can direct electron detectors outperform phosphor-CCD systems for TEM?
NASA Astrophysics Data System (ADS)
Moldovan, G.; Li, X.; Kirkland, A.
2008-08-01
A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSD), showing that ideal and nearly ideal transfer can be obtained with DSSD and MAPS sensors. These results strongly support replacing current phosphor-screen and charge-coupled-device imaging systems with such directly exposed position-sensitive electron detectors.
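The modulation-transfer comparison can be illustrated without a full Monte Carlo electron-transport model: the MTF is the normalised Fourier magnitude of the detector's line-spread function (LSF), so an architecture that spreads deposited signal over fewer pixels retains higher transfer. A toy numpy sketch, where the Gaussian LSF widths are illustrative stand-ins for simulated electron tracks:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the normalised magnitude of the Fourier transform of an LSF."""
    otf = np.abs(np.fft.rfft(lsf))
    return otf / otf[0]  # normalise so MTF(0) = 1

# Toy line-spread functions: a directly exposed sensor that confines charge
# narrowly (DSSD/MAPS-like) versus a phosphor-coupled chain that spreads it.
x = np.arange(-32, 33, dtype=float)
narrow = np.exp(-0.5 * (x / 1.0) ** 2)
wide = np.exp(-0.5 * (x / 4.0) ** 2)

mtf_narrow = mtf_from_lsf(narrow)
mtf_wide = mtf_from_lsf(wide)
```

At any mid spatial frequency the narrow-LSF detector retains far more contrast, which is the quantitative sense in which direct detection can outperform phosphor-CCD systems.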
Automation of peanut drying with a sensor network including an in-shell kernel moisture sensor
USDA-ARS?s Scientific Manuscript database
Peanut drying is an essential task in the processing and handling of peanuts. Peanuts leave the fields with kernel moisture contents > 20% wet basis and need to be dried to < 10.5% w.b. for grading and storage purposes. Current peanut drying processes utilize decision support software based on model...
NASA Astrophysics Data System (ADS)
Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza
2015-05-01
In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The point spread function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A method for quantitatively evaluating Lucy-Richardson deconvolution is also presented.
Imaging through turbulence using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
Atmospheric turbulence can significantly affect imaging along paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods can't handle these problems efficiently: the time required for lucky imaging can increase significantly, and image processing approaches require much more complex and iterative de-blurring algorithms. We propose an alternative approach that uses a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array (MLA) to form a mini Keplerian telescope array. The image obtained by a conventional method is thereby separated into an array of images that contain multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.
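The "most stable pixel" selection step can be sketched as follows. This minimal numpy version uses closeness to the per-pixel temporal median as the stability criterion, which is an assumption for illustration, not the authors' exact algorithm:

```python
import numpy as np

def lucky_pixel_reconstruction(frames):
    """Per-pixel lucky imaging: for each pixel, keep the sample from the
    frame whose value lies closest to the temporal median, i.e. the most
    stable observation across the video."""
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)                 # temporal median image
    best = np.argmin(np.abs(frames - med), axis=0)  # most stable frame per pixel
    rows, cols = np.indices(med.shape)
    return frames[best, rows, cols]

# A static scene plus sparse per-frame "turbulence" spikes.
rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, size=(16, 16))
frames = np.repeat(scene[None], 9, axis=0)
for f in frames:                                    # corrupt a few pixels per frame
    r = rng.integers(0, 16, size=5)
    c = rng.integers(0, 16, size=5)
    f[r, c] += rng.uniform(1, 3, size=5)

recon = lucky_pixel_reconstruction(frames)
```

Because each pixel is corrupted in only a minority of frames, the temporal median matches the true scene and the selection recovers an uncorrupted sample at every pixel.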
Digital dental radiology in Belgium: a nationwide survey.
Snel, Robin; Van De Maele, Ellen; Politis, Constantinus; Jacobs, Reinhilde
2018-06-27
The aim of this study is to analyse the use of digital dental radiology in Belgium, focussing on the use of extraoral and intraoral radiographic techniques, digitalisation and image communication. A nationwide survey was performed amongst Belgian general dentists and dental specialists. Questionnaires were distributed digitally via mailing lists and manually at multiple refresher courses and congresses throughout the country. The overall response rate was 30%. Overall, 94% of the respondents had access to an intraoral radiographic unit, 76% had access to a panoramic unit, and 21% had an attached cephalometric arm. One in five Belgian dentists also seemed to have direct access to a cone beam CT. 90% of all intraoral radiography units worked with digital detectors, as did 91% of panoramic units (with or without cephalometrics). In 70% of the cases, general dental practitioners with a digital intraoral unit used a storage phosphor plate, while in 30% of the cases they used sensor technology (charge-coupled device or complementary metal-oxide-semiconductor). The most common method for professional image transfer appeared to be email. Finally, 16% of all respondents used a calibrated monitor for image analysis. The survey indicates that 90% of the respondents, Belgian dentists, make use of digital imaging techniques. For sharing images, general dental practitioners mainly use methods such as printouts and e-mail. The usage of calibrated monitors, however, is not yet well established.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
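The real-time magnitude and phase detection of the downconverted intermediate-frequency signal amounts to digital I/Q demodulation at each pixel. A per-pixel sketch, where the sample rate, IF, and signal values are illustrative assumptions rather than the LEI system's actual parameters:

```python
import numpy as np

def lockin(signal, f_if, fs):
    """Digital I/Q detection: mix the sampled signal with quadrature
    references at the intermediate frequency and average, yielding the
    magnitude and phase of the modulation."""
    t = np.arange(signal.shape[-1]) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_if * t), axis=-1)
    q = np.mean(signal * -np.sin(2 * np.pi * f_if * t), axis=-1)
    return 2 * np.hypot(i, q), np.arctan2(q, i)

# One pixel's samples: 5 kHz IF, amplitude 0.7, phase 0.4 rad, sampled at
# 100 kHz over exactly 10 periods so the averages are unbiased.
fs, f_if = 100e3, 5e3
t = np.arange(200) / fs
s = 0.7 * np.cos(2 * np.pi * f_if * t + 0.4)

mag, phase = lockin(s, f_if, fs)
```

Averaging over an integer number of IF periods cancels the double-frequency mixing terms exactly, so the amplitude and phase are recovered without bias.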
Methods and apparatuses for detection of radiation with semiconductor image sensors
Cogliati, Joshua Joseph
2018-04-10
A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
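The composite-image idea, removing noise common to all exposures and then thresholding what remains, can be sketched as follows. Using the per-pixel median of the stack as the common-noise estimate and a single threshold is a simplification of the patented method, and the frame values are synthetic:

```python
import numpy as np

def radiation_hits(exposures, threshold):
    """Flag pixels that absorbed energy from high-energy photons: subtract
    the per-pixel median of the exposure stack (common fixed-pattern noise),
    then keep pixels whose residual exceeds `threshold` in any exposure."""
    exposures = np.asarray(exposures, dtype=float)
    common = np.median(exposures, axis=0)      # shared sensor background
    residual = exposures - common              # what individual events added
    return np.argwhere(residual.max(axis=0) > threshold)

# Dark frames sharing fixed-pattern noise, plus two simulated gamma hits.
rng = np.random.default_rng(3)
pattern = rng.uniform(0, 5, size=(24, 24))
frames = np.stack([pattern + rng.normal(0, 0.1, size=(24, 24))
                   for _ in range(8)])
frames[2, 10, 10] += 80.0   # energy deposited by a high-energy photon
frames[5, 3, 17] += 60.0

hits = radiation_hits(frames, threshold=20.0)
```

Because each hit appears in only one exposure, the median composite is unaffected by it, and the residual isolates the radiated pixels cleanly from the fixed-pattern background.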
1992-10-01
The concept of using a temperature sensor in a pill as a clinical thermometer is good, but mobility of the pill makes it less suitable as a research...Human Technologies, Inc. (CorTemp; St. Petersburg, FL). Both systems included an ingestible temperature sensor/pill, a receiver, and a data storage...telemetry pills did, and T, and T. showed a faster response to changing core temperature than did T,.
Mixed Traffic Information Collection System based on Pressure Sensor
NASA Astrophysics Data System (ADS)
Liao, Wenzhe; Liu, Mingsheng; Meng, Qingli
Traffic information collection is the basis of intelligent traffic systems. At present, mixed traffic conditions exist on urban roads in China. This paper researches and implements a system that collects mixed vehicle and bicycle traffic flow parameters based on pressure sensors. According to the information collection requirements, we selected the pressure sensors and designed the data collection, storage and other hardware circuitry along with the information processing software. Experiments show that the system can meet the demands of traffic information collection in practice.
Commercial applications for optical data storage
NASA Astrophysics Data System (ADS)
Tas, Jeroen
1991-03-01
Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of an imaging sensor. In particular, photo-response non-uniformity (PRNU) noise has been used in source camera identification (SCI). However, this technique can be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
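PRNU-based source camera identification rests on averaging denoising residuals into a sensor fingerprint and correlating a test image's residual against it. A minimal sketch, with a box filter standing in for the wavelet denoiser used in practice and fully synthetic cameras and frames:

```python
import numpy as np

def noise_residual(img, win=3):
    """Residual = image minus a box-filter denoised version
    (a crude stand-in for the wavelet denoiser used in PRNU work)."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    smooth = sum(p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
                 for dr in range(win) for dc in range(win)) / win**2
    return img - smooth

def prnu_fingerprint(images):
    """Average many residuals from one camera: content averages out,
    the sensor's multiplicative PRNU pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Two synthetic "cameras" = two fixed PRNU patterns applied multiplicatively
# to flat-field frames of random exposure, plus a little read noise.
rng = np.random.default_rng(4)
k1 = rng.normal(0, 0.02, size=(64, 64))
k2 = rng.normal(0, 0.02, size=(64, 64))
shoot = lambda k: rng.uniform(50, 200) * (1 + k) + rng.normal(0, 0.5, size=k.shape)

fp1 = prnu_fingerprint([shoot(k1) for _ in range(20)])
fp2 = prnu_fingerprint([shoot(k2) for _ in range(20)])

r = noise_residual(shoot(k1))        # "unknown" image, actually from camera 1
same, other = correlation(r, fp1), correlation(r, fp2)
```

The large gap between the matching and non-matching correlations is exactly what a counter-forensic method must destroy: suppressing the PRNU term in the residual collapses `same` toward the noise floor that `other` sits in.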
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications in cloud computing has limitations: even though virtual storage is accessible from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
NASA Technical Reports Server (NTRS)
1999-01-01
Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor Active Pixel Sensor (CMOS APS), establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.
Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui
2018-04-16
A fringe-visibility-enhanced fiber-optic Fabry-Perot interferometer ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm forming a Fabry-Perot interferometer. Owing to the increased spectral sideband slope of the sensor and the small Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filter technique is used to interrogate the sensor. The reconstructed ultrasonic images can clearly differentiate the shapes of the models with high resolution.
Islam, Mohammad Tariqul; Islam, Md. Moinul; Samsuzzaman, Md.; Faruque, Mohammad Rashed Iqbal; Misran, Norbahiah
2015-01-01
This paper presents a negative index metamaterial incorporated UWB antenna with an integration of complementary SRR (split-ring resonator) and CLS (capacitive loaded strip) unit cells for microwave imaging sensor applications. This metamaterial UWB antenna sensor consists of four unit cells along one axis, where each unit cell incorporates a complementary SRR and CLS pair. This integration enables a design layout that allows both a negative value of permittivity and a negative value of permeability simultaneously, resulting in a durable negative index to enhance the antenna sensor performance for microwave imaging sensor applications. The proposed MTM antenna sensor was designed and fabricated on an FR4 substrate having a thickness of 1.6 mm and a dielectric constant of 4.6. The electrical dimensions of this antenna sensor are 0.20 λ × 0.29 λ at a lower frequency of 3.1 GHz. This antenna sensor achieves a 131.5% bandwidth (VSWR < 2) covering the frequency bands from 3.1 GHz to more than 15 GHz with a maximum gain of 6.57 dBi. High fidelity factor and gain, smooth surface-current distribution and nearly omni-directional radiation patterns with low cross-polarization confirm that the proposed negative index UWB antenna is a promising entrant in the field of microwave imaging sensors. PMID:26007721
Can Imageability Help Us Draw the Line between Storage and Composition?
ERIC Educational Resources Information Center
Prado, Elizabeth L.; Ullman, Michael T.
2009-01-01
Language requires both storage and composition. However, exactly what is retrieved from memory and what is assembled remains controversial, especially for inflected words. Here, "imageability effects" is introduced as a new diagnostic of storage and a complement to frequency effects. In 2 studies of past-tense morphology, more reliable…
Ultrasound Picture Archiving And Communication Systems
NASA Astrophysics Data System (ADS)
Koestner, Ken; Hottinger, C. F.
1982-01-01
The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. Various ultrasonic imaging modalities are quite different in data volume and speed requirements. Static imaging, for example B-Scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. In order to transmit and save this data at a real-time rate requires a transfer rate of 8.6 Megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
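The transfer-time and data-rate figures quoted in this abstract follow from simple arithmetic; the sketch below reproduces them, assuming 6 bits per point are sent raw over the serial line (an assumption consistent with the stated 512 x 512 x 6-bit image):

```python
def serial_transfer_seconds(points, bits_per_point, baud):
    """Time to push one image over a serial line at the given baud rate."""
    return points * bits_per_point / baud

# Static image: 512 x 512 points, 6 bits deep, at 9600 baud
t_static = serial_transfer_seconds(512 * 512, 6, 9600)  # ~164 s, i.e. about three minutes

# Real-time imaging: 100 scan lines x 480 points x 6 bits, 30 frames per second
rt_bits_per_sec = 100 * 480 * 6 * 30  # 8,640,000 bit/s, the "8.6 Megabaud" figure
print(f"static: {t_static / 60:.1f} min, real-time: {rt_bits_per_sec / 1e6:.2f} Mbaud")
```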
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, J; Matthews, K; Jia, G
Purpose: To test the feasibility of using a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120–200 mA). The dental sensor was then removed from the phantoms, and normal helical CT and scout (0 degree) scans were collected using typical parameters for pelvic CT (120 kV, auto-mA). A shift-and-add tomosynthesis algorithm was developed to localize the seed plane location normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernable using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernable using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher-resolution, lower-dose images of 1) permanent brachytherapy seed implants for post-implant dosimetry and 2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding.
Disclosure: XDR Radiography has loaned our research group the digital x-ray detector used in this work. CoI: None.
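The shift-and-add tomosynthesis step mentioned in the abstract can be sketched in a few lines; the projection arrays and integer shifts below are illustrative assumptions, not the study's actual geometry or implementation:

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Reconstruct one depth plane by shifting each projection and averaging.

    projections: list of 2D arrays, one per acquisition angle
    shifts: list of (dy, dx) integer shifts that bring features in the
            chosen plane into registration; structures in that plane add
            coherently while out-of-plane structures blur across pixels.
    """
    acc = np.zeros_like(projections[0], dtype=float)
    for img, (dy, dx) in zip(projections, shifts):
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(projections)
```

The shifts scale with plane depth, so sweeping them localizes each seed's plane normal to the detector face.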
Characterization of modulated time-of-flight range image sensors
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2009-01-01
A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10-100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
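The phase-to-range conversion described above is commonly implemented with four samples per modulation period; the sketch below assumes samples of the textbook form a_k = A·cos(φ + k·90°) + B, and is not the specific sensors characterized in the paper:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def range_from_samples(a0, a1, a2, a3, mod_freq_hz):
    """Distance from four correlation samples taken at 0, 90, 180 and 270
    degrees of modulation phase, assuming a_k = A*cos(phase + k*90deg) + B."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    # the phase accumulates over the round trip, hence the factor of 2
    return phase * C / (4 * math.pi * mod_freq_hz)
```

The unambiguous range is C / (2f), about 5 m at 30 MHz, which is one reason the choice of modulation frequency trades measurement precision against range.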
The wide field imager instrument for Athena
NASA Astrophysics Data System (ADS)
Meidinger, Norbert; Nandra, Kirpal; Plattner, Markus; Porro, Matteo; Rau, Arne; Santangelo, Andrea E.; Tenzer, Chris; Wilms, Jörn
2014-07-01
The "Hot and Energetic Universe" has been selected as the science theme for ESA's L2 mission, scheduled for launch in 2028. The proposed Athena X-ray observatory provides the necessary capabilities to achieve the ambitious goals of the science theme. The X-ray mirrors are based on silicon pore optics technology and will have a 12 m focal length. Two complementary camera systems are foreseen which can be moved in and out of the focal plane by an interchange mechanism. These instruments are the actively shielded micro-calorimeter spectrometer X-IFU and the Wide Field Imager (WFI). The WFI will combine an unprecedented survey power through its large field of view of 40 arcmin with a high count-rate capability (approx. 1 Crab). It permits state-of-the-art energy resolution in the energy band of 0.1 keV to 15 keV during the entire mission lifetime (e.g. FWHM <= 150 eV at 6 keV). This performance is accomplished by a set of DEPFET active pixel sensor matrices with a pixel size matching the angular resolution of 5 arcsec (on-axis) of the mirror system. Each DEPFET pixel is a combined detector-amplifier structure with a MOSFET integrated onto a fully depleted 450 micron thick silicon bulk. The signal electrons generated by an X-ray photon are collected in a so-called internal gate below the transistor channel. The resulting change of the conductivity of the transistor channel is proportional to the number of electrons and thus a measure of the photon energy. DEPFETs have already been developed for the "Mercury Imaging X-ray Spectrometer" on board ESA's BepiColombo mission. For Athena we develop enhanced sensors with integrated electronic shutter and an additional analog storage area in each pixel. These features improve the peak-to-background ratio of the spectra and minimize dead time. The sensor will be read out with a new, fast, low-noise multi-channel analog signal processor with integrated sequencer and serial analog output.
The architecture of the sensor and readout ASIC allows readout in full-frame mode as well as in window mode, by selectively addressing arbitrary sub-areas of the sensor, which enables a time resolution on the order of 10 μs. The remaining detector electronics mainly performs the following tasks: digitization, pre-processing and telemetry of event data, as well as supply and control of the detector system. Although the sensor will already be equipped with an on-chip light-blocking filter, a filter wheel is necessary to provide an additional external filter, an on-board calibration source, an open position for outgassing, and a closed position for protection of the sensor. The sensor concept provides high quantum efficiency over the entire energy band, and we intend to keep the instrumental background as low as possible by designing a graded-Z shield around the sensor. All these properties make the WFI a very powerful survey instrument, significantly surpassing currently existing observatories, and in addition allow high time resolution of the brightest X-ray sources with low pile-up and high efficiency. This manuscript summarizes the current instrument concept and design, the status of the technology development, and the envisaged baseline performance.
Embedded system of image storage based on fiber channel
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Su, Wanxin; Xing, Zhongbao; Wang, Hualong
2008-03-01
In domains such as aerospace, aviation, aiming, and optical measurement, an embedded system for imaging, processing and recording is indispensable, offering small volume, high processing speed and high resolution. However, embedded storage technology has developed slowly and has become the system bottleneck. RAID is commonly used to increase storage speed, but its large volume makes it unsuitable for embedded systems. Fiber channel (FC) technology offers a new way to develop a high-speed, portable storage system. To make the storage subsystem meet the required storage rate, we make use of a powerful Virtex-4 FPGA and high-speed fiber channel and propose a design for an embedded digital image storage system based on the Xilinx Fiber Channel Arbitrated Loop LogiCORE. The design uses Virtex-4 RocketIO MGT transceivers to transmit data serially and can optionally connect many fiber channel hard drives in an arbitrated loop. It achieves a 400 MBps storage rate, breaks through the bottleneck of the PCI interface, and offers the advantages of high speed, real-time operation, portability and massive capacity.
A CMOS high speed imaging system design based on FPGA
NASA Astrophysics Data System (ADS)
Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui
2015-10-01
CMOS sensors have more advantages than traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and we take advantage of a CameraLink interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed, 500-frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is captured by the CMOS detector, which converts the light to electronic signals and sends them to the FPGA. The FPGA processes the data it receives and transmits it through the CameraLink interface, configured in full mode, to an upper computer equipped with acquisition cards. The PC then stores, visualizes and processes the images. This paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor. The data from the CMOS sensor is converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents a row-transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.
An airborne thematic thermal infrared and electro-optical imaging system
NASA Astrophysics Data System (ADS)
Sun, Xiuhong; Shu, Peter
2011-08-01
This paper describes an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS) and its potential applications. The ATTIREOIS sensor payload consists of two sets of advanced Focal Plane Arrays (FPAs) - a broadband Thermal InfraRed Sensor (TIRS) and a four (4) band Multispectral Electro-Optical Sensor (MEOS) to approximate Landsat ETM+ bands 1, 2, 3, 4, and 6, and LDCM bands 2, 3, 4, 5, and 10+11. The airborne TIRS is a 3-axis stabilized payload capable of providing 3D photogrammetric images with a 1,850-pixel swath width via pushbroom operation. MEOS has a total of 116 million simultaneous sensor counts capable of providing 3 cm spatial resolution multispectral orthophotos for continuous airborne mapping. ATTIREOIS is a complete standalone and easy-to-use portable imaging instrument for light aerial vehicle deployment. Its miniaturized backend data system operates all ATTIREOIS imaging sensor components, an INS/GPS, and an e-Gimbal™ Control Electronic Unit (ECU) with a data throughput of 300 Megabytes/sec. The backend provides advanced onboard processing, performing autonomous raw sensor imagery development, TIRS image track-recovery reconstruction, LWIR/VNIR multi-band co-registration, and photogrammetric image processing. With geometric optics and boresight calibrations, the ATTIREOIS data products are directly georeferenced with an accuracy of approximately one meter. A prototype ATTIREOIS has been configured. Its sample LWIR/EO image data will be presented. Potential applications of ATTIREOIS include: 1) Providing timely and cost-effective, precisely and directly georeferenced surface emissive and solar reflective LWIR/VNIR multispectral images via a private Google Earth Globe to enhance NASA's Earth science research capabilities; and 2) Underflying satellites to support satellite measurement calibration and validation observations.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to determine a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
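The per-pixel backward integration described in the abstract can be sketched as follows; the plane representation and ray sampling are simplified assumptions (pinhole chief ray, no lenslet refraction), not the author's actual raytracing package:

```python
import numpy as np

def render_pixel(pixel_dir, pupil_samples, planes):
    """Average the object color over a cone of rays through the pupil.

    pixel_dir: unit chief-ray direction traced backwards from this pixel
    pupil_samples: (N, 3) array of ray origins across the entrance pupil
    planes: list of (depth_z, color_fn) ordered near to far; color_fn
            maps an (x, y) hit point to an RGB triple, or None on a miss.
    """
    colors = []
    for origin in pupil_samples:
        for z, color_fn in planes:
            t = (z - origin[2]) / pixel_dir[2]
            hit = origin + t * pixel_dir
            c = color_fn(hit[0], hit[1])
            if c is not None:          # the nearest hit plane occludes the rest
                colors.append(c)
                break
    return np.mean(colors, axis=0) if colors else np.zeros(3)
```

Repeating this over every sensor pixel yields the simulated plenoptic image on which reconstruction algorithms can then be tested.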
A novel optical gating method for laser gated imaging
NASA Astrophysics Data System (ADS)
Ginat, Ran; Schneider, Ron; Zohar, Eyal; Nesher, Ofer
2013-06-01
For the past 15 years, Elbit Systems has been developing time-resolved active laser-gated imaging (LGI) systems for various applications. Traditional LGI systems are based on highly sensitive gated sensors synchronized to pulsed laser sources. Elbit's proprietary multi-pulse-per-frame method, implemented in its LGI systems, significantly improves imaging quality. A significant characteristic of LGI is its ability to penetrate a disturbing medium, such as rain, haze and some types of fog. Current LGI systems are based on image intensifier (II) sensors, which limit the system in spectral response, image quality, reliability and cost. A novel proprietary optical gating module was developed at Elbit, removing the LGI system's dependency on the II. The optical gating module is not bound to a specific radiance wavelength and is positioned between the system optics and the sensor. This optical gating method supports the use of conventional solid-state sensors. By selecting the appropriate solid-state sensor, the new LGI systems can operate at any desired wavelength. In this paper we present the new gating method's characteristics and performance, and its advantages over the II gating method. The use of gated imaging systems is described in a variety of applications, including results from the latest field experiments.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
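The visible/NIR decomposition step can be sketched as a per-channel linear unmixing; the leakage coefficients below are invented for illustration and do not come from the spectral characterization of any real MSFA sensor:

```python
import numpy as np

# Hypothetical NIR leakage coefficients: the fraction of the N-channel
# signal that contaminates each of R, G, B once the IRCF is removed.
NIR_LEAK = np.array([0.35, 0.25, 0.20])

def restore_rgb(rgbn):
    """Subtract the estimated NIR contribution from each color channel.

    rgbn: (..., 4) array holding R, G, B, N planes; returns (..., 3) RGB
    with the NIR component removed (clipped to stay non-negative).
    """
    rgb, nir = rgbn[..., :3], rgbn[..., 3:4]
    return np.clip(rgb - NIR_LEAK * nir, 0.0, None)
```

In the paper the decomposition is driven by spectral estimation per pixel rather than fixed global coefficients; this sketch only shows the unmixing structure.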
Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites
NASA Astrophysics Data System (ADS)
Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong
2010-10-01
This paper focuses on radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran Desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that instead of a reference image and a search image, the individual bands of a multispectral image are tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites is not possible. To overcome this limitation, a cross-comparison between AWiFS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the sensors within 30 minutes of each other. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between the sensors.
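The cross-comparison regression between the two sensors' TOA reflectances can be sketched with an ordinary least-squares fit; the function name and data shapes here are illustrative, not the study's actual statistics:

```python
import numpy as np

def cross_calibrate(toa_awifs, toa_etm):
    """Fit toa_etm ~ gain * toa_awifs + bias over matched common areas.

    Returns (gain, bias, r_squared); the coefficient of determination
    quantifies the uncertainty in the cross-sensor relationship.
    """
    gain, bias = np.polyfit(toa_awifs, toa_etm, 1)
    pred = gain * toa_awifs + bias
    ss_res = np.sum((toa_etm - pred) ** 2)
    ss_tot = np.sum((toa_etm - toa_etm.mean()) ** 2)
    return gain, bias, 1.0 - ss_res / ss_tot
```

A gain near 1 and bias near 0 with high r-squared would indicate the two sensors' radiometric calibrations agree over the common areas.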
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting the CCD storages, which record the video images, to the photodiodes of the individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was sequentially switched. This increased the recording capacity to 288 images, a factor of two over that of the conventional ultrahigh-speed camera. A problem with the camera was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.