Flexible phosphor sensors: a digital supplement or option to rigid sensors.
Glazer, Howard S
2014-01-01
An increasing number of dental practices are upgrading from film radiography to digital radiography, for reasons that include faster image processing, easier image access, better patient education, enhanced data storage, and improved office productivity. Most practices that have converted to digital technology use rigid, or direct, sensors. Another digital option is flexible phosphor sensors, also called indirect sensors or phosphor storage plates (PSPs). Flexible phosphor sensors can be advantageous for use with certain patients who may be averse to direct sensors, and they can deliver a larger image area. Additionally, sensor cost for replacement PSPs is considerably lower than for hard sensors. As such, flexible phosphor sensors appear to be a viable supplement or option to direct sensors.
NASA Astrophysics Data System (ADS)
Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi
2006-02-01
Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume, owing to their versatile functions. However, the image-capturing capabilities of color films, such as sensitivity and latitude, are still superior to those of digital image sensors. In this paper, we attribute the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry, and digital signal processing electronics. Advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality without substantially compromising the fill factor. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed with novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, and variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components.
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
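The 150 dB dynamic-range figure quoted above follows directly from the ratio of the largest to the smallest detectable signal. A minimal Python sketch of that relation, using illustrative electron counts that are not taken from the paper:

```python
import math

def dynamic_range_db(full_well_electrons, noise_floor_electrons):
    """Optical dynamic range in dB, defined as 20*log10(max/min detectable signal)."""
    return 20.0 * math.log10(full_well_electrons / noise_floor_electrons)

# Illustrative numbers: 30,000 e- full well, 10 e- read noise -> ~69.5 dB.
# Reaching 150 dB therefore requires per-pixel programmable sensitivity or
# similar adaptive schemes, not merely a deeper well.
print(round(dynamic_range_db(30_000, 10), 1))  # 69.5
```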
Evaluation of physical properties of different digital intraoral sensors.
Al-Rawi, Wisam; Teich, Sorin
2013-09-01
Digital technologies provide clinically acceptable results comparable to traditional films while having other advantages such as the ability to store and manipulate images, immediate evaluation of the image diagnostic quality, possible reduction in patient radiation exposure, and so on. The purpose of this paper is to present the results of the evaluation of the physical design of eight CMOS digital intraoral sensors. Sensors tested included: XDR (Cyber Medical Imaging, Los Angeles, CA, USA), RVG 6100 (Carestream Dental LLC, Atlanta, GA, USA), Platinum (DEXIS LLC., Hatfield, PA, USA), CDR Elite (Schick Technologies, Long Island City, NY, USA), ProSensor (Planmeca, Helsinki, Finland), EVA (ImageWorks, Elmsford, NY, USA), XIOS Plus (Sirona, Bensheim, Germany), and GXS-700 (Gendex Dental Systems, Hatfield, PA, USA). The sensors were evaluated for cable configuration, connectivity interface, presence of back-scattering radiation shield, plate thickness, active sensor area, and comparing the active imaging area to the outside casing and to conventional radiographic films. There were variations among the physical design of different sensors. For most parameters tested, a lack of standardization exists in the industry. The results of this study revealed that these details are not always available through the material provided by the manufacturers and are often not advertised. For all sensor sizes, active imaging area was smaller compared with conventional films. There was no sensor in the group that had the best physical design. Data presented in this paper establishes a benchmark for comparing the physical design of digital intraoral sensors.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.
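As a toy illustration of why compression behavior matters for these large data sets, the sketch below measures lossless compression ratios with Python's standard zlib; the synthetic byte strings stand in for smooth tissue versus noisy detector regions and are not real mammography data:

```python
import random
import zlib

def compression_ratio(raw: bytes, level: int = 9) -> float:
    """Ratio of original to losslessly compressed size (higher is better)."""
    return len(raw) / len(zlib.compress(raw, level))

# Smooth regions (constant background) compress far better than noisy ones.
smooth = bytes(128 for _ in range(10_000))
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))
print(compression_ratio(smooth) > compression_ratio(noisy))  # True
```

Lossy methods trade a controlled loss for much higher ratios, which is exactly the "virtually lossless" versus lossy question the abstract raises.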
Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy
NASA Technical Reports Server (NTRS)
Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)
2011-01-01
Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.
NASA Astrophysics Data System (ADS)
Takehara, Hironari; Miyazawa, Kazuya; Noda, Toshihiko; Sasagawa, Kiyotaka; Tokuda, Takashi; Kim, Soo Hyeon; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-01-01
A CMOS image sensor with stacked photodiodes was fabricated using 0.18 µm mixed signal CMOS process technology. Two photodiodes were stacked at the same position of each pixel of the CMOS image sensor. The stacked photodiodes consist of shallow high-concentration N-type layer (N+), P-type well (PW), deep N-type well (DNW), and P-type substrate (P-sub). PW and P-sub were shorted to ground. By monitoring the voltage of N+ and DNW individually, we can observe two monochromatic colors simultaneously without using any color filters. The CMOS image sensor is suitable for fluorescence imaging, especially contact imaging such as a lensless observation system of digital enzyme-linked immunosorbent assay (ELISA). Since the fluorescence increases with time in digital ELISA, it is possible to observe fluorescence accurately by calculating the difference from the initial relation between the pixel values for both photodiodes.
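The differential read-out idea in the last sentence can be sketched as follows: fit the initial (pre-fluorescence) linear relation between the two photodiode values, then treat each pixel's later deviation from that line as accumulated fluorescence. A hypothetical Python illustration, not the authors' actual calibration code:

```python
def fit_initial_relation(shallow0, deep0):
    """Least-squares line deep ~ a*shallow + b from the initial frame,
    before any fluorescence has accumulated."""
    n = len(shallow0)
    mx, my = sum(shallow0) / n, sum(deep0) / n
    sxx = sum((x - mx) ** 2 for x in shallow0)
    sxy = sum((x - mx) * (y - my) for x, y in zip(shallow0, deep0))
    a = sxy / sxx
    return a, my - a * mx

def fluorescence_signal(shallow, deep, a, b):
    """Per-pixel deviation of the deep-diode value from the initial relation."""
    return [y - (a * x + b) for x, y in zip(shallow, deep)]

# Initial frame: deep = 2*shallow + 1 across four pixels.
a, b = fit_initial_relation([1, 2, 3, 4], [3, 5, 7, 9])
print(fluorescence_signal([1, 2], [3.5, 5.0], a, b))  # [0.5, 0.0]
```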
USB video image controller used in CMOS image sensor
NASA Astrophysics Data System (ADS)
Zhang, Wenxuan; Wang, Yuxia; Fan, Hong
2002-09-01
The CMOS process is a mainstream VLSI technique with high integration density. The SE402 is a multifunction microcontroller that integrates image-data I/O ports, clock control, exposure control, and digital signal processing into one chip, reducing the number of chips and the required PCB area. This paper focuses on a USB video image controller used with a CMOS image sensor and presents its application in a digital still camera.
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
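The XOR-based edge extraction on single-bit data can be illustrated in a few lines. A simplified Python model of the idea (the actual sensor performs this in column-parallel hardware):

```python
def binarize(row, threshold=128):
    """1-bit quantization of an 8-bit pixel row (the single-bit readout)."""
    return [1 if p >= threshold else 0 for p in row]

def xor_edges(bits):
    """Horizontal edge map: XOR of each bit with its right neighbor.
    A 1 marks a black/white transition, e.g. an iris/sclera boundary."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

row = [10, 12, 200, 210, 205, 15, 11]   # dark - bright - dark
print(xor_edges(binarize(row)))  # [0, 1, 0, 0, 1, 0]
```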
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth, low-contrast regions, whereas point-cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, as obtained from low-cost global positioning system and inertial measurement unit sensors.
CMOS Image Sensors: Electronic Camera On A Chip
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
Recent advancements in CMOS image sensor technology are reviewed, including both passive pixel sensors and active pixel sensors. On- chip analog to digital converters and on-chip timing and control circuits permit realization of an electronic camera-on-a-chip. Highly miniaturized imaging systems based on CMOS image sensor technology are emerging as a competitor to charge-coupled devices for low cost uses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiner, J; Matthews, K; Jia, G
Purpose: To test the feasibility of using a digital endorectal x-ray sensor for improved image resolution of permanent brachytherapy seed implants compared to conventional CT. Methods: Two phantoms simulating the male pelvic region were used to test the capabilities of a digital endorectal x-ray sensor for imaging permanent brachytherapy seed implants. Phantom 1 was constructed from acrylic plastic with cavities milled in the locations of the prostate and the rectum. The prostate cavity was filled with a Styrofoam plug implanted with 10 training seeds. Phantom 2 was constructed from tissue-equivalent gelatins and contained a prostate phantom implanted with 18 strands of training seeds. For both phantoms, an intraoral digital dental x-ray sensor was placed in the rectum within 2 cm of the seed implants. Scout scans were taken of the phantoms over a limited arc angle using a CT scanner (80 kV, 120–200 mA). The dental sensor was removed from the phantoms, and normal helical CT and scout (0 degree) scans using typical parameters for pelvic CT (120 kV, auto-mA) were collected. A shift-and-add tomosynthesis algorithm was developed to localize the seed-plane location normal to the detector face. Results: The endorectal sensor produced images with improved resolution compared to CT scans. Seed clusters and individual seed geometry were more discernable using the endorectal sensor. Seed 3D locations, including seeds that were not located in every projection image, were discernable using the shift-and-add algorithm. Conclusion: This work shows that digital endorectal x-ray sensors are a feasible method for improving imaging of permanent brachytherapy seed implants. Future work will consist of optimizing the tomosynthesis technique to produce higher resolution, lower dose images of 1) permanent brachytherapy seed implants for post-implant dosimetry and 2) fine anatomic details for imaging and managing prostatic disease compared to CT images. Funding: LSU Faculty Start-up Funding.
Disclosure: XDR Radiography has loaned our research group the digital x-ray detector used in this work. CoI: None.
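The shift-and-add step brings one depth plane into focus by shifting each limited-angle projection in proportion to its viewing angle and averaging: features in the selected plane align and reinforce, while features in other planes blur out. A minimal 1-D Python sketch with made-up shift values, not the study's implementation:

```python
def shift_and_add(projections, shifts):
    """Focus one depth plane: shift each 1-D projection by its plane-dependent
    offset (in pixels) and average across projections."""
    n = len(projections[0])
    out = [0.0] * n
    for proj, s in zip(projections, shifts):
        for i in range(n):
            j = i + s                      # undo the angle-dependent displacement
            if 0 <= j < n:
                out[i] += proj[j]
    return [v / len(projections) for v in out]

# A "seed" at position 3, seen from three angles that displace it by 0, 1, 2 px.
p0 = [0, 0, 0, 1, 0, 0]
p1 = [0, 0, 0, 0, 1, 0]
p2 = [0, 0, 0, 0, 0, 1]
focused = shift_and_add([p0, p1, p2], [0, 1, 2])
print(focused)  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```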
Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy
NASA Astrophysics Data System (ADS)
Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.
2016-05-01
Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. Particularly in digital hologram recording, while the sensor captures each row of the hologram progressively, interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at a different instant of these disturbances. As a final effect, the phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified with holograms of static microscopic biological objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy, owing to their higher resolution and lower cost.
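A toy version of the compensation idea: because rows are read sequentially, a slow disturbance appears as a smooth phase trend across rows, which can be estimated and subtracted. The sketch below removes a linear row-wise phase ramp; the paper's actual image-processing method is more general and is not reproduced here:

```python
def compensate_rows(phase_rows):
    """Remove the row-wise phase ramp introduced by sequential (rolling-shutter)
    readout: fit a line through the per-row mean phases and subtract it."""
    n = len(phase_rows)
    row_means = [sum(r) / len(r) for r in phase_rows]
    mx = (n - 1) / 2
    my = sum(row_means) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    sxy = sum((i - mx) * (m - my) for i, m in enumerate(row_means))
    a = sxy / sxx                       # phase drift per row
    b = my - a * mx
    return [[p - (a * i + b) for p in row] for i, row in enumerate(phase_rows)]

rows = [[0.0, 0.0], [0.1, 0.1], [0.2, 0.2]]   # pure readout-induced ramp
flat = compensate_rows(rows)
print(max(abs(p) for r in flat for p in r) < 1e-9)  # True
```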
A Multi-Sensor Aerogeophysical Study of Afghanistan
2007-01-01
magnetometer coupled with an Applied Physics 539 3-axis fluxgate magnetometer for compensation of the aircraft field; an Applanix DSS 301 digital… survey. DATA COLLECTION AND PROCESSING. Photogrammetry: More than 65,000 high-resolution photogrammetric images were collected using an Applanix Digital… HSI, L-Band Polarimetric Imaging Radar, KGPS, Dual Gravity Meters, Common Sensor Bomb-bay Pallet, Applanix DSS Camera; sensor suite: magnetometer, gravity
Forensics for flatbed scanners
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Franz, Elke; Winkler, Antje
2007-02-01
Within this article, we investigate possibilities for identifying the origin of images acquired with flatbed scanners. A current method for the identification of digital cameras takes advantage of image sensor noise, strictly speaking, the spatial noise. Since flatbed scanners and digital cameras use similar technologies, the utilization of image sensor noise for identifying the origin of scanned images seems to be possible. As characterization of flatbed scanner noise, we considered array reference patterns and sensor line reference patterns. However, there are particularities of flatbed scanners which we expect to influence the identification. This was confirmed by extensive tests: Identification was possible to a certain degree, but less reliable than digital camera identification. In additional tests, we simulated the influence of flatfielding and down scaling as examples for such particularities of flatbed scanners on digital camera identification. One can conclude from the results achieved so far that identifying flatbed scanners is possible. However, since the analyzed methods are not able to determine the image origin in all cases, further investigations are necessary.
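A skeletal version of the identification pipeline: average the per-row noise residuals into a line reference pattern (a flatbed scanner reuses one sensor line for every row), then decide the image's origin by correlating a test residual against that pattern. Hypothetical Python, not the authors' implementation:

```python
def line_reference_pattern(rows):
    """Average per-row noise residuals into a single sensor-line pattern."""
    n_cols = len(rows[0])
    return [sum(r[c] for r in rows) / len(rows) for c in range(n_cols)]

def correlation(a, b):
    """Normalized cross-correlation between a residual and a reference pattern."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

residual_rows = [[0.9, -1.1, 0.2], [1.1, -0.9, -0.2]]  # noise residuals per scan row
ref = line_reference_pattern(residual_rows)            # ~[1.0, -1.0, 0.0]
print(round(correlation(ref, [2.0, -2.0, 0.0]), 3))    # 1.0
```

In practice the decision is a threshold on this correlation, and, as the article notes, scanner particularities such as flatfielding weaken it relative to camera identification.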
NASA Technical Reports Server (NTRS)
1995-01-01
Intelligent Vision Systems, Inc. (InVision) needed image acquisition technology that was reliable in bad weather for its TDS-200 Traffic Detection System. InVision researchers used information from NASA Tech Briefs and assistance from Johnson Space Center to finish the system. The NASA technology used was developed for Earth-observing imaging satellites: charge-coupled devices, silicon chips that convert light directly into electronic or digital images. The TDS-200 consists of sensors mounted above traffic on poles or span wires, enabling two sensors to view an intersection; a "swing and sway" feature to compensate for movement of the sensors; a combination of electronic shutter and gain control; and sensor output to an image digital signal processor, still-frame video, and optionally live video.
Principles and Applications of Imaging Radar, Manual of Remote Sensing, 3rd Edition, Volume 2
NASA Astrophysics Data System (ADS)
Moran, M. Susan
Aerial photographs and digital images from orbiting optical scanners are a daily source of information for the general public through newspapers, television, magazines, and posters. Such images are just as prevalent in scientific journal literature. In the last 6 months, more than half of the weekly issues of Eos published an image acquired by a remote digital sensor. As a result, most geoscientists are familiar with the characteristics and even the acronyms of the current satellites and their optical sensors, common detector filters, and image presentation. In many cases, this familiarity has bred contempt. This is so because the limitations of optical sensors (imaging in the visible and infrared portions of the electromagnetic spectrum) can be quite formidable. Images of the surface cannot be acquired through clouds, and image quality is impaired with low-light conditions (such as at polar regions), atmospheric scattering and absorption, and variations in sun/sensor/surface geometry.
Low-Light Image Enhancement Using Adaptive Digital Pixel Binning
Yoo, Yoonjong; Im, Jaehyun; Paik, Joonki
2015-01-01
This paper presents an image enhancement algorithm for low-light scenes in an environment with insufficient illumination. Simple amplification of intensity exhibits various undesired artifacts: noise amplification, intensity saturation, and loss of resolution. In order to enhance low-light images without undesired artifacts, a novel digital binning algorithm is proposed that considers brightness, context, noise level, and anti-saturation of a local region in the image. The proposed algorithm does not require any modification of the image sensor or additional frame-memory; it needs only two line-memories in the image signal processor (ISP). Since the proposed algorithm does not use an iterative computation, it can be easily embedded in an existing digital camera ISP pipeline containing a high-resolution image sensor.
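The core trade in digital binning (gain SNR in dark regions, preserve detail and avoid saturation in bright ones) can be sketched with a single previous-row buffer, loosely mirroring the paper's two-line-memory constraint. Illustrative only; the published algorithm also weighs context and noise level:

```python
def adaptive_bin(prev_row, cur_row, dark_thresh=64):
    """Brightness-adaptive vertical binning: dark pixels are summed with their
    upper neighbor (trading vertical resolution for SNR); bright pixels pass
    through unchanged to avoid saturation."""
    out = []
    for up, px in zip(prev_row, cur_row):
        if px < dark_thresh:
            out.append(min(up + px, 255))   # bin (amplify) dark regions
        else:
            out.append(px)                  # keep bright regions unsaturated
    return out

print(adaptive_bin([30, 200], [20, 210]))  # [50, 210]
```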
Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6
NASA Technical Reports Server (NTRS)
Lee, George
1993-01-01
A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.
Overview of Digital Forensics Algorithms in Dslr Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and improvements in digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.
Bio-inspired multi-mode optic flow sensors for micro air vehicles
NASA Astrophysics Data System (ADS)
Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik
2013-06-01
Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro-air-vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors that are facing different angles in 3-D space, can be applied to the wide-field of view optic flows estimation (μ-Compound eyes) and to attitude control (μ- Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ- Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in digital domain, is remained in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity from look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with an external timestamp processing, respectively.
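The time-stamp principle reduces to a simple relation: once a feature's arrival times at two neighbouring pixels have been latched, velocity is pixel pitch over the time difference (the chip decodes this through look-up tables rather than division). A hedged Python sketch with assumed units:

```python
def optic_flow_velocity(t_stamp_a, t_stamp_b, pixel_pitch_um=10.0):
    """Time-stamp optic flow: a feature latched at neighbouring pixels at times
    t_a and t_b moves at pixel_pitch / (t_b - t_a) um per time unit.
    The sign encodes direction; pitch and units are illustrative."""
    dt = t_stamp_b - t_stamp_a
    if dt == 0:
        return float("inf")
    return pixel_pitch_um / dt

print(optic_flow_velocity(100, 104))  # 2.5 um per tick, moving a -> b
```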
Depth map generation using a single image sensor with phase masks.
Jang, Jinbeum; Park, Sangwoo; Jo, Jieun; Paik, Joonki
2016-06-13
Conventional stereo matching systems generate a depth map using two or more digital imaging sensors. It is difficult to use the small camera system because of their high costs and bulky sizes. In order to solve this problem, this paper presents a stereo matching system using a single image sensor with phase masks for the phase difference auto-focusing. A novel pattern of phase mask array is proposed to simultaneously acquire two pairs of stereo images. Furthermore, a noise-invariant depth map is generated from the raw format sensor output. The proposed method consists of four steps to compute the depth map: (i) acquisition of stereo images using the proposed mask array, (ii) variational segmentation using merging criteria to simplify the input image, (iii) disparity map generation using the hierarchical block matching for disparity measurement, and (iv) image matting to fill holes to generate the dense depth map. The proposed system can be used in small digital cameras without additional lenses or sensors.
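The disparity-measurement step (iii) and its conversion to depth follow the standard stereo relations. A minimal 1-D block-matching sketch in Python, with hypothetical focal-length and baseline values rather than the paper's sensor parameters:

```python
def disparity_1d(left, right, block=3):
    """Minimal 1-D block matching: for each block in `left`, find the leftward
    shift in `right` with the smallest sum of absolute differences."""
    best = []
    for i in range(len(left) - block + 1):
        ref = left[i:i + block]
        costs = []
        for d in range(i + 1):                     # candidate disparities
            cand = right[i - d:i - d + block]
            costs.append(sum(abs(a - b) for a, b in zip(ref, cand)))
        best.append(min(range(len(costs)), key=costs.__getitem__))
    return best

def depth_from_disparity(d, focal_px=500.0, baseline_mm=1.0):
    """Standard stereo relation: depth = focal length * baseline / disparity."""
    return float("inf") if d == 0 else focal_px * baseline_mm / d

left = [0, 0, 9, 9, 9, 0, 0]
right = [0, 9, 9, 9, 0, 0, 0]   # same feature, shifted by 1 pixel
print(disparity_1d(left, right))  # first block sits at the border and cannot shift
```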
Single-shot digital holography by use of the fractional Talbot effect.
Martínez-León, Lluís; Araiza-E, María; Javidi, Bahram; Andrés, Pedro; Climent, Vicent; Lancis, Jesús; Tajahuerce, Enrique
2009-07-20
We present a method for recording in-line single-shot digital holograms based on the fractional Talbot effect. In our system, an image sensor records the interference between the light field scattered by the object and a properly codified parallel reference beam. A simple binary two-dimensional periodic grating is used to codify the reference beam generating a periodic three-step phase distribution over the sensor plane by fractional Talbot effect. This provides a method to perform single-shot phase-shifting interferometry at frame rates only limited by the sensor capabilities. Our technique is well adapted for dynamic wavefront sensing applications. Images of the object are digitally reconstructed from the digital hologram. Both computer simulations and experimental results are presented.
A safety monitoring system for taxi based on CMOS imager
NASA Astrophysics Data System (ADS)
Liu, Zhi
2005-01-01
CMOS image sensors have become increasingly competitive with their CCD counterparts, while adding advantages such as no blooming, simpler driving requirements, and the potential for on-chip integration of the sensor, analog circuitry, and digital processing functions. This paper describes a safety monitoring system for taxis, based on a CMOS imager, that can record the scene when unusual circumstances occur. The monitoring system is built around a CMOS imager (OV7120), which outputs digital image data through a parallel pixel data port. The system consists of the CMOS image sensor, a large-capacity NAND flash ROM, a USB interface chip, and a microcontroller (AT90S8515). The structure of the whole system and the test data are discussed and analyzed in detail.
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude, and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range, and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including pixel size, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
An automatic analyzer of solid state nuclear track detectors using an optic RAM as image sensor
NASA Astrophysics Data System (ADS)
Staderini, Enrico Maria; Castellano, Alfredo
1986-02-01
An optic RAM is a conventional digital random-access read/write dynamic memory device featuring a quartz-windowed package and memory cells regularly ordered on the chip. Such a device can be used as an image sensor because each cell retains the data stored in it for a time that depends on the intensity of the light incident on the cell itself. The authors have developed a system that uses an optic RAM to acquire and digitize images from electrochemically etched CR39 solid state nuclear track detectors (SSNTDs) at track densities up to 5000 cm⁻². On the digital image so obtained, a microprocessor with appropriate software performs image analysis, filtering, track counting, and evaluation.
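Track counting on the binarized optic-RAM image amounts to connected-component labelling. A compact flood-fill sketch in Python (the original system ran on a microprocessor with dedicated software; this is only an illustration of the counting step):

```python
def count_tracks(img, thresh=1):
    """Count etched tracks in a digitized image as 4-connected components
    of pixels at or above `thresh`."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]

    def flood(y, x):
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if 0 <= cy < h and 0 <= cx < w and img[cy][cx] >= thresh and not seen[cy][cx]:
                seen[cy][cx] = True
                stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]

    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= thresh and not seen[y][x]:
                count += 1
                flood(y, x)
    return count

grid = [[0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
print(count_tracks(grid))  # 3
```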
Digital Photography and Its Impact on Instruction.
ERIC Educational Resources Information Center
Lantz, Chris
Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…
NASA Astrophysics Data System (ADS)
Green, John R.; Robinson, Timothy
2015-05-01
There is a growing interest in developing helmet-mounted digital imaging systems (HMDIS) for integration into military aircraft cockpits. This interest stems from the multiple advantages of digital over analog imaging, such as image fusion from multiple sensors, data processing to enhance image contrast, superposition of non-imaging data over the image, and sending images to a remote location for analysis. There are several properties an HMDIS must have in order to aid the pilot during night operations. In addition to resolution, image refresh rate, dynamic range, and sensor uniformity over the entire Focal Plane Array (FPA), the imaging system must have the sensitivity to detect the limited night light available filtered through cockpit transparencies. Digital sensor sensitivity is generally measured monochromatically using a laser with a wavelength near the peak detector quantum efficiency, and is generally reported as either the Noise Equivalent Power (NEP) or Noise Equivalent Irradiance (NEI). This paper proposes a test system that measures the NEI of Short-Wave Infrared (SWIR) digital imaging systems using a broadband source that simulates the night spectrum. This method has a few advantages over a monochromatic method: the test conditions provide a spectrum closer to what is experienced by the end user, and the resulting NEI may be compared directly to modeled night-glow irradiance calculations. This comparison may be used to assess the Technology Readiness Level of the imaging system for the application. The test system is being developed under a Cooperative Research and Development Agreement (CRADA) with the Air Force Research Laboratory.
A data base of ASAS digital imagery. [Advanced Solid-state Array Spectroradiometer
NASA Technical Reports Server (NTRS)
Irons, James R.; Meeson, Blanche W.; Dabney, Philip W.; Kovalick, William M.; Graham, David W.; Hahn, Daniel S.
1992-01-01
The Advanced Solid-State Array Spectroradiometer (ASAS) is an airborne, off-nadir tilting, imaging spectroradiometer that acquires digital image data for 29 spectral bands in the visible and near-infrared. The sensor is used principally for studies of the bidirectional distribution of solar radiation scattered by terrestrial surfaces. ASAS has acquired data for a number of terrestrial ecosystem field experiments, and investigators have received over 170 radiometrically corrected, multiangle, digital image data sets. A database of ASAS digital imagery has been established in the Pilot Land Data System (PLDS) at the NASA/Goddard Space Flight Center to provide the scientific community with access to these data. ASAS, its processed data, and the PLDS are described, together with recent improvements to the sensor system.
A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.
Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing
2016-09-23
In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Exploiting the slight variation of quantization codes among different pixel exposures of the same object, the pixel array is divided into two groups: one performs coarse quantization of the high bits only, and the other performs fine quantization of the low bits. The complete quantization codes are then composed from the coarse and fine results. This equivalent operation reduces the total number of bits that must be quantized. In a 0.18 µm CMOS process, two versions of a 16-stage digital-domain CMOS TDI image sensor chain based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, were designed. Simulation results show that the average energy consumption per line of the two versions is 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearity of the two versions is 99.74% and 99.99%, respectively.
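The coarse/fine composition described above can be sketched as a simple bit-field merge (the 4-bit/6-bit split below is illustrative; the paper's actual partition may differ):

```python
def compose_code(coarse_high: int, fine_low: int, low_bits: int) -> int:
    """Merge a coarse quantization of the high bits with a fine
    quantization of the low bits into one complete output code."""
    return (coarse_high << low_bits) | fine_low

# Illustrative 10-bit code split as 4 coarse bits + 6 fine bits.
full = compose_code(0b1011, 0b010111, 6)   # → 727 (0b1011010111)
```

The power saving comes from each pixel group resolving only its own sub-word instead of every exposure running a full 10-bit conversion.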
A 256×256 low-light-level CMOS imaging sensor with digital CDS
NASA Astrophysics Data System (ADS)
Zou, Mei; Chen, Nan; Zhong, Shengyou; Li, Zhengfen; Zhang, Jicun; Yao, Li-bin
2016-10-01
In order to achieve high sensitivity in low-light-level CMOS image sensors (CIS), a capacitive transimpedance amplifier (CTIA) pixel circuit with a small integration capacitor is used. Because the pixel and column areas are highly constrained, it is difficult to implement analog correlated double sampling (CDS) to remove noise in a low-light-level CIS. A digital CDS is therefore adopted, which performs the subtraction between the reset signal and the pixel signal off-chip. The pixel reset noise and part of the column fixed-pattern noise (FPN) can be greatly reduced. A 256×256 CIS with a CTIA array and digital CDS was implemented in 0.35 μm CMOS technology. The chip size is 7.7 mm × 6.75 mm, and the pixel size is 15 μm × 15 μm with a fill factor of 20.6%. The measured pixel noise with digital CDS is 24 LSB (RMS) in the dark, a 7.8× reduction compared to the image sensor without digital CDS. Running at 7 fps, this low-light-level CIS can capture recognizable images at illumination down to 0.1 lux.
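The off-chip subtraction behind digital CDS is simple frame arithmetic: each pixel's reset sample is subtracted from its signal sample, cancelling the shared reset (kTC) noise and column offsets. A minimal sketch with hypothetical pixel values:

```python
def digital_cds(reset_frame, signal_frame):
    """Off-chip correlated double sampling: subtract the per-pixel reset
    level from the integrated pixel signal, removing reset noise and
    column offsets common to both samples."""
    return [[s - r for r, s in zip(rrow, srow)]
            for rrow, srow in zip(reset_frame, signal_frame)]

reset  = [[12, 15], [11, 14]]   # per-pixel reset samples (offset + reset noise)
signal = [[52, 95], [31, 114]]  # reset level + integrated photo-signal
image = digital_cds(reset, signal)   # → [[40, 80], [20, 100]]
```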
NASA Astrophysics Data System (ADS)
Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert
2009-08-01
A 256×512-element digital image sensor has been developed with large pixel size, slow scan, and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode system-on-chip (SoC) IC, combining analog circuitry, digital circuitry, and optical sensor circuitry on a single chip. It integrates a 256×512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signaling (LVDS), and a timing generator. The device provides 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, a different gain can be set on each row of the photodetector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the non-uniformity of the silicon spectral response, making the device well suited to hyperspectral imager applications. The HySI camera, on board the Chandrayaan-1 satellite, was successfully launched to the moon on Oct. 22, 2008. The device is currently mapping the moon and sending back excellent images of the lunar surface. The device design and moon image data are presented in the paper.
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections, as well as to enhance and digitally mosaic sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
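The feathering step that hides seams between adjacent track-line strips can be sketched as a linear cross-fade over the overlap region (a 1-D toy example, not the USGS implementation):

```python
def feather_blend(strip_a, strip_b, overlap):
    """Blend two 1-D image strips over an overlap region with linear
    weights, ramping from strip_a to strip_b to hide the seam."""
    a_only = strip_a[:-overlap]
    b_only = strip_b[overlap:]
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)              # weight for strip_b
        va = strip_a[len(strip_a) - overlap + i]
        vb = strip_b[i]
        blended.append((1 - w) * va + w * vb)
    return a_only + blended + b_only

# Two strips whose overlapping pixels disagree by a constant tonal offset.
a = [10.0, 10.0, 10.0, 10.0]
b = [14.0, 14.0, 14.0, 14.0]
mosaic = feather_blend(a, b, overlap=2)   # tone ramps smoothly from 10 to 14
```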
CMOS image sensor for detection of interferon gamma protein interaction as a point-of-care approach.
Marimuthu, Mohana; Kandasamy, Karthikeyan; Ahn, Chang Geun; Sung, Gun Yong; Kim, Min-Gon; Kim, Sanghyo
2011-09-01
Complementary metal oxide semiconductor (CMOS)-based image sensors have received increased attention owing to the possibility of incorporating them into portable diagnostic devices. The present research examined the efficiency and sensitivity of a CMOS image sensor for the detection of antigen-antibody interactions involving interferon gamma protein without the aid of expensive instruments. The highest detection sensitivity, about 1 fg/ml of primary antibody, was achieved simply through a transmission mechanism: when bound protein prevents photons from reaching the sensor surface, the digital output falls, because the number of photons hitting the sensor surface is approximately proportional to the digital number. Nanoscale variation in substrate thickness after protein binding can thus be detected with high sensitivity by the CMOS image sensor. This technique can therefore be easily applied to smartphones or other clinical diagnostic devices for the detection of several biological entities, with high impact on the development of point-of-care applications.
CMOS image sensors as an efficient platform for glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo
2013-10-07
Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL(-1)) based on photon count variation. The measurement relied on changes in color intensity, which increased with increasing glucose concentration. The higher the color density of the oxidized glucose, the more photons were blocked from passing through the polydimethylsiloxane (PDMS) chip, indicating that the photon count was governed by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis confirmed the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
An analog gamma correction scheme for high dynamic range CMOS logarithmic image sensors.
Cao, Yuan; Pan, Xiaofang; Zhao, Xiaojin; Wu, Huisi
2014-12-15
In this paper, a novel analog gamma correction scheme for logarithmic image sensors, dedicated to minimizing the quantization noise of high-dynamic-range applications, is presented. The proposed implementation exploits a non-linear voltage-controlled-oscillator (VCO) based analog-to-digital converter (ADC) to perform the gamma correction during analog-to-digital conversion. As a result, the quantization noise does not increase while the high dynamic range of the logarithmic image sensor is preserved. Moreover, by combining the gamma correction with the analog-to-digital conversion, the silicon area and overall power consumption can be greatly reduced. The proposed gamma correction scheme is validated by the reported simulation results and by measurements of a test structure fabricated in a 0.35 μm standard complementary-metal-oxide-semiconductor (CMOS) process.
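The benefit of folding gamma correction into the conversion can be seen by comparing code spacing near black for a gamma-shaped versus a linear ADC transfer function. A rough numerical sketch (the exponent and bit depth are illustrative; the paper's VCO transfer curve is not reproduced here):

```python
def gamma_adc(x: float, bits: int = 8, gamma: float = 2.2) -> int:
    """Quantize a normalized input in [0, 1] through a gamma curve, as if
    the ADC transfer function itself were gamma-shaped: codes are packed
    densely at the dark end, preserving shadow detail."""
    return round((x ** (1.0 / gamma)) * (2 ** bits - 1))

def linear_adc(x: float, bits: int = 8) -> int:
    """Uniform quantizer, for comparison."""
    return round(x * (2 ** bits - 1))

# A 1%-of-full-scale step near black spans far more gamma codes than linear codes.
dark_gamma = gamma_adc(0.01) - gamma_adc(0.0)     # tens of codes
dark_linear = linear_adc(0.01) - linear_adc(0.0)  # a few codes
```

Applying the same gamma digitally after a linear conversion would stretch these few dark codes apart, which is exactly the quantization-noise penalty the analog scheme avoids.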
Advanced digital image archival system using MPEG technologies
NASA Astrophysics Data System (ADS)
Chang, Wo
2009-08-01
Digital information and records are vital to the human race regardless of the nationality or era in which they were produced. Digital image content is produced at a rapid pace: digitized cultural heritage, scientific and experimental data from high-speed imaging sensors, national-defense satellite images from governments, medical and healthcare imaging records from hospitals, and personal photo collections from digital cameras. Given these vast amounts of precious and irreplaceable data and knowledge, what standards technologies can be applied to preserve them while providing an interoperable framework for accessing the data across a variety of systems and devices? This paper presents an advanced digital image archival system that applies the international standard MPEG technologies to preserve digital image content.
Toward CMOS image sensor based glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2012-09-07
Complementary metal oxide semiconductor (CMOS) image sensors are a powerful tool for biosensing applications. In the present study, a CMOS image sensor was exploited to detect glucose levels with high sensitivity from simple photon count variation. Various concentrations of glucose (100 mg dL(-1) to 1000 mg dL(-1)) were added onto a simple poly-dimethylsiloxane (PDMS) chip, and the oxidation of glucose was catalyzed by an enzymatic reaction. Oxidized glucose produces a brown color with the help of a chromogen during the enzymatic reaction, and the color density varies with glucose concentration. Photons pass through the PDMS chip, attenuated according to color density, and hit the sensor surface. The CMOS image sensor registers a photon count that depends on the color density, and hence on the glucose concentration, and converts it into digital form. By correlating the digital results with glucose concentration, a wide range of blood glucose levels can be measured with good linearity, so this technique should enable convenient point-of-care diagnosis.
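Reading out a concentration from the digital number amounts to fitting and then inverting a calibration curve. A minimal sketch with hypothetical calibration points (the paper's actual counts are not reproduced):

```python
def fit_line(xs, ys):
    """Least-squares line y = a*x + b, serving as a calibration curve
    relating mean digital number (photon count) to glucose concentration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration: higher concentration → darker chip → lower count.
conc  = [100, 400, 700, 1000]   # mg/dL
count = [900, 750, 600, 450]    # mean digital number
a, b = fit_line(conc, count)
# Invert the curve to read an unknown sample from its photon count.
unknown = (675 - b) / a          # → 550 mg/dL
```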
Wave analysis of a plenoptic system and its applications
NASA Astrophysics Data System (ADS)
Shroff, Sapna A.; Berkner, Kathrin
2013-03-01
Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
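The a posteriori refocusing mentioned above is, in its simplest ray-based form, a shift-and-sum over sub-aperture views. A 1-D toy sketch (purely geometric; the paper's wave-optics analysis is not reproduced here):

```python
def refocus(views, alpha):
    """Shift-and-sum refocusing of 1-D sub-aperture views: view u is
    sampled with a shift of round(alpha * u) pixels, then all views are
    averaged, synthesizing the focal plane selected by alpha."""
    n = len(views[0])
    out = [0.0] * n
    offsets = range(-(len(views) // 2), len(views) // 2 + 1)
    for u, view in zip(offsets, views):
        shift = round(alpha * u)
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                out[i] += view[j]
    return [v / len(views) for v in out]

# Three views of a point source with 1 pixel of disparity per view offset.
views = [
    [0, 0, 9, 0, 0, 0, 0],   # u = -1
    [0, 0, 0, 9, 0, 0, 0],   # u =  0
    [0, 0, 0, 0, 9, 0, 0],   # u = +1
]
focused = refocus(views, 1.0)    # shifts realign the point at index 3
defocused = refocus(views, 0.0)  # no realignment: energy spread across views
```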
Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao
2014-11-17
A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on a split-capacitor array with an attenuation capacitor. Analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than that of the conventional SS ADC. A high-quality image, captured at a line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
Digital adaptive optics line-scanning confocal imaging system.
Liu, Changgeng; Kim, Myung K
2015-01-01
A digital adaptive optics line-scanning confocal imaging (DAOLCI) system is proposed by applying digital holographic adaptive optics to a digital form of line-scanning confocal imaging. In DAOLCI, each line scan is recorded as a digital hologram, which gives access, through digital holography, to the complex optical field from one slice of the sample. This complex optical field contains both the information of one slice of the sample and the optical aberration of the system, which can be sensed by a complex guide-star hologram, allowing the effect of the aberration to be compensated. After numerical aberration compensation, the corrected optical fields of a sequence of line scans are stitched into the final corrected confocal image. In DAOLCI, a numerical slit is applied to realize confocality at the sensor end; the width of this slit can be adjusted to control image contrast and speckle noise for scattering samples. DAOLCI dispenses with hardware such as the Shack–Hartmann wavefront sensor and deformable mirror, and with the closed-loop feedback adopted in conventional adaptive optics confocal imaging systems, thus reducing optomechanical complexity and cost. Numerical simulations and proof-of-principle experiments demonstrate the feasibility of this idea.
Davis, Philip A.
2012-01-01
Airborne digital-image data were collected for the Arizona part of the Colorado River ecosystem below Glen Canyon Dam in 2009. These four-band image data are similar in wavelength band (blue, green, red, and near infrared) and spatial resolution (20 centimeters) to image collections of the river corridor in 2002 and 2005. These periodic image collections are used by the Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey to monitor the effects of Glen Canyon Dam operations on the downstream ecosystem. The 2009 collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12 bits, unlike the image sensors that GCMRC used in 2002 and 2005. This study examined the performance of the SH52 sensor, on the basis of the collected image data, and determined that the SH52 sensor provided superior data relative to the previously employed sensors (that is, an early ADS40 model and Zeiss Imaging's Digital Mapping Camera) in terms of band-image registration, dynamic range, saturation, linearity to ground reflectance, and noise level. The 2009 image data were provided as orthorectified segments of each flightline to constrain the size of the image files; each river segment was covered by 5 to 6 overlapping, linear flightlines. Most flightline images for each river segment had some surface-smear defects and some river segments had cloud shadows, but these two conditions did not generally coincide in the majority of the overlapping flightlines for a particular river segment. Therefore, the final image mosaic for the 450-kilometer (km)-long river corridor required careful selection and editing of numerous flightline segments (a total of 513 segments, each 3.2 km long) to minimize surface defects and cloud shadows. The final image mosaic has a total of only 3 km of surface defects.
The final image mosaic for the western end of the corridor has areas of cloud shadow because of persistent inclement weather during data collection. This report presents visual comparisons of the 2002, 2005, and 2009 digital-image mosaics for various physical, biological, and cultural resources within the Colorado River ecosystem. All of the comparisons show the superior quality of the 2009 image data. In fact, the 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River.
Radiographic endodontic working length estimation: comparison of three digital image receptors.
Athar, Anas; Angelopoulos, Christos; Katz, Jerald O; Williams, Karen B; Spencer, Paulette
2008-10-01
This in vitro study was conducted to evaluate the accuracy of the Schick wireless image receptor compared with 2 other types of digital image receptors for measuring the radiographic landmarks pertinent to endodontic treatment. Fourteen human cadaver mandibles with retained molars were selected. A fine endodontic file (#10) was introduced into the canal at random distances from the apex and at the apex of the tooth; images were made with 3 different #2-size image receptors: DenOptix storage phosphor plates, Gendex CCD sensor (wired), and Schick CDR sensor (wireless). Six raters viewed the images to identify the radiographic apex of the tooth and the tip of the fine (#10) endodontic file. Inter-rater reliability was also assessed. Repeated-measures analysis of variance revealed a significant main effect for the type of image receptor. Raters' error in identifying structures of interest was significantly higher for DenOptix storage phosphor plates, whereas the least error was noted with the Schick CDR sensor. A significant interaction effect was observed for rater and type of image receptor, but this effect contributed only 6% (P < .01; η² = 0.06) toward the outcome. The Schick CDR wireless sensor may be preferable to other solid-state sensors because there is no cable connecting the sensor to the computer. Further testing of this sensor for other diagnostic tasks is recommended, as well as evaluation of patient acceptance.
Active-Pixel Image Sensor With Analog-To-Digital Converters
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.
1995-01-01
Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.
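The per-column first-order Sigma-Delta conversion can be sketched as an integrate-and-feedback loop whose output bit density encodes the pixel level (thresholds and input values below are illustrative, not from the chip design):

```python
def sigma_delta(samples, vref=1.0):
    """First-order Sigma-Delta modulator: integrate the error between the
    input and the 1-bit DAC feedback; the density of 1s in the output
    bitstream encodes the input level relative to vref."""
    integ, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integ += x - feedback          # accumulate quantization error
        bit = 1 if integ >= vref / 2 else 0
        feedback = vref * bit          # 1-bit DAC
        bits.append(bit)
    return bits

# A constant input at 0.25*vref yields ~25% ones after averaging (decimation).
stream = sigma_delta([0.25] * 400)
level = sum(stream) / len(stream)   # ≈ 0.25
```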
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and test of a multi-spot digital sun sensor for precise sun-line determination are described. The image-forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measurements. Nevertheless, operating the sensor over a wide field of view requires acquiring and processing images in which the number of sun spots and their intensity levels vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, together with a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the sensor's wide-field-of-view operation using a sensor prototype and a laboratory test facility. Results demonstrate that high measurement precision can be maintained even at large off-boresight angles.
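Averaging the simultaneous spots is the core precision gain: each spot yields an intensity-weighted centroid, and the N centroids are averaged, shrinking random centroiding error roughly as 1/sqrt(N). A toy sketch with hypothetical spot data:

```python
def centroid(spot):
    """Intensity-weighted centroid of one sun spot, given (x, y, I) samples."""
    total = sum(i for _, _, i in spot)
    cx = sum(x * i for x, _, i in spot) / total
    cy = sum(y * i for _, y, i in spot) / total
    return cx, cy

def mean_centroid(spots):
    """Average the centroids of all simultaneous spots into one sun-line fix."""
    cs = [centroid(s) for s in spots]
    n = len(cs)
    return sum(c[0] for c in cs) / n, sum(c[1] for c in cs) / n

# Two hypothetical spots centered on (2.0, 3.0) with opposite noise offsets.
spot1 = [(1.9, 3.0, 50), (2.1, 3.0, 50)]   # centroid (2.0, 3.0)
spot2 = [(2.0, 2.8, 40), (2.0, 3.2, 40)]   # centroid (2.0, 3.0)
cx, cy = mean_centroid([spot1, spot2])     # → (2.0, 3.0)
```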
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
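The figure of merit described (the gap of the weakest parameter, in orders of magnitude) can be sketched as below; the parameter names and values are invented for illustration, with every quantity cast as "bigger is better":

```python
import math

def performance_gap(camera, eye):
    """Per-parameter gap in orders of magnitude (positive = eye is better);
    the figure of merit is the gap of the camera's weakest parameter."""
    gaps = {k: math.log10(eye[k] / camera[k]) for k in eye}
    worst = max(gaps, key=gaps.get)
    return gaps, worst, gaps[worst]

# Hypothetical benchmark values (not the paper's measured data).
eye    = {"dynamic_range": 1e8, "dark_limit": 1e6, "spatial_res": 1e7}
camera = {"dynamic_range": 1e4, "dark_limit": 1e3, "spatial_res": 1e7}
gaps, worst, fom = performance_gap(camera, eye)   # worst = "dynamic_range"
```

With these invented numbers the camera trails the eye by 4 orders of magnitude in dynamic range, 3 in dark limit, and 0 in spatial resolution, so the figure of merit is 4.0, set by dynamic range, which mirrors the paper's finding that dynamic range and dark limit are the most limiting factors.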
NASA Astrophysics Data System (ADS)
Smith, Joseph; Marrs, Michael; Strnad, Mark; Apte, Raj B.; Bert, Julie; Allee, David; Colaneri, Nicholas; Forsythe, Eric; Morton, David
2013-05-01
Today's flat panel digital x-ray image sensors, which have been in production since the mid-1990s, are produced exclusively on glass substrates. While acceptable for use in a hospital or doctor's office, conventional glass substrate digital x-ray sensors are too fragile for use outside these controlled environments without extensive reinforcement. Reinforcement, however, significantly increases weight, bulk, and cost, making them impractical for far-forward remote diagnostic applications, which demand rugged and lightweight x-ray detectors. Additionally, glass substrate x-ray detectors are inherently rigid. This limits their use in curved or bendable, conformal x-ray imaging applications such as the non-destructive testing (NDT) of oil pipelines. However, by extending low-temperature thin-film transistor (TFT) technology previously demonstrated on plastic-substrate-based electrophoretic and organic light emitting diode (OLED) flexible displays, it is now possible to manufacture durable, lightweight, as well as flexible digital x-ray detectors. In this paper, we discuss the principal technical approaches used to apply flexible display technology to two new large-area flexible digital x-ray sensors for defense, security, and industrial applications and demonstrate their imaging capabilities. Our results include a 4.8″ diagonal, 353 × 463 resolution, flexible digital x-ray detector, fabricated on a 6″ polyethylene naphthalate (PEN) plastic substrate; and a larger, 7.9″ diagonal, 720 × 640 resolution, flexible digital x-ray detector also fabricated on PEN and manufactured on a gen 2 (370 × 470 mm) substrate.
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes: an image sensor capable of producing a frame of video data representing an image viewed by the sensor; an image memory for storing video data, such as previously recorded frame data, in a video frame location of the image memory; a read circuit for fetching the previously recorded frame data; an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor, and an encrypted data output port; and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory, overwriting the video frame location that stored the previously recorded frame data.
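The key idea, encrypting each new frame with the previously recorded frame acting as the key and then overwriting that key frame, can be illustrated with XOR as a stand-in cipher (the patent abstract does not specify the encryption algorithm):

```python
def encrypt_frame(frame, prev_frame):
    """Illustrative stand-in for the patent's encryption circuit: combine
    each new pixel with the previously recorded frame serving as the key.
    XOR is a placeholder cipher chosen only because it is self-inverting."""
    return [p ^ k for p, k in zip(frame, prev_frame)]

prev  = [17, 202, 33, 90]        # previously recorded frame = encryption key
fresh = [100, 150, 200, 250]     # new frame from the image sensor
enc = encrypt_frame(fresh, prev)
# XOR is its own inverse, so the same key recovers the frame.
dec = encrypt_frame(enc, prev)   # → [100, 150, 200, 250]
```

Overwriting the key frame with the ciphertext, as the write circuit does, is what makes the scheme self-deleting: once the old frame is gone, the key cannot be read back from the camera.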
Integrated imaging sensor systems with CMOS active pixel sensor technology
NASA Technical Reports Server (NTRS)
Yang, G.; Cunningham, T.; Ortiz, M.; Heynssens, J.; Sun, C.; Hancock, B.; Seshadri, S.; Wrigley, C.; McCarty, K.; Pain, B.
2002-01-01
This paper discusses common approaches to CMOS APS technology, as well as specific results on the five-wire programmable digital camera-on-a-chip developed at JPL. The paper also reports recent research in the design, operation, and performance of APS imagers for several imager applications.
Dual-mode lensless imaging device for digital enzyme linked immunosorbent assay
NASA Astrophysics Data System (ADS)
Sasagawa, Kiyotaka; Kim, Soo Heyon; Miyazawa, Kazuya; Takehara, Hironari; Noda, Toshihiko; Tokuda, Takashi; Iino, Ryota; Noji, Hiroyuki; Ohta, Jun
2014-03-01
Digital enzyme-linked immunosorbent assay (ELISA) is an ultra-sensitive technology for detecting biomarkers, viruses, and other targets. As in the conventional ELISA technique, a target molecule is bound to an enzyme-labeled antibody by the antigen-antibody reaction. In this technology, a femtoliter droplet chamber array is used as the reaction chambers. Owing to the small chamber volume, the fluorescent product of a single enzyme can reach a concentration sufficient for detection by fluorescence microscopy. In this work, we demonstrate a miniaturized lensless imaging device for digital ELISA using a custom image sensor. The pixel array of the sensor is coated with a 20 μm-thick yellow filter to eliminate excitation light at 470 nm and covered by a fiber optic plate (FOP) to protect the sensor without resolution degradation. The droplet chamber array, formed on a 50 μm-thick glass plate, is placed directly on the FOP. In digital ELISA, microbeads coated with antibody are loaded into the droplet chamber array, and the ratio of fluorescent to non-fluorescent chambers containing microbeads is observed. In fluorescence imaging, the spatial resolution is degraded by light spreading through the glass plate, because the fluorescence is emitted omnidirectionally. This degradation was compensated by image processing, and a resolution of ~35 μm was achieved. In bright-field imaging, projected images of the beads under collimated illumination are observed. By varying the incident angle and compositing the images, the microbeads were successfully imaged.
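The "on"-chamber ratio mentioned above is what makes the assay quantitative. In the standard digital ELISA analysis (a general property of the method, not specific to this paper), targets distribute over beads following Poisson statistics, so the mean number of targets per bead can be recovered from the fraction of fluorescent chambers; the bead counts below are illustrative:

```python
import math

def mean_targets_per_bead(n_on, n_beads):
    """Poisson estimate used in digital ELISA: if a fraction f_on of
    bead-loaded chambers fluoresce, the mean number of enzyme-labeled
    targets per bead is lambda = -ln(1 - f_on)."""
    f_on = n_on / n_beads
    return -math.log(1.0 - f_on)

# Illustrative counts, not data from the paper.
lam = mean_targets_per_bead(950, 10000)   # 9.5% of chambers are "on"
```

At low occupancy lambda is close to f_on itself; the logarithm matters when a bead is likely to carry more than one target.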
Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P
2017-10-13
This study presents the design, digital implementation, and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. OIS is now an important feature of commercial mobile camera phones; it mechanically reduces the image blur caused by hand shaking while shooting photos. The OIS developed in this study moves the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blur from hand shaking. The compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation shows the favorable performance of the designed OIS: it stabilizes the lens holder to the desired position within 0.02 s, much less than previously reported times of around 0.1 s. The resulting residual vibration is less than 2.2-2.5 μm, commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
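The lead-lag compensator named above has the classic transfer function C(s) = K(s + z)/(s + p). A minimal digital implementation, discretized with the bilinear (Tustin) transform; the gains K, z, p and the sample period here are illustrative placeholders, not the paper's tuned FPGA values:

```python
class LeadLag:
    """Discrete lead-lag compensator C(s) = K*(s + z)/(s + p),
    discretized with the bilinear (Tustin) transform at sample period T.
    All numeric values below are illustrative, not the paper's gains."""
    def __init__(self, K, z, p, T):
        a = 2.0 / T                       # Tustin substitution s = a*(1-q)/(1+q)
        self.b0 = K * (a + z) / (a + p)
        self.b1 = K * (z - a) / (a + p)
        self.a1 = (p - a) / (a + p)
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, e):
        """Difference equation: u[k] = b0*e[k] + b1*e[k-1] - a1*u[k-1]."""
        u = self.b0 * e + self.b1 * self.e_prev - self.a1 * self.u_prev
        self.e_prev, self.u_prev = e, u
        return u

ctrl = LeadLag(K=1.0, z=50.0, p=500.0, T=1e-4)
u = 0.0
for _ in range(20000):        # constant error: output settles to the
    u = ctrl.step(1.0)        # DC gain of C(s), which is K*z/p = 0.1
```

The first output sample is near K (the lead boost gives a large, fast initial correction), after which the response decays to the DC gain.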
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a large 391-megapixel CMOS sensor was used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
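The dynamic-range comparison above can be made concrete: dynamic range in dB is 20·log10 of the ratio of full-well capacity to noise floor, so doubling the usable linear range adds about 6 dB. A small sketch with made-up sensor numbers (not Leica's published specifications):

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Dynamic range in dB from full-well capacity and noise floor,
    both expressed in electrons."""
    return 20.0 * math.log10(full_well_e / noise_e)

# Made-up numbers for illustration: a CCD with a 60 ke- full well and
# 10 e- noise vs. a CMOS sensor with twice the usable linear range at
# the same noise floor.
ccd_db = dynamic_range_db(60_000, 10)
cmos_db = dynamic_range_db(120_000, 10)
gain_db = cmos_db - ccd_db            # doubling the range adds ~6 dB
```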
Teich, Sorin; Al-Rawi, Wisam; Heima, Masahiro; Faddoul, Fady F; Goldzweig, Gil; Gutmacher, Zvi; Aizenbud, Dror
2016-10-01
To evaluate the image quality generated by eight commercially available intraoral sensors. Eighteen clinicians ranked the quality of a bitewing acquired from one subject using eight different intraoral sensors. Analytical methods used to evaluate clinical image quality included the Visual Grading Characteristics method, which helps to quantify subjective opinions to make them suitable for analysis. The Dexis sensor was ranked significantly better than Sirona and Carestream-Kodak sensors; and the image captured using the Carestream-Kodak sensor was ranked significantly worse than those captured using Dexis, Schick and Cyber Medical Imaging sensors. The Image Works sensor image was rated the lowest by all clinicians. Other comparisons resulted in non-significant results. None of the sensors was considered to generate images of significantly better quality than the other sensors tested. Further research should be directed towards determining the clinical significance of the differences in image quality reported in this study. © 2016 FDI World Dental Federation.
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10⁶ pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 × 128 array of photodiodes on complementary metal-oxide semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.
He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P
2013-09-18
The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented at each pixel. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals, with balanced optimization of signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors: the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; a low-resource implementation of the digital processor enables on-chip processing; and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.
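For reference, the quantity an LDBF system conventionally estimates is the first moment of the Doppler power spectrum normalized by the squared DC photocurrent. The paper's pipeline computes an analog equivalent at each pixel; the sketch below is the standard off-chip digital version, with illustrative band limits and synthetic test frequencies:

```python
import numpy as np

def ldbf_flux(signal, fs, f_lo=20.0, f_hi=4000.0):
    """Standard laser Doppler flux estimate: first moment of the Doppler
    power spectrum over [f_lo, f_hi], normalized by the squared DC
    level. Digital reference version, not the chip's analog pipeline."""
    n = len(signal)
    dc = signal.mean()
    spec = np.abs(np.fft.rfft(signal - dc))**2 / n      # power spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(freqs[band] * spec[band]) / dc**2

# Synthetic photocurrents: a Doppler beat on a DC level; faster flow
# appears as a higher beat frequency (chosen to fall on exact FFT bins).
fs, n = 8192.0, 8192
t = np.arange(n) / fs
slow = 1.0 + 0.1 * np.sin(2 * np.pi * 1000 * t)
fast = 1.0 + 0.1 * np.sin(2 * np.pi * 2000 * t)
flux_slow = ldbf_flux(slow, fs)
flux_fast = ldbf_flux(fast, fs)
```

With equal beat power, doubling the beat frequency doubles the first-moment flux estimate.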
Precise color images a high-speed color video camera system with three intensified sensors
NASA Astrophysics Data System (ADS)
Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.
1999-06-01
High-speed imaging systems are used in a wide range of science and engineering fields. Although high-speed camera systems have been improved to high performance, most of their applications only capture high-speed motion pictures. However, in some fields of science and technology, it is useful to obtain other information as well, such as the temperature of combustion flames, thermal plasma, and molten materials. Recent digital high-speed video imaging technology should be able to extract such information from these objects. For this purpose, we have already developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 × 64 pixels and 4,500 pps at 256 × 256 pixels, with 256 (8-bit) intensity resolution for each pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. In order to obtain precise color images from this camera system, we need to develop a digital technique, consisting of a computer program and ancillary instruments, to adjust the displacement of images taken from two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original image, the displacement was adjusted to within 0.2 pixels at most by this method.
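The core of a pixel-based displacement adjustment is resampling one sensor's image at fractional-pixel offsets until it registers with the others. A minimal 1D sketch using linear interpolation (the paper does not disclose its exact resampling kernel, so linear is an assumption):

```python
import numpy as np

def shift_rows(img, dx):
    """Resample each row at a fractional horizontal offset dx using
    linear interpolation -- the sub-pixel registration step needed to
    align images from separate sensors. Edge samples are clamped."""
    h, w = img.shape
    x = np.arange(w, dtype=float)
    out = np.empty((h, w))
    for r in range(h):
        out[r] = np.interp(x - dx, x, img[r])
    return out

ramp = np.tile(np.arange(8.0), (3, 1))   # simple horizontal ramp pattern
shifted = shift_rows(ramp, 0.25)         # quarter-pixel shift to the right
```

On the linear ramp the interior samples land exactly a quarter pixel lower, which is how sub-pixel accuracy like the paper's 0.2-pixel figure is verified.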
Image sensor for testing refractive error of eyes
NASA Astrophysics Data System (ADS)
Li, Xiangning; Chen, Jiabi; Xu, Longyun
2000-05-01
It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of eyes does not need the cooperation of children and can be used for general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in a digital form that can be directly processed by a computer. In order to process the image accurately by digital techniques, a formula considering the effect of an extended light source and the size of the lens aperture has been deduced, which is more reliable in practice. Computer simulation of the image sensing was performed to verify the accuracy of the results.
Geiger-Mode Avalanche Photodiode Arrays Integrated to All-Digital CMOS Circuits.
Aull, Brian
2016-04-08
This article reviews MIT Lincoln Laboratory's work over the past 20 years to develop photon-sensitive image sensors based on arrays of silicon Geiger-mode avalanche photodiodes. Integration of these detectors with all-digital CMOS readout circuits enables exquisitely sensitive solid-state imagers for lidar, wavefront sensing, and passive imaging.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode, with a controlled factor-of-200 optical attenuation of the scene irradiance, to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible band CAOS smart camera is successfully demonstrated operating in the CDMA-mode with Walsh-design CAOS pixel codes of up to 4096 bits in length at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, unspoiled, bright-light spectrally diverse targets.
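The Walsh codes named above are mutually orthogonal ±1 sequences, which is what lets a single point detector separate many time-multiplexed CAOS pixels by correlation. A toy sketch with a 16-bit code length (the camera uses up to 4096 bits) and illustrative irradiance values:

```python
import numpy as np

def walsh_matrix(n):
    """Sylvester-construction Hadamard matrix of order n (a power of
    two); its rows are mutually orthogonal +/-1 Walsh-type codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Each CAOS pixel's irradiance is time-modulated by its own code row;
# the point detector sees the coded sum; correlation separates pixels.
n = 16                                   # toy code length (paper: up to 4096)
H = walsh_matrix(n)
irradiance = np.zeros(n)
irradiance[:4] = [5.0, 0.0, 2.5, 1.0]    # a few bright CAOS pixels
detector = H.T @ irradiance              # point-detector time series
recovered = (H @ detector) / n           # correlation decode: H @ H.T = n*I
```

Because the codes are orthogonal, each pixel's irradiance is recovered exactly, independent of how bright its neighbors are.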
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, together with a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex the three wavelength channels. This demultiplexing step is conventionally paired with interpolation-based Bayer demosaicing methods. However, because the pixels of the different color channels on a Bayer image sensor chip are not at the same physical location, the conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through the digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves color imaging performance very similar to conventional sequential R, G, B illumination, with a 3-fold improvement in image acquisition time and data efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited settings and point-of-care offices.
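The demultiplexing step described above relies on each Bayer pixel recording a known mixture of the three simultaneous wavelength channels. A per-point sketch, assuming a calibrated 3×3 transmission matrix whose values are made up for illustration (the paper's calibration data is not reproduced):

```python
import numpy as np

# Hypothetical calibrated transmission matrix: row i gives the i-th
# Bayer filter's (R, G, B pixel) response to the three simultaneous
# illumination wavelengths. Values are illustrative only.
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.70]])

true_rgb = np.array([0.9, 0.4, 0.2])     # hologram intensities at one point
measured = T @ true_rgb                  # what the Bayer pixels record
demuxed = np.linalg.solve(T, measured)   # recovered wavelength channels
```

In D-PSR this linear unmixing is merged with the pixel super-resolution step rather than applied after interpolation-based demosaicing, which is what suppresses the color artifacts.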
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a new communication concept called "vision communication," based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal, such as visible, infrared, or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync data and information data. The Sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
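Once the Sync data has located and rectified the transmitter area, decoding one snapshot reduces to thresholding the pixel block behind each LED. A minimal single-wavelength sketch (the 4×4 grid size and 0.5 threshold are illustrative, not the paper's parameters):

```python
import numpy as np

def decode_frame(snapshot, grid=(4, 4), threshold=0.5):
    """Decode one rectified snapshot of an LED array into bits by
    averaging the pixel block behind each LED and thresholding.
    Assumes the Sync step has already located and de-warped the array."""
    h, w = snapshot.shape
    gh, gw = grid
    bits = []
    for i in range(gh):
        for j in range(gw):
            block = snapshot[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            bits.append(1 if block.mean() > threshold else 0)
    return bits

# Synthetic 32x32 snapshot of a 4x4 array with LEDs 0, 5, 10, 15 lit.
snap = np.zeros((32, 32))
for k in (0, 5, 10, 15):
    i, j = divmod(k, 4)
    snap[i*8:(i+1)*8, j*8:(j+1)*8] = 1.0
bits = decode_frame(snap)
```

Multi-spectral operation would repeat this per color channel, which is where the WDM-like data-rate gain comes from.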
Digital Image Processing Overview For Helmet Mounted Displays
NASA Astrophysics Data System (ADS)
Parise, Michael J.
1989-09-01
Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.
Radiometry simulation within the end-to-end simulation tool SENSOR
NASA Astrophysics Data System (ADS)
Wiest, Lorenz; Boerner, Anko
2001-02-01
An end-to-end simulation is a valuable tool for sensor system design, development, optimization, testing, and calibration. This contribution describes the radiometry module of the end-to-end simulation tool SENSOR. It features MODTRAN 4.0-based look-up tables in conjunction with a cache-based multilinear interpolation algorithm to speed up radiometry calculations. It employs a linear reflectance parameterization to reduce look-up table size, considers effects due to the topography of a digital elevation model (surface slope, sky view factor), and uses a reflectance class feature map to assign Lambertian and BRDF reflectance properties to the digital elevation model. The overall consistency of the radiometry part is demonstrated by good agreement between ATCOR 4-retrieved reflectance spectra of a simulated digital image cube and the original reflectance spectra used to simulate this image data cube.
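The cache-based multilinear interpolation idea can be sketched in two dimensions: table nodes are fetched through a cache and blended with bilinear weights. The grid axes and table values below are placeholders, not SENSOR's actual MODTRAN parameterization:

```python
import numpy as np
from functools import lru_cache

# Placeholder 2-D radiance table on a small parameter grid; the real
# tool interpolates MODTRAN-derived tables over more dimensions.
grid_x = np.array([0.0, 1.0, 2.0, 3.0])
grid_y = np.array([0.0, 10.0, 20.0])
table = np.array([[x + 0.1 * y for y in grid_y] for x in grid_x])

@lru_cache(maxsize=1024)
def node(i, j):
    """Cached table-node access (stands in for an expensive fetch)."""
    return float(table[i, j])

def bilinear(x, y):
    """Bilinear interpolation of the table at (x, y)."""
    i = max(min(int(np.searchsorted(grid_x, x)) - 1, len(grid_x) - 2), 0)
    j = max(min(int(np.searchsorted(grid_y, y)) - 1, len(grid_y) - 2), 0)
    tx = (x - grid_x[i]) / (grid_x[i+1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j+1] - grid_y[j])
    return ((1-tx)*(1-ty)*node(i, j) + tx*(1-ty)*node(i+1, j)
            + (1-tx)*ty*node(i, j+1) + tx*ty*node(i+1, j+1))

val = bilinear(1.5, 5.0)   # table is linear in x and y, so this is exact
```

Because nearby pixels query the same table cells, the cache turns most node fetches into hits, which is the speedup the abstract refers to.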
Low Power Camera-on-a-Chip Using CMOS Active Pixel Sensor Technology
NASA Technical Reports Server (NTRS)
Fossum, E. R.
1995-01-01
A second generation image sensor technology has been developed at the NASA Jet Propulsion Laboratory as a result of the continuing need to miniaturize space science imaging instruments. Implemented using standard CMOS, the active pixel sensor (APS) technology permits the integration of the detector array with on-chip timing, control and signal chain electronics, including analog-to-digital conversion.
Mori, Yutaka; Nomura, Takanori
2013-06-01
In holographic displays, speckle noise in the reconstructed images is undesirable. A method for improving reconstructed image quality by synthesizing low-coherence digital holograms is proposed. Speckle-free reconstruction of holograms is possible with low-coherence digital holography. An image sensor records low-coherence digital holograms, and the holograms are synthesized by computational calculation. Two approaches, the threshold-processing and the picking-a-peak methods, are proposed in order to reduce the random noise of low-coherence digital holograms. The reconstructed image quality of the proposed methods is compared with the case of high-coherence digital holography. A quantitative evaluation is given to confirm the proposed methods. In addition, a visual evaluation by 15 people is also shown.
A sprayable luminescent pH sensor and its use for wound imaging in vivo.
Schreml, Stephan; Meier, Robert J; Weiß, Katharina T; Cattani, Julia; Flittner, Dagmar; Gehmert, Sebastian; Wolfbeis, Otto S; Landthaler, Michael; Babilas, Philipp
2012-12-01
Non-invasive luminescence imaging is of great interest for studying biological parameters in wound healing, tumors and other biomedical fields. Recently, we developed the first method for 2D luminescence imaging of pH in vivo on humans, and a novel method for one-stop-shop visualization of oxygen and pH using the RGB read-out of digital cameras. Both methods make use of semitransparent sensor foils. Here, we describe a sprayable ratiometric luminescent pH sensor, which combines properties of both these methods. Additionally, a major advantage is that the sensor spray is applicable to very uneven tissue surfaces due to its consistency. A digital RGB image of the spray on tissue is taken. The signal of the pH indicator (fluorescein isothiocyanate) is stored in the green channel (G), while that of the reference dye [ruthenium(II)-tris-(4,7-diphenyl-1,10-phenanthroline)] is stored in the red channel (R). Images are processed by rationing luminescence intensities (G/R) to result in pseudocolor pH maps of tissues, e.g. wounds. © 2012 John Wiley & Sons A/S.
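The ratiometric read-out described above reduces, per pixel, to dividing the green channel (indicator) by the red channel (reference). A minimal sketch; the calibration curve mapping the G/R ratio to actual pH values is not reproduced here:

```python
import numpy as np

def ratiometric_map(rgb):
    """Per-pixel G/R ratio: the green channel holds the pH indicator
    (FITC) signal, the red channel the reference dye signal. Converting
    the ratio to pH requires a calibration curve not given here."""
    g = rgb[..., 1].astype(float)
    r = rgb[..., 0].astype(float)
    return g / np.clip(r, 1e-6, None)   # guard against division by zero

img = np.zeros((2, 2, 3))
img[..., 0] = 100.0                             # uniform reference (R)
img[..., 1] = [[50.0, 100.0], [150.0, 200.0]]   # varying indicator (G)
ratio = ratiometric_map(img)
```

Rationing against the reference dye cancels uneven spray thickness and illumination, which is why the sensor works on very uneven tissue surfaces.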
NASA Technical Reports Server (NTRS)
Schott, John; Gerace, Aaron; Brown, Scott; Gartley, Michael; Montanaro, Matthew; Reuter, Dennis C.
2012-01-01
The next Landsat satellite, which is scheduled for launch in early 2013, will carry two instruments: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). Significant design changes over previous Landsat instruments have been made to these sensors to potentially enhance the quality of Landsat image data. TIRS, which is the focus of this study, is a dual-band instrument that uses a push-broom style architecture to collect data. To help understand the impact of design trades during instrument build, an effort was initiated to model TIRS imagery. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool was used to produce synthetic "on-orbit" TIRS data with detailed radiometric, geometric, and digital image characteristics. This work presents several studies that used DIRSIG simulated TIRS data to test the impact of engineering performance data on image quality in an effort to determine if the image data meet specifications or, in the event that they do not, to determine if the resulting image data are still acceptable.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
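The statistical interpolation idea is ordinary Gaussian-process regression with a sensor-noise term; the paper's contribution is the exact O(N^(3/2)) grid-structured solver. A naive O(N³) 1D sketch of the GP posterior mean, with illustrative kernel hyperparameters, filling the gaps of an every-other-pixel polarization channel:

```python
import numpy as np

def gp_interpolate(x_obs, y_obs, x_new, length=2.0, noise=0.05):
    """Gaussian-process posterior mean with an RBF kernel and a noise
    term. Naive O(N^3) linear solve; the paper exploits the pixel grid
    to do this exactly in O(N^(3/2))."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    return k(x_new, x_obs) @ np.linalg.solve(K, y_obs)

# One row of a division-of-focal-plane sensor: this polarization channel
# is sampled at every other pixel; GP regression fills the gaps.
x_obs = np.arange(0.0, 16.0, 2.0)        # pixels carrying this channel
y_obs = np.sin(x_obs / 3.0)              # smooth synthetic scene values
x_new = np.arange(1.0, 14.0, 2.0)        # missing pixel positions
y_new = gp_interpolate(x_obs, y_obs, x_new)
err = float(np.max(np.abs(y_new - np.sin(x_new / 3.0))))
```

The noise term is what distinguishes this from plain interpolation: noisy samples are smoothed rather than fit exactly, which is where the low-SNR advantage comes from.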
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip, according to the objects' movement, results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited in CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, just a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
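The digital TDI algorithm described above amounts to shifting each readout by the object's motion and accumulating. A minimal sketch (np.roll wraps at the edges; a real camera would discard the wrapped rows, and the FPGA does this with row addressing rather than array shifts):

```python
import numpy as np

def digital_tdi(frames, shift_per_frame=1):
    """Digital TDI: between consecutive readouts the object moves down
    by shift_per_frame rows, so shifting frame k up by k*shift and
    summing accumulates exposure for each object line without adding
    motion blur. (np.roll wraps; a real camera discards wrapped rows.)"""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, f in enumerate(frames):
        acc += np.roll(f, -k * shift_per_frame, axis=0)
    return acc

# A bright object line crossing a 6-row sensor, one row per frame.
n_stages, rows, cols = 4, 6, 5
frames = []
for k in range(n_stages):
    f = np.zeros((rows, cols))
    f[k, :] = 1.0          # the line sits one row lower in each frame
    frames.append(f)
out = digital_tdi(frames)  # row 0 accumulates all four exposures
```

Each object line receives n_stages times the single-frame exposure, which is exactly the light-gathering gain that motivates TDI.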
A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer
Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie
2014-01-01
Sensor simulators can be used to forecast the imaging quality of a new hyperspectral imaging spectrometer and to generate simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, implemented before the spatial degradation, is developed to trade off the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex-grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral, and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF, and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating-sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral, and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip imaging system, in which the image sensor has a fully digital interface, can bring image-capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, can make the image more realistic and colorful. We can say that the color filter makes life more colorful. What is a color filter? A color filter blocks the incident light except for the color with the specific wavelength and transmittance of the filter itself. The color filter process coats and patterns green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matched pixels of the image-sensing array. From the signal caught at each pixel, the environmental image can be reconstructed. The wide use of digital electronic cameras and multimedia applications today makes the role of the color filter increasingly important. Although developing the color filter process is challenging, it is very worthwhile. We provide the best service: short cycle time, excellent color quality, and high, stable yield. The key issues of an advanced color process that have to be solved and implemented are planarization and micro-lens technology. Many other key points of color filter process technology that must be considered are also described in this paper.
Geiger-Mode Avalanche Photodiode Arrays Integrated to All-Digital CMOS Circuits
2016-01-20
Figure 7: 4×4 GMAPD array wire-bonded to CMOS timing circuits. Figure 8: Low-fill-factor APD design used in lidar sensors. The APD doping... epitaxial growth, and the pixels are isolated by mesa etch. 128×32 lidar image sensors were built by bump-bonding the APD arrays to a CMOS timing... passive image sensor with this large a format based on hybridization of a GMAPD array to a CMOS readout. Fig. 14 shows one of the first images taken
Giga-pixel lensfree holographic microscopy and tomography using color image sensors.
Isikman, Serhan O; Greenbaum, Alon; Luo, Wei; Coskun, Ahmet F; Ozcan, Aydogan
2012-01-01
We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ~350 nm lateral resolution, corresponding to a numerical aperture of ~0.8, across a field-of-view of ~20.5 mm². This constitutes a digital image with ~0.7 billion effective pixels in both amplitude and phase channels (i.e., ~1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ~0.35 µm × 0.35 µm × ~2 µm in x, y and z, respectively, creating an effective voxel size of ~0.03 µm³ across a sample volume of ~5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.
Digital imaging for dental caries.
Wenzel, A
2000-04-01
Laboratory studies show that digital intraoral radiography systems are as accurate as dental film for the detection of caries when a good-quality image is obtained, although more re-takes might be necessary because of positioning errors with the digital systems, particularly the charge-coupled device sensors. The phosphor plate is more comfortable for the patient than nondigital systems, and the dose can be further reduced with the storage phosphors. Cross-contamination does not pose a problem with digital systems if simple hygiene procedures are observed.
NASA Astrophysics Data System (ADS)
Li, Liang; Chen, Zhiqiang; Zhao, Ziran; Wu, Dufan
2013-01-01
At present, there are three main x-ray imaging modalities for dental clinical diagnosis: radiography, panoramic imaging and computed tomography (CT). We develop a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging, which can be seen as an intermediate modality between traditional radiography and CT. In addition to the normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to complete the tomosynthesis data acquisition. During scanning, the measurement geometry is such that the sensor is stationary inside the patient's mouth and the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. Therefore, the projection geometry can be obtained without any other reference objects, which makes the system easy to accept in clinical applications. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, simulation and experiment were both carried out to evaluate this intra-oral imaging modality and algorithm. The results show that IDT has the potential to become a new tool for dental clinical diagnosis.
Electro-optical imaging systems integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, R.
1987-01-01
Since the advent of high-resolution, high-data-rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct-overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third-generation TLBR was designed and two units delivered to rapidly produce high-quality wet-process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides complete phase-synchronism isolation between the scanner and the digital image data input via strobed, two-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for reconnaissance/tactical applications.
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in the 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while images hold steady without events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
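The scaled-resolution modes described above can be illustrated in software by simple block averaging; a minimal sketch (with a hypothetical `downscale` helper), not the sensor's actual on-chip readout scheme:

```python
import numpy as np

def downscale(frame, factor):
    # Average non-overlapping factor x factor blocks of the frame
    # (assumes the frame dimensions are divisible by the factor).
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

frame = np.arange(16.0).reshape(4, 4)   # toy 4x4 "full resolution" frame
half = downscale(frame, 2)              # 1/2-resolution mode: 2x2 block means
print(half)
```

Each halving of resolution reduces the amount of pixel data to digitize and transmit by a factor of four, which is the mechanism behind the power savings reported for the scaled modes.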
Digital Intraoral Imaging Re-Exposure Rates of Dental Students.
Senior, Anthea; Winand, Curtis; Ganatra, Seema; Lai, Hollis; Alsulfyani, Noura; Pachêco-Pereira, Camila
2018-01-01
A guiding principle of radiation safety is ensuring that radiation dosage is as low as possible while yielding the necessary diagnostic information. Intraoral images taken with conventional dental film have a higher re-exposure rate when taken by dental students compared to experienced staff. The aim of this study was to examine the prevalence of and reasons for re-exposure of digital intraoral images taken by third- and fourth-year dental students in a dental school clinic. At one dental school in Canada, the total number of intraoral images taken by third- and fourth-year dental students, re-exposures, and error descriptions were extracted from patient clinical records for an eight-month period (September 2015 to April 2016). The data were categorized to distinguish between digital images taken with solid-state sensors or photostimulable phosphor plates (PSP). The results showed that 9,397 intraoral images were made, and 1,064 required re-exposure. The most common error requiring re-exposure for bitewing images was an error in placement of the receptor too far mesially or distally (29% for sensors and 18% for PSP). The most common error requiring re-exposure for periapical images was inadequate capture of the periapical area (37% for sensors and 6% for PSP). A retake rate of 11% was calculated, and the common technique errors causing image deficiencies were identified. Educational intervention can now be specifically designed to reduce the retake rate and radiation dose for future patients.
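The reported retake rate follows directly from the counts above:

```python
# Retake rate from the study's counts: 1,064 re-exposures out of 9,397 images.
total_images = 9397
re_exposures = 1064

retake_rate = re_exposures / total_images
print(f"Retake rate: {retake_rate:.1%}")  # → Retake rate: 11.3%
```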
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Staller, C.; Zhou, Z;
1994-01-01
JPL, under sponsorship from the NASA Office of Advanced Concepts and Technology, has been developing a second-generation solid-state image sensor technology. Charge-coupled devices (CCD) are a well-established first generation image sensor technology. For both commercial and NASA applications, CCDs have numerous shortcomings. In response, the active pixel sensor (APS) technology has been under research. The major advantages of APS technology are the ability to integrate on-chip timing, control, signal-processing and analog-to-digital converter functions, reduced sensitivity to radiation effects, low power operation, and random access readout.
Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.
Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo
2016-01-01
In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for simultaneous real-time DNA amplification and detection. The amplification-coupled detection system directly measures changes in photon number based on the generation of magnesium pyrophosphate and the accompanying color changes; the photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis detection were carried out to prove the efficiency of the CMOS image sensor-based LAMP system. Moreover, Clostridium perfringens was utilized as a proof-of-concept target for the new system. We anticipate that this CMOS image sensor-based LAMP method will enable the creation of cost-effective, label-free, optical, real-time and portable molecular diagnostic devices.
Development of CMOS Active Pixel Image Sensors for Low Cost Commercial Applications
NASA Technical Reports Server (NTRS)
Fossum, E.; Gee, R.; Kemeny, S.; Kim, Q.; Mendis, S.; Nakamura, J.; Nixon, R.; Ortiz, M.; Pain, B.; Zhou, Z.;
1994-01-01
This paper describes ongoing research and development of CMOS active pixel image sensors for low cost commercial applications. A number of sensor designs have been fabricated and tested in both p-well and n-well technologies. Major elements in the development of the sensor include on-chip analog signal processing circuits for the reduction of fixed pattern noise, on-chip timing and control circuits and on-chip analog-to-digital conversion (ADC). Recent results and continuing efforts in these areas will be presented.
How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995 digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005 significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010 film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a "pixel war" in which the driving feature of the cameras was the pixel count; even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only single lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.
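The pixel-count arithmetic behind the title question is simple; assuming the conventional 300 dots-per-inch printing standard (an assumption for illustration, not a figure from the paper):

```python
# Pixels needed for a width x height inch print at a given printing resolution.
def pixels_for_print(width_in, height_in, dpi=300):
    return (width_in * dpi) * (height_in * dpi)

needed = pixels_for_print(4, 6)
print(f"{needed / 1e6:.2f} megapixels")  # → 2.16 megapixels
```

By this measure a sensor of roughly 2-3 mega-pixels already covers a 4"×6" print, which is why the paper argues that, beyond modest counts, pixel and sensor size matter more than ever-higher pixel counts.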
NASA Astrophysics Data System (ADS)
Wei, Minsong; Xing, Fei; You, Zheng
2017-01-01
The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology is proposed, which transforms a two-dimensional image into two linear profile outputs, realizing a high update rate at very low power consumption. A multiple-spots recovery approach with an asymmetric mask pattern design principle is introduced to fit the multiplexing image detector method for accuracy improvement of the sun sensor within a large field of view (FOV). A FOV determination principle based on the concept of FOV regions is also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, is utilized to achieve wireless and self-powered functionality. The prototype of the sun sensor is approximately 10 times smaller in size and weight than a conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
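The two-profile readout described above can be illustrated by collapsing an image into row and column sums and locating the spot centroid; a minimal sketch assuming a single sun spot, with hypothetical function names:

```python
import numpy as np

def profiles(image):
    # Collapse the 2-D image into two 1-D profiles: column sums and row sums.
    return image.sum(axis=0), image.sum(axis=1)

def centroid(profile):
    # Intensity-weighted mean position along one axis.
    idx = np.arange(len(profile))
    return float((idx * profile).sum() / profile.sum())

image = np.zeros((8, 8))
image[3, 5] = image[3, 6] = 2.0      # a small sun spot straddling two columns
col_profile, row_profile = profiles(image)
print(centroid(col_profile), centroid(row_profile))  # → 5.5 3.0
```

Transmitting two short profiles instead of a full frame is what allows a high update rate at low power, which is the trade the sensor design exploits.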
CMOS cassette for digital upgrade of film-based mammography systems
NASA Astrophysics Data System (ADS)
Baysal, Mehmet A.; Toker, Emre
2006-03-01
While full-field digital mammography (FFDM) technology is gaining clinical acceptance, the overwhelming majority (96%) of the installed base of mammography systems are conventional film-screen (FSM) systems. A high-performance, economical digital cassette-based product to conveniently upgrade FSM systems to FFDM would accelerate the adoption of FFDM and make the clinical and technical advantages of FFDM available to a larger population of women. The planned FFDM cassette is based on our commercial Digital Radiography (DR) cassette for 10 cm x 10 cm field-of-view spot imaging and specimen radiography, utilizing a 150 micron columnar CsI(Tl) scintillator and 48 micron active-pixel CMOS sensor modules. Unlike a Computed Radiography (CR) cassette, which requires an external digitizer, our DR cassette transfers acquired images to a display workstation within approximately 5 seconds of exposure, greatly enhancing patient flow. We will present the physical performance of our prototype system against other FFDM systems in clinical use today, using established objective criteria, such as the Modulation Transfer Function (MTF) and Detective Quantum Efficiency (DQE), and subjective criteria, such as a contrast-detail (CD-MAM) observer performance study. Driven by strong demand from the computer industry, CMOS technology is one of the lowest-cost and most readily accessible technologies available for FFDM today. The recent popular use of CMOS imagers in high-end consumer cameras has also resulted in significant advances in the imaging performance of CMOS sensors against rival CCD sensors. This study promises to take advantage of these unique features to develop the first CMOS-based FFDM upgrade cassette.
ERIC Educational Resources Information Center
Lancor, Rachael; Lancor, Brian
2014-01-01
In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
Image restoration techniques as applied to Landsat MSS and TM data
Meyer, David
1987-01-01
Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data: the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of its imaging sensor. In particular, photo-response non-uniformity (PRNU) noise has been used in source camera identification (SCI). However, this technique can also be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
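The SCI pipeline that such a counter-forensic method attacks can be sketched as follows. This is a minimal illustration with hypothetical function names; it uses a simple box-filter denoiser where the PRNU literature typically uses wavelet denoising:

```python
import numpy as np

def denoise(img, k=3):
    # Simple box-filter denoiser standing in for the usual wavelet denoiser.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    # High-frequency residual in which the PRNU pattern survives.
    return img.astype(float) - denoise(img)

def estimate_fingerprint(images):
    # Average residuals over many images from one camera to isolate PRNU.
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(residual, fingerprint):
    # Normalized cross-correlation between a test residual and a fingerprint.
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A test image's residual is correlated against each candidate camera's fingerprint; a counter-forensic method works by suppressing the PRNU term so this correlation falls below the identification threshold.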
32 CFR 813.2 - Sources of VIDOC.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...
32 CFR 813.2 - Sources of VIDOC.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...
32 CFR 813.2 - Sources of VIDOC.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...
32 CFR 813.2 - Sources of VIDOC.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...
32 CFR 813.2 - Sources of VIDOC.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Air Digital Recorder (ADR) images from airborne imagery systems, such as heads up displays, radar scopes, and images from electro-optical sensors carried aboard aircraft and weapons systems. (e...
NASA Technical Reports Server (NTRS)
Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel
2016-01-01
Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first principle-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
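The core operation, shrinking vectorized local windows onto their principal components, can be sketched as follows; a minimal illustration of PCA-based denoising (with a hypothetical helper name), not the paper's exact spatially adaptive CFA algorithm:

```python
import numpy as np

def pca_denoise(patches, n_keep):
    # patches: (N, d) matrix, one vectorized local window per row.
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    basis = vecs[:, -n_keep:]            # keep the strongest components
    coeffs = centered @ basis            # project onto the signal subspace
    return coeffs @ basis.T + mean       # reconstruct, discarding noise dims
```

Noise energy spread across all d dimensions is discarded except for the part lying in the retained signal subspace, which is the usual PCA shrinkage argument for denoising.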
Research on coding and decoding method for digital levels.
Tu, Li-fen; Zhong, Si-dong
2011-01-20
A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in the digital image signal, it overcomes the trade-off whereby field of view and image resolution restrict each other in digital level measurement, making geodetic leveling easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm over a measuring range of 2 m to 100 m, which meets practical needs.
DMSP Special Sensor Microwave/Imager Calibration/Validation. Volume 1
1990-01-01
each channel samples the hot load on every scan and commands a gain change up when the hot load is below 7/16th of the analog-to-digital converter range... OLS imagery. A threshold blanking technique was used to convert the manual analyses into synthetic digital images containing the cloud truth... should include OLS digital thermal infrared in the analysis. While this will be of use only in clear, relatively dry atmospheric conditions, the
Vandenberghe, Bart; Corpas, Livia; Bosmans, Hilde; Yang, Jie; Jacobs, Reinhilde
2011-08-01
The aim of this study was to determine image accuracy and quality for periodontal diagnosis using various X-ray generators with conventional and digital radiographs. Thirty-one in vitro periodontal defects were evaluated on intraoral conventional (E-, F/E-speed) and digital images (three indirect, two direct sensors). Standardised radiographs were made with an alternating current (AC), a high-frequency (HF) and a direct current (DC) X-ray unit at rising exposure times (20-160 ms at 20-ms intervals) with a constant 70 kV. Three observers assessed bone levels for comparison to the gold standard. Lamina dura, contrast, trabecularisation, crater and furcation involvements were evaluated. Irrespective of X-ray generator type, measurement deviations increased at higher exposure times for solid-state systems but decreased for photostimulable storage phosphor (PSP) systems. Accuracy for HF or DC was significantly higher than for AC (p < 0.0001), especially at low exposure times. At 0.5- to 1-mm clinical deviation, 27-53% and 32-55% dose savings were demonstrated when using HF or DC generators compared to AC, but only for PSP. No savings were found for solid-state sensors, indicating their higher sensitivity. The use of digital sensors compared to film allowed 15-90% dose savings using the AC tube, whilst solid-state sensors allowed approximately 50% savings compared to PSP, depending on tube type and threshold level. Accuracy of periodontal diagnosis increases when using HF or DC generators and/or digital receptors, with adequate diagnostic information at lower exposure times.
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
NASA Astrophysics Data System (ADS)
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras relative to that of conventional cameras (with photographic film). For example, the diffraction arising from the miniaturization of optical modules tends to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy of various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
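The quantity being measured reduces to a ratio of modulations; assuming sinusoidal test patterns such as an SLM can generate, the computation at one spatial frequency is:

```python
import numpy as np

def modulation(signal):
    # Michelson contrast of a fringe pattern: (max - min) / (max + min).
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf_at_frequency(input_pattern, output_pattern):
    # MTF at one spatial frequency: output modulation over input modulation.
    return modulation(output_pattern) / modulation(input_pattern)

x = np.linspace(0, 10 * np.pi, 1000)      # five fringe cycles
source = 0.5 + 0.5 * np.sin(x)            # pattern presented to the camera
captured = 0.5 + 0.2 * np.sin(x)          # attenuated pattern at the sensor
print(round(mtf_at_frequency(source, captured), 3))  # → 0.4
```

Sweeping the SLM's fringe frequency and repeating this measurement traces out the full MTF curve.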
Along-Track Reef Imaging System (ATRIS)
Brock, John; Zawada, Dave
2006-01-01
"Along-Track Reef Imaging System (ATRIS)" describes the U.S. Geological Survey's Along-Track Reef Imaging System, a boat-based sensor package for rapidly mapping shallow water benthic environments. ATRIS acquires high resolution, color digital images that are accurately geo-located in real-time.
Devadhasan, Jasmine Pramila; Kim, Sanghyo
2015-02-09
CMOS sensors are becoming a powerful tool in the biological and chemical fields. In this work, we introduce a new approach to quantifying various pH solutions with a CMOS image sensor. The CMOS image sensor-based pH measurement produces high-accuracy analysis, making it a truly portable and user-friendly system. A pH indicator-blended hydrogel matrix was fabricated as a thin film for accurate color development. A distinct red, green and blue (RGB) color change develops in the hydrogel film on applying various pH solutions (pH 1-14). A semi-quantitative pH estimate can be acquired by visual readout. Further, the CMOS image sensor captures the RGB color intensity of the film, and the hue value is converted into digital numbers with the aid of an analog-to-digital converter (ADC) to determine the pH ranges of solutions. A chromaticity diagram and Euclidean distance represent the RGB color space and the differentiation of pH ranges, respectively. This technique is applicable to sensing various toxic chemicals and chemical vapors in situ. Ultimately, the entire approach can be integrated into a smartphone and operated in a user-friendly manner.
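The hue extraction that drives such a classification can be sketched with the standard RGB-to-HSV transform; a minimal illustration (hypothetical function name), not the paper's calibrated mapping from hue to pH:

```python
import colorsys

def rgb_to_hue_degrees(r, g, b):
    # r, g, b are 8-bit channel values (0-255); hue is returned in degrees.
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

print(rgb_to_hue_degrees(255, 0, 0))   # pure red   → hue 0°
print(rgb_to_hue_degrees(0, 255, 0))   # pure green → hue ~120°
```

Distances between such hue (or full RGB) coordinates, e.g. the Euclidean distances mentioned above, then separate the pH ranges.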
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost and time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which algorithm performance agrees between simulated and field-collected imagery is the first step in validating the simulated-imagery procedure.
Full-field acoustomammography using an acousto-optic sensor.
Sandhu, J S; Schmidt, R A; La Rivière, P J
2009-06-01
In this Letter, the authors introduce a wide-field transmission ultrasound approach to breast imaging based on the use of a large-area acousto-optic (AO) sensor. Accompanied by a suitable acoustic source, such a detector could be mounted on a traditional mammography system and provide a mammography-like ultrasound projection image of the compressed breast in registration with the x-ray mammogram. The authors call the approach acoustography. The hope is that this additional information could improve the sensitivity and specificity of screening mammography. The AO sensor converts ultrasound directly into a visual image by virtue of the acousto-optic effect of the liquid crystal layer contained in the sensor. The image is captured with a digital video camera for processing, analysis, and storage. The authors perform a geometrical resolution analysis and also present images of a multimodality breast phantom imaged with both mammography and acoustography to demonstrate the feasibility of the approach. The geometric resolution analysis suggests that the technique could readily detect tumors 3 mm in diameter using 8.5 MHz ultrasound, with smaller tumors detectable at higher ultrasound frequencies, though depth penetration might then become a limiting factor. The preliminary phantom images show high contrast and compare favorably to digital mammograms of the same phantom. The authors have introduced and established, through phantom imaging, the feasibility of a full-field transmission ultrasound detector for breast imaging based on a large-area AO sensor. Of course, variations in the attenuation of connective, glandular, and fatty tissues will lead to images with a more cluttered anatomical background than those of the phantom imaged here. Acoustic coupling to the mammographically compressed breast, particularly at the margins, will also have to be addressed.
Sajjadi, Seyed Hadi; Khosravanifard, Behnam; Moazzami, Fatemeh; Rakhshan, Vahid; Esmaeilpour, Mozhgan
2016-12-01
The effect of image quality or dental specialties on the subjective judgment of facial beauty has not been evaluated in any study. This study assessed the effect of digital sensors and specialties on the perception of smile beauty. In the first phase of this double-blind clinical trial, 40 female smile photographs (taken from dental students) were evaluated by a panel of three prosthodontists, six orthodontists, and three specialists in restorative dentistry to select the most beautiful smiles. In the second phase, the 20 students having the most appealing smiles were again photographed in standard conditions, but this time with three different digital sensors: full-frame 21.1-megapixel, half-frame 18.0-megapixel, and compact 10.4-megapixel. The same panel judged smile beauty on a visual analog scale. The referees were blinded to the type of sensors, and the images were all coded. The data were analyzed using two-way ANOVA, Kruskal-Wallis, and Mann-Whitney U tests (α = 0.05 and 0.0167). The mean scores for full-frame, half-frame, and compact sensors were 6.70 ± 1.30, 4.56 ± 1.29, and 4.40 ± 1.39 [out of 10], respectively (Kruskal-Wallis p < 0.0001). The differences between the full-frame and the other sensors were statistically significant (Mann-Whitney p < 0.01); however, the difference between the half-frame and compact sensors was not statistically significant (p > 0.1). Sensors (ANOVA p < 0.00001) but not specialties (p = 0.687) affected the perception of beauty. According to the results of this study, image quality affected the perception of smile beauty. The full-frame sensor produced consistently better results and was recommended over half-frame and compact sensors. Dentists of different specialties might have similar standards of smile beauty, although this needs further assessment. © 2015 by the American College of Prosthodontists.
Image acquisition system using on sensor compressed sampling technique
NASA Astrophysics Data System (ADS)
Gupta, Pravir Singh; Choi, Gwan Seong
2018-01-01
Advances in CMOS technology have made high-resolution image sensors possible. These image sensors pose significant challenges in terms of the amount of raw data generated, energy efficiency, and frame rate. This paper presents a design methodology for an imaging system and a simplified image sensor pixel design to be used in the system so that the compressed sensing (CS) technique can be implemented easily at the sensor level. This results in significant energy savings as it not only cuts the raw data rate but also reduces transistor count per pixel; decreases pixel size; increases fill factor; simplifies analog-to-digital converter, JPEG encoder, and JPEG decoder design; decreases wiring; and reduces the decoder size by half. Thus, CS has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23% to 65%.
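The compressed-sensing principle the paper builds on can be illustrated with a small sketch: random linear measurements of a sparse signal recovered by orthogonal matching pursuit. This is a generic CS demonstration, not the authors' pixel-level circuit; the dimensions, sparsity, and amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 128, 48, 2         # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [1.5, -2.0]   # k-sparse signal

Phi = rng.normal(0, 1.0 / np.sqrt(m), (m, n))      # random sensing matrix
y = Phi @ x                                        # m << n compressed measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse signal."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the chosen support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(np.linalg.norm(x - x_hat))   # small reconstruction error
```

The data-rate saving is visible in the dimensions alone: 48 measurements stand in for 128 samples, which is the kind of reduction the sensor exploits at the pixel level.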
Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
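The conversion of circular omnidirectional images to conventional panoramic views mentioned above can be sketched as a polar-to-Cartesian remap; this is a minimal nearest-neighbour version, assuming a known mirror center and annulus radii, not the authors' real-time software.

```python
import numpy as np

def unwrap_circular(img, center, r_min, r_max, out_h, out_w):
    """Unwrap an annular omnidirectional image into a panoramic strip.

    Output rows sample radius (r_min..r_max) and columns sample azimuth
    (0..360 deg), using nearest-neighbour lookup into the source image.
    """
    cy, cx = center
    rows = np.linspace(r_min, r_max, out_h)
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r, t = np.meshgrid(rows, thetas, indexing="ij")
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

# Synthetic 200x200 image whose value equals distance from the center,
# so each unwrapped row should be nearly constant
yy, xx = np.mgrid[0:200, 0:200]
radial = np.hypot(yy - 100, xx - 100)
pano = unwrap_circular(radial, (100, 100), 20, 90, 64, 256)
print(pano.shape)   # (64, 256)
```

A production version would use bilinear interpolation and a radius mapping derived from the specific mirror profile (spherical, equidistant, or hyperboloidal) instead of a linear one.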
NASA Astrophysics Data System (ADS)
Cha, B. K.; Kim, J. Y.; Kim, Y. J.; Yun, S.; Cho, G.; Kim, H. K.; Seo, C.-W.; Jeon, S.; Huh, Y.
2012-04-01
In digital X-ray imaging systems, X-ray imaging detectors based on scintillating screens coupled to electronic devices such as charge-coupled devices (CCDs), thin-film transistor (TFT) arrays, and complementary metal-oxide-semiconductor (CMOS) flat-panel imagers have been introduced for general radiography, dental, mammography, and non-destructive testing (NDT) applications. Recently, large-area CMOS active-pixel sensors (APS) in combination with scintillation films have been widely used in a variety of digital X-ray imaging applications. We employed a scintillator-based CMOS APS image sensor for high-resolution mammography. In this work, both powder-type Gd2O2S:Tb and columnar-structured CsI:Tl scintillation screens of various thicknesses were fabricated and used to convert X-rays into visible light. These scintillating screens were directly coupled to a CMOS flat-panel imager with a 25 × 50 mm2 active area and a 48 μm pixel pitch for high-spatial-resolution acquisition. We used a W/Al mammographic X-ray source operated at 30 kVp. The imaging characteristics of the X-ray detector were measured and analyzed in terms of linearity with incident X-ray dose, modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE).
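The MTF measurement named above can be sketched by Fourier-transforming a line spread function; here a synthetic Gaussian LSF is used so the result can be checked against the analytic Gaussian MTF. The sampling pitch and width are illustrative values, not the detector's.

```python
import numpy as np

def mtf_from_lsf(lsf, pitch_mm):
    """Normalised MTF magnitude from a sampled line spread function."""
    otf = np.fft.rfft(lsf)
    mtf = np.abs(otf) / np.abs(otf[0])             # normalise to 1 at DC
    freqs = np.fft.rfftfreq(len(lsf), d=pitch_mm)  # spatial freq, cycles/mm
    return freqs, mtf

# Synthetic Gaussian LSF, sigma = 0.05 mm, sampled at a 0.005 mm pitch
pitch = 0.005
x = (np.arange(512) - 256) * pitch
sigma = 0.05
lsf = np.exp(-x**2 / (2 * sigma**2))

freqs, mtf = mtf_from_lsf(lsf, pitch)
# Analytic check: the MTF of a Gaussian LSF is exp(-2 * (pi * sigma * f)^2)
```

For a real detector the LSF would come from a slit or differentiated edge image, and the NPS and DQE follow from the same Fourier machinery applied to flat-field frames.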
Digital image registration method based upon binary boundary maps
NASA Technical Reports Server (NTRS)
Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.
1974-01-01
A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that, by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data accordingly can be repeated until the desired degree of registration is achieved. The method presented is based on binary boundary maps produced from the raw digital imagery rather than on the raw digital data itself.
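The local translation-estimation step can be sketched with phase correlation between a binary boundary map and its shifted copy; the paper does not spell out its matching procedure, so this is a generic substitute (circular shifts assumed, the rotation solve over several local areas omitted).

```python
import numpy as np

def translation_offset(ref, mov):
    """Estimate the integer (dy, dx) shift taking `ref` to `mov`
    by phase correlation (circular shifts assumed)."""
    F = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12                 # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                       # map peak location to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Binary boundary map: a rectangle outline, and a shifted copy of it
ref = np.zeros((64, 64))
ref[20:40, 20] = ref[20:40, 40] = 1
ref[20, 20:41] = ref[40, 20:41] = 1
mov = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)
print(translation_offset(ref, mov))   # (5, -3)
```

Repeating this over several local windows and fitting a rigid transform to the resulting displacement vectors gives the combined rotation-plus-translation estimate the abstract describes.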
NASA Technical Reports Server (NTRS)
1999-01-01
Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS active-pixel digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years: the CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.
A new photoconductor imaging system for digital radiography.
de Monts, H; Beaumont, F
1989-01-01
Amorphous selenium is a material often used in x-ray imaging systems. Its main application is in xeroradiography, where the sensor consists of a layer of selenium on a conductive substrate. The signal is a charge density on the surface, which is revealed by a toner or by an electrostatic probe for digitization. In the system described here, the sensor structure is different: the sensor is covered by an electrode, a thin layer of metal, which provides another interface. The reading system requires scanning a light beam, and the resolving power depends on the size of the beam. It is easier to scan a light beam than electrostatic probes, so a more compact system can be realized. The process has two phases: storage and reading. The time spent between the two phases reduces the quality of the image, so an in situ reading system, integrated into the radiographic machine, will be more efficient. The sensor also needs a good memory effect. We investigated different sensors based on a thin photoconductive layer between two electrodes to find a memory effect. This phenomenon has already been seen in Bi12SiO20 (B. Richard, "Contribution à l'étude d'un procédé d'imagerie radiologique utilisant le photoconducteur Bi12SiO20," Ph.D. thesis, Paris, 1987). In amorphous selenium with some dopants and some types of metallic contact, the memory effect is strong enough to realize a system. With 2 × 2 cm samples, a complete digital x-ray imaging system has been built. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Technical Reports Server (NTRS)
Farrokh, Babak; Rahim, Nur Aida Abul; Segal, Ken; Fan, Terry; Jones, Justin; Hodges, Ken; Mashni, Noah; Garg, Naman; Sang, Alex
2013-01-01
Three distinct strain measurement methods (foil resistance strain gages, fiber optic strain sensors, and three-dimensional digital image photogrammetry, which gives full-field strain and displacement measurements) were implemented to measure strains on the back and front surfaces of a longitudinally jointed curved test article subjected to edge-wise compression testing at NASA Goddard Space Flight Center, according to ASTM C364. A pre-test finite element analysis (FEA) was conducted to assess the ultimate failure load and predict the strain distribution pattern throughout the test coupon. The predicted strain contours were then utilized as guidelines for installing the strain measurement instrumentation. The foil resistance strain gages and fiber optic strain sensors were bonded on the specimen at locations with nearly the same analytically predicted strain values, and as close to each other as possible, so that comparisons between the strains measured by the strain gages, the fiber optic sensors, and the three-dimensional digital image photogrammetric system are relevant. The test article was loaded to failure (at 167 kN), at a compressive strain of 10,000 microstrain. As part of this study, the validity of the strains measured by the fiber optic sensors is examined against the foil resistance strain gage and three-dimensional digital image photogrammetric data, and comprehensive comparisons are made with the FEA predictions.
Integration of aerial remote sensing imaging data in a 3D-GIS environment
NASA Astrophysics Data System (ADS)
Moeller, Matthias S.
2003-03-01
For some years, sensor systems have been available that provide digital images of a new quality. In particular, aerial stereo scanners acquire digital multispectral images with an extremely high ground resolution of about 0.10-0.15 m and additionally provide a Digital Surface Model (DSM). Both imaging products can be used for detailed monitoring at scales up to 1:500. The processed, georeferenced multispectral orthoimages can be readily integrated into a GIS, making them useful for a number of applications. The DSM, derived from the forward- and backward-facing sensors of an aerial imaging system, provides a ground resolution of 0.5 m and can be used for 3D visualization purposes. In some cases it is essential to store the ground elevation as a Digital Terrain Model (DTM) and the heights of 3-dimensional objects in a separate database. Existing automated algorithms do not work precisely for the extraction of a DTM from an aerial scanner DSM. This paper presents a new approach that combines the visible image data and the DSM data to generate a DTM with reliable geometric accuracy. Existing cadastral data can be used as a knowledge base for the extraction of building heights in cities. These elevation data are the essential source for a GIS-based urban information system with a 3D visualization component.
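The idea of combining a DSM, a DTM, and cadastral footprints to extract building heights can be sketched as a normalized-DSM computation; the flat synthetic terrain and the 12 m block below are illustrative, not the paper's data or algorithm.

```python
import numpy as np

def building_height(dsm, dtm, footprint):
    """Mean above-ground height inside a cadastral footprint mask.

    The normalised DSM (nDSM = DSM - DTM) isolates object heights from
    the terrain, so averaging it over the footprint yields the building height.
    """
    ndsm = dsm - dtm
    return float(ndsm[footprint].mean())

# Synthetic 10x10 tiles: flat terrain at 100 m elevation, one 12 m building
dtm = np.full((10, 10), 100.0)
dsm = dtm.copy()
footprint = np.zeros((10, 10), bool)
footprint[3:7, 3:7] = True
dsm[footprint] += 12.0

print(building_height(dsm, dtm, footprint))   # 12.0
```

On real data the DTM under the footprint is unknown and must be interpolated from surrounding ground pixels, which is exactly where the cadastral knowledge base helps.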
Coded aperture detector: an image sensor with sub 20-nm pixel resolution.
Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick
2014-08-11
We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
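The scan-and-decode principle behind URA imaging can be illustrated in 1D with a length-7 maximal-length sequence, whose periodic cross-correlation with its ±1 counterpart is a scaled delta, the property URAs generalize to 2D. This is an analogy to the reconstruction-by-convolution step, not the paper's 20-nm two-dimensional URA.

```python
import numpy as np

# Length-7 m-sequence (LFSR polynomial x^3 + x + 1). Its 0/1 form serves as
# the "aperture" and its +/-1 form as the decoding pattern: their periodic
# cross-correlation is 4 * delta, so decoding is a single correlation.
A = np.array([1, 0, 0, 1, 0, 1, 1])      # open (1) / opaque (0) aperture
G = 2 * A - 1                            # decoding sequence
N = len(A)

x = np.array([3.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0])   # unknown 1D intensity

# Scanning the aperture: each shift k records the total transmitted intensity
y = np.array([x @ np.roll(A, -k) for k in range(N)])

# Reconstruction by correlation with the decoding pattern (scale (N+1)/2 = 4)
x_hat = np.array([y @ np.roll(G, -i) for i in range(N)]) / 4.0

print(np.allclose(x_hat, x))   # True
```

The single photodiode plus lateral scan in the paper plays the role of the `y` loop here; the "simple convolution" the abstract mentions is the correlation with `G`.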
A CMOS One-chip Wireless Camera with Digital Image Transmission Function for Capsule Endoscopes
NASA Astrophysics Data System (ADS)
Itoh, Shinya; Kawahito, Shoji; Terakawa, Susumu
This paper presents the design and implementation of a one-chip camera device for capsule endoscopes. The experimental chip integrates the functional circuits required for capsule endoscopes together with a digital image transmission function. The integrated functional blocks include an image array, a timing generator, a clock generator, a voltage regulator, a 10-bit cyclic A/D converter, and a BPSK modulator. The chip can operate autonomously with three pins (VDD, GND, and DATAOUT). A prototype image sensor chip with 320 × 240 effective pixels was fabricated in a 0.25 μm CMOS image sensor process, and autonomous imaging was demonstrated. The chip size is 4.84 mm × 4.34 mm. With a 2.0 V power supply, the analog part consumes 950 μW, and the total power consumption at 2 frames per second (fps) is 2.6 mW. Error-free image transmission over a distance of 48 cm at 2.5 Mbps, corresponding to 2 fps, was achieved with inductive coupling.
Overview of benefits, challenges, and requirements of wheeled-vehicle mounted infrared sensors
NASA Astrophysics Data System (ADS)
Miller, John Lester; Clayton, Paul; Olsson, Stefan F.
2013-06-01
Requirements for vehicle-mounted infrared sensors, especially as imagers evolve to high-definition (HD) format, will be detailed and analyzed. Lessons learned from integrations of infrared sensors on armored vehicles, unarmored military vehicles, and commercial automobiles will be discussed. Comparisons between sensors for driving and those for situation awareness, targeting, and other functions will be presented, and conclusions will be drawn regarding future applications and installations. New business requirements for more advanced digital image processing algorithms in the sensor system will be discussed; examples are smarter contrast/brightness adjustment algorithms, detail enhancement, intelligent blending (IR-Vis) modes, and augmented reality.
IR sensors and imagers in networked operations
NASA Astrophysics Data System (ADS)
Breiter, Rainer; Cabanski, Wolfgang
2005-05-01
"Network-centric warfare" is a common slogan describing an overall concept of the networked operation of sensors, information, and weapons to gain command and control superiority. For IR sensors, the integration and fusion of different channels, such as day/night or SAR images, and the ability to spread image data among various users are typical requirements. A concrete implementation is the German Army's future infantryman program IdZ, in which a group of ten soldiers forms a unit: every soldier is equipped with a personal digital assistant (PDA) for information display and a day photo camera, and each unit has a high-performance thermal imager. The challenge in the networked operation of such a unit is bringing the information together and distributing it over a capable network. AIM's thermal reconnaissance and targeting sight HuntIR, which was selected for the IdZ program, provides these capabilities via an optional wireless interface. Besides the global approach of network-centric warfare, network technology can also be an interesting solution for digital image data distribution and signal processing behind the FPA, replacing analog video networks or specific point-to-point interfaces. The resulting architecture can provide data fusion from, e.g., IR dual-band or IR multicolor sensors. AIM has participated in a German/UK collaboration program to produce a demonstrator for day/IR video distribution via Gigabit Ethernet for vehicle applications. In this study, Ethernet technology was chosen for the network implementation, and a set of electronics was developed for capturing video data from IR and day imagers and distributing it over Gigabit Ethernet. The demonstrator setup follows the requirements of current and future vehicles, which have a set of day and night imager cameras and a crew station with several crew members.
Replacing the analog video path with a digital video network also makes it easy to implement embedded training, by simply feeding the network with simulation data. The paper addresses the special capabilities, requirements, and design considerations of IR sensors and imagers in applications such as thermal weapon sights and UAVs for networked infantry forces.
2011-09-01
[Front matter: table of contents and acronym list. Acronyms: DEM, Digital Elevation Model; ENVI, Environment for Visualizing Images (version 4.7); HADR, Humanitarian and Disaster Relief; IfSAR, Interferometric Synthetic Aperture Radar.]
Design of a Low-Light-Level Image Sensor with On-Chip Sigma-Delta Analog-to- Digital Conversion
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.; Fossum, Eric R.
1993-01-01
The design and projected performance of a low-light-level active pixel sensor (APS) chip with semi-parallel analog-to-digital (A/D) conversion are presented. The individual elements have been fabricated and tested using MOSIS 2 μm CMOS technology, although the integrated system has not yet been fabricated. The imager consists of a 128 × 128 array of active pixels at a 50 μm pitch. Each column of pixels shares a 10-bit A/D converter based on first-order oversampled sigma-delta (ΣΔ) modulation. The 10-bit outputs of the converters are multiplexed and read out through a single set of outputs. A semi-parallel architecture is chosen to achieve 30 frames/second operation even at low light levels. The sensor is designed for less than 12 e- rms noise performance.
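A first-order oversampled sigma-delta modulator of the kind used per column can be simulated in a few lines; this behavioral sketch (constant input, ideal integrator and comparator, hypothetical 0.5 threshold) only illustrates that the density of the 1-bit output stream tracks the input.

```python
import numpy as np

def sigma_delta_1st(u, n_samples):
    """First-order sigma-delta modulator for a constant input u in [0, 1].

    The integrator accumulates the input minus the fed-back previous bit;
    the comparator emits a 1 whenever the integrator crosses the threshold.
    The mean of the returned 1-bit stream approximates u.
    """
    integrator, bits = 0.0, np.empty(n_samples, int)
    for i in range(n_samples):
        integrator += u - (bits[i - 1] if i else 0)   # feedback of last bit
        bits[i] = 1 if integrator >= 0.5 else 0
    return bits

bits = sigma_delta_1st(0.3, 10000)
print(bits.mean())   # close to 0.3
```

In the chip, a decimation filter turns this oversampled bitstream into the 10-bit word; here a plain average plays that role.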
Control Software for Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.
2006-01-01
Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
NASA Astrophysics Data System (ADS)
Jacobs, Alan M.; Cox, John D.; Juang, Yi-Shung
1987-01-01
A solid-state digital x-ray detector is described which can replace high resolution film in industrial radiography and has potential for application in some medical imaging. Because of the 10 micron pixel pitch on the sensor, contact magnification radiology is possible and is demonstrated. Methods for frame speed increase and integration of sensor to a large format are discussed.
A generic FPGA-based detector readout and real-time image processing board
NASA Astrophysics Data System (ADS)
Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant
2016-07-01
For space-based astronomical observations, it is important to have a mechanism to capture the digital output from a standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time on-chip image processor system that can be used for various image processing tasks. The FPGA board serves as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense), instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging, introduced during the 1970s, has since paved the way for established imaging techniques, of which digital mammography, phase-contrast imaging, and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be carried out simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were also experimentally acquired, with the statistical analysis of the data done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.
Process simulation in digital camera system
NASA Astrophysics Data System (ADS)
Toadere, Florin
2012-06-01
The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal and the color processing and rendering. We consider the image acquisition system to be linear, shift-invariant, and axial; the light propagation is orthogonal to the system. We use a spectral image processing algorithm to simulate the radiometric properties of a digital camera. In the algorithm we take into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by a multiple convolution of the different point spread functions of the optical components; we use a Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPEG compression. We reconstruct the noisy, blurred image by blending differently exposed images to reduce photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
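A drastically simplified version of such a pipeline, covering only Bayer sampling, a crude block demosaic, gray-world white balance, and gamma encoding, can be sketched as follows; the block choices and parameters are illustrative and do not reproduce the paper's algorithm.

```python
import numpy as np

def mosaic_rggb(rgb):
    """Sample an RGB image onto an RGGB Bayer pattern (one value per pixel)."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
    return raw

def demosaic_block(raw):
    """Crude 2x2-block demosaic: each Bayer quad yields one RGB value."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def gray_world_wb(img):
    """Gray-world white balance: scale channels to a common mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (means.mean() / means)

def gamma_encode(img, gamma=2.2):
    return np.clip(img, 0, 1) ** (1 / gamma)

rgb = np.random.default_rng(1).uniform(0, 1, (8, 8, 3))
out = gamma_encode(gray_world_wb(demosaic_block(mosaic_rggb(rgb))))
print(out.shape)   # (4, 4, 3)
```

Each stage corresponds to one block of the simulated camera chain; swapping the crude block demosaic for bilinear interpolation, or the gray-world balance for a calibrated color-correction matrix, changes only one function.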
NASA Astrophysics Data System (ADS)
Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith
2017-02-01
The "scientific" CMOS (sCMOS) camera architecture fundamentally differs from that of CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output generally passes through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform across all pixels, although quantum efficiency may vary spatially. In CMOS cameras, the charge-to-voltage conversion is separate for each pixel, and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor therefore includes pixel-to-pixel variability in read noise, electronic gain, offset, and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum-likelihood estimation. We measured the distributions and spatial maps of individual-pixel offset, dark current, read noise, linearity, photoresponse non-uniformity, and variance for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination, from dark conditions through multiple low light levels between 20 and 1,000 photons/pixel per frame to higher light conditions. We further show that using pixel variance for flat-field correction leads to errors in cameras with good factory calibration.
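The per-pixel offset and gain characterization described above can be sketched with a photon-transfer-style simulation; the pixel counts, gains, and noise levels below are made up for illustration and are not measurements of the ORCA-Flash4.0.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated sCMOS: per-pixel offset (DN), gain (DN per electron), read noise
n_pix, n_frames = 4, 5000
offset = np.array([100.0, 102.0, 98.0, 101.0])
gain = np.array([2.0, 2.1, 1.9, 2.05])
read_sigma = 1.5
phi = 100.0                                   # mean photoelectrons per frame

dark = offset + rng.normal(0, read_sigma, (n_frames, n_pix))
light = (offset
         + gain * rng.poisson(phi, (n_frames, n_pix))
         + rng.normal(0, read_sigma, (n_frames, n_pix)))

# Per-pixel calibration: offset from the dark frames; gain from the
# photon-transfer relation  shot-noise variance = gain * mean signal (in DN)
offset_hat = dark.mean(axis=0)
signal = light.mean(axis=0) - offset_hat
gain_hat = (light.var(axis=0) - dark.var(axis=0)) / signal

print(np.round(gain_hat, 2))   # close to the true per-pixel gains
```

The same mean/variance bookkeeping, applied per pixel over a stack of uniformly illuminated frames, is what underlies the offset, gain, and variance maps the study reports.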
Digital Simulation Of Precise Sensor Degradations Including Non-Linearities And Shift Variance
NASA Astrophysics Data System (ADS)
Kornfeld, Gertrude H.
1987-09-01
Realistic atmospheric and Forward-Looking Infrared Radiometer (FLIR) degradations were digitally simulated. Inputs to the routine are environmental observables and the FLIR specifications. It was possible to achieve realism in the thermal domain within acceptable computer time and random-access memory (RAM) requirements because a shift-variant recursive convolution algorithm that well describes thermal properties was invented, and because each picture element (pixel) carries radiative temperature, a materials parameter, and range and altitude information. The computer generation steps start with the image synthesis of an undegraded scene; atmospheric and sensor degradation follow. The final result is a realistic representation of an image seen on the display of a specific FLIR.
High Resolution Airborne Digital Imagery for Precision Agriculture
NASA Technical Reports Server (NTRS)
Herwitz, Stanley R.
1998-01-01
The Environmental Research Aircraft and Sensor Technology (ERAST) program is a NASA initiative that seeks to demonstrate the application of cost-effective aircraft and sensor technology to private commercial ventures. In 1997-98, a series of flight demonstrations and image acquisition efforts were conducted over the Hawaiian Islands using a remotely piloted solar-powered platform (Pathfinder) and a fixed-wing piloted aircraft (Navajo) equipped with a Kodak DCS450 CIR (color infrared) digital camera. As an ERAST Science Team Member, I defined a set of flight lines over the largest coffee plantation in Hawaii: the Kauai Coffee Company's 4,000 acre Koloa Estate. Past studies have demonstrated the applications of airborne digital imaging to agricultural management, but few studies have examined the usefulness of high-resolution airborne multispectral imagery with 10 cm pixel sizes. The Kodak digital camera was integrated with ERAST's Airborne Real Time Imaging System (ARTIS), which generated multiband CCD images consisting of 6 × 10^6 pixel elements. At the designated flight altitude of 1,000 feet over the coffee plantation, the pixel size was 10 cm. The study involved the analysis of imagery acquired on 5 March 1998 for the detection of anomalous reflectance values and for the definition of spectral signatures as indicators of tree vigor and treatment effectiveness (e.g., drip irrigation; fertilizer application).
Circuit design for the retina-like image sensor based on space-variant lens array
NASA Astrophysics Data System (ADS)
Gao, Hongxun; Hao, Qun; Jin, Xuefeng; Cao, Jie; Liu, Yue; Song, Yong; Fan, Fan
2013-12-01
The retina-like image sensor is based on the non-uniform sampling of the human eye and on log-polar coordinate theory. It offers high-quality data compression and elimination of redundant information. However, retina-like image sensors fabricated directly in CMOS have drawbacks such as high cost, low sensitivity, low signal output efficiency and inconvenient upgrading. This paper therefore proposes a retina-like image sensor based on a space-variant lens array, focusing on the circuit design that supports the whole system. The circuit includes the following parts: (1) a photo-detector array behind the lens array that converts optical signals to electrical signals; (2) a strobe circuit for time-gating of the pixels, with parallel paths for high-speed data transmission; (3) in every path, a high-precision digital potentiometer for I-V conversion, ratio normalization and sensitivity adjustment, a programmable gain amplifier for automatic gain control (AGC), and an A/D converter; (4) digital data displayed on an LCD and stored temporarily in DDR2 SDRAM; (5) a USB port to transfer the data to a PC; (6) overall control by an FPGA. This circuit offers advantages such as lower cost, larger pixels, convenient upgrading and higher signal output efficiency. Experiments have shown that the grayscale output of every pixel closely matches the target and that a non-uniform image of the target is obtained in real time. The circuit can provide adequate technical support to retina-like image sensors based on space-variant lens arrays.
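The space-variant, log-polar sampling layout that such a sensor mimics can be sketched numerically. The ring and sector counts and the radii below are arbitrary illustration values, not the paper's actual lens-array design:

```python
import numpy as np

def log_polar_samples(n_rings=8, n_sectors=16, r_min=1.0, r_max=64.0):
    """Centres of a retina-like log-polar sampling grid.

    Ring radii grow geometrically, so sampling is dense at the fovea
    and sparse at the periphery, which is what gives the layout its
    data-compression property.
    """
    radii  = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2 * np.pi * np.arange(n_sectors) / n_sectors
    r, a = np.meshgrid(radii, angles, indexing="ij")
    return r * np.cos(a), r * np.sin(a)   # Cartesian sample positions

xs, ys = log_polar_samples()
```

Mapping a Cartesian image through these sample points yields the log-polar representation in which rotation and scaling of the scene become simple shifts.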
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Sensors to Support the Soldier
2005-02-01
limited by the need to be man-portable. The Marine infantryman relies on mobility, aggressiveness, and training rather than elaborate equipment to...the absence of conventional GPS guidance, including man-portable inertial measurement units, and digital imaging sensors combined with image...cell phones and 802.11 networks shows that walls and floors are not impenetrable to wireless signals; it is a question of power, range, and frequency
Differential temperature stress measurement employing array sensor with local offset
NASA Technical Reports Server (NTRS)
Lesniak, Jon R. (Inventor)
1993-01-01
The instrument has a focal plane array of infrared sensors of the integrating type, such as a multiplexed device in which a charge proportional to the total number of photons reaching that sensor between read-out cycles is built up on a capacitor. The infrared sensors of the array are manufactured as part of an overall array which is part of a micro-electronic device. The sensor achieves greater sensitivity by applying a local offset to the output of each sensor before it is converted into a digital word. The offset applied to each sensor will typically be the sensor's average value, so that the digital signal periodically read from each sensor of the array corresponds to the portion of the signal that varies in time. With proper synchronization between the cyclical loading of the test object and the frame rate of the infrared array, the output of the A/D-converted signal will correspond to the stress-field-induced temperature variations. A digital lock-in operation may be performed on the output of each sensor in the array. The result is a test instrument which can rapidly form a precise image of the thermoelastic stresses in an object.
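The digital lock-in step can be sketched for a single pixel: multiply the pixel's time series by quadrature references at the cyclic load frequency and average, which recovers the amplitude and phase of the synchronous (stress-induced) component while rejecting noise. The frame rate, load frequency and signal values below are assumptions for illustration:

```python
import numpy as np

fs, f0, n = 1000.0, 25.0, 4000        # frame rate (Hz), load frequency (Hz), frames
t = np.arange(n) / fs

# Hypothetical pixel signal: mean level + small thermoelastic modulation + noise
rng = np.random.default_rng(1)
amp, phase = 0.5, 0.6
x = 100.0 + amp * np.cos(2 * np.pi * f0 * t + phase) + rng.normal(0, 0.2, n)

# Digital lock-in: project onto in-phase and quadrature references.
# Averaging over an integer number of load cycles cancels the DC level.
i = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
q = 2 * np.mean(x * -np.sin(2 * np.pi * f0 * t))
amp_est, phase_est = np.hypot(i, q), np.arctan2(q, i)
```

In the instrument this operation would run per pixel of the array, turning each pixel's frame sequence into one amplitude (stress magnitude) and one phase value.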
Biomimetic machine vision system.
Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael
2005-01-01
Real-time application of digital imaging in machine vision systems has proven prohibitive within control systems that employ low-power single processors, unless the scope of vision or the resolution of captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.
Avci, Oguzhan; Lortlar Ünlü, Nese; Yalçın Özkumur, Ayça; Ünlü, M. Selim
2015-01-01
Over the last decade, the growing need for disease diagnostics has stimulated rapid development of new technologies with unprecedented capabilities. Recent emerging infectious diseases and epidemics have revealed the shortcomings of existing diagnostic tools and the necessity for further improvements. Optical biosensors can lay the foundations for future-generation diagnostics by providing the means to detect biomarkers in a highly sensitive, specific, quantitative and multiplexed fashion. Here, we review an optical sensing technology, the Interferometric Reflectance Imaging Sensor (IRIS), and the relevant features of this multifunctional platform for quantitative, label-free and dynamic detection. We discuss two distinct modalities for IRIS: (i) low-magnification (ensemble biomolecular mass measurements) and (ii) high-magnification (digital detection of individual nanoparticles), along with their applications, including label-free detection of multiplexed protein chips, measurement of single nucleotide polymorphisms, quantification of transcription factor DNA binding, and high-sensitivity digital sensing and characterization of nanoparticles and viruses. PMID:26205273
Film cameras or digital sensors? The challenge ahead for aerial imaging
Light, D.L.
1996-01-01
Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at an 11 µm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid-state charge-coupled-device linear and area arrays can yield quality resolution (7 to 12 µm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and assembled into a homogeneous scene acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system of the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
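The 432-million-pixel figure can be checked with quick arithmetic, assuming the standard 9 × 9 inch (roughly 230 mm square) aerial film format that NAPP cameras use:

```python
frame_mm = 230.0          # assumed side of a standard 9 x 9 inch aerial frame
spot_um  = 11.0           # scan spot size quoted in the abstract

pixels_per_side = frame_mm * 1000 / spot_um   # ~20,900 spots across the frame
total_pixels    = pixels_per_side ** 2        # ~4.37e8, matching "432 million"
```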
Superconducting Digital Multiplexers for Sensor Arrays
NASA Technical Reports Server (NTRS)
Kadin, Alan M.; Brock, Darren K.; Gupta, Deepnarayan
2004-01-01
Arrays of cryogenic microbolometers and other cryogenic detectors are being developed for infrared imaging. If the signal from each sensor is amplified, multiplexed, and digitized using superconducting electronics, then this data can be efficiently read out to ambient temperature with a minimum of noise and thermal load. HYPRES is developing an integrated system based on SQUID amplifiers, a high-resolution analog-to-digital converter (ADC) based on RSFQ (rapid single flux quantum) logic, and a clocked RSFQ multiplexer. The ADC and SQUIDs have already been demonstrated for other projects, so this paper will focus on new results of a digital multiplexer. Several test circuits have been fabricated using Nb Josephson technology and are about to be tested at T = 4.2 K, with a more complete prototype in preparation.
Advances in biologically inspired on/near sensor processing
NASA Astrophysics Data System (ADS)
McCarley, Paul L.
1999-07-01
As electro-optic sensors increase in size and frame rate, the data transfer and digital processing resource requirements also increase. In many missions, the spatial area of interest is but a small fraction of the available field of view. Choosing the right region of interest, however, is a challenge and still requires an enormous amount of downstream digital processing resources. In order to filter this ever-increasing amount of data, we look at how nature solves the problem. The Advanced Guidance Division of the Munitions Directorate, Air Force Research Laboratory at Eglin AFB, Florida, has been pursuing research in the area of advanced sensor and image processing concepts based on biologically inspired sensory information processing. A summary of two 'neuromorphic' processing efforts is presented along with a seeker system concept utilizing this innovative technology. The Neuroseek program is developing a 256 × 256 two-color dual-band IRFPA coupled to an optimized silicon CMOS read-out and processing integrated circuit that provides simultaneous full-frame imaging in the MWIR/LWIR wavebands along with built-in biologically inspired sensor image processing functions. Concepts and requirements for future efforts of this kind are also discussed.
Chander, G.; Markham, B.L.; Helder, D.L.
2009-01-01
This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.
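The conversions the paper tabulates follow the standard forms below. The gain, bias and ESUN numbers in the usage lines are placeholders for illustration, not values from the paper's tables:

```python
import math

def dn_to_radiance(dn, gain, offset):
    """At-sensor spectral radiance: L = G * DN + B  (W m^-2 sr^-1 um^-1)."""
    return gain * dn + offset

def radiance_to_toa_reflectance(L, esun, sun_elev_deg, d_au):
    """TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_sz)),
    where theta_sz is the solar zenith angle and d the Earth-Sun
    distance in astronomical units."""
    theta_sz = math.radians(90.0 - sun_elev_deg)
    return math.pi * L * d_au ** 2 / (esun * math.cos(theta_sz))

# Illustrative numbers only; real band-specific G, B and ESUN values
# come from the paper's tables:
L   = dn_to_radiance(120, gain=0.775, offset=-6.2)
rho = radiance_to_toa_reflectance(L, esun=1536.0, sun_elev_deg=45.0, d_au=1.0)
```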
Measurement of beam profiles by terahertz sensor card with cholesteric liquid crystals.
Tadokoro, Yuzuru; Nishikawa, Tomohiro; Kang, Boyoung; Takano, Keisuke; Hangyo, Masanori; Nakajima, Makoto
2015-10-01
We demonstrate a sensor card with cholesteric liquid crystals (CLCs) for terahertz (THz) waves generated from a nonlinear crystal pumped by a table-top laser. A beam profile of the THz waves is successfully visualized as a color change by the sensor card without additional electronic devices, power supplies, or connecting cables. Above a power density of 4.3 mW/cm², the approximate beam diameter of the THz waves is measured using the hue image digitized from a photograph of the sensor card. The sensor card is low in cost, portable, and suitable for various situations such as THz imaging and alignment of THz systems.
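Deriving a hue map from a photograph of the card is straightforward; a minimal sketch using the standard library's colorsys, where the patch colors are toy values rather than calibrated CLC responses:

```python
import colorsys
import numpy as np

def hue_image(rgb):
    """Per-pixel hue in [0, 1) from an RGB array with channels in [0, 1]."""
    h = np.empty(rgb.shape[:2])
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])[0]
    return h

# A toy CLC patch shifting from red toward green as THz power density rises:
patch = np.array([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
hues = hue_image(patch)
# thresholding the hue map would then give the approximate beam extent
```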
NASA Astrophysics Data System (ADS)
Holland, S. Douglas
1992-09-01
A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near-film-quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD'), collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample-and-hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to ensure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed within the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
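The in-pixel frame-to-frame comparison can be sketched off-chip as a thresholded difference between consecutive frames; the array size, pixel values and threshold below are illustrative assumptions, not the sensor's parameters:

```python
import numpy as np

def detect_pulses(prev, curr, threshold):
    """Flag pixels whose frame-to-frame change exceeds an adjustable
    threshold, mimicking the in-pixel comparison described for the
    lightning sensor. Casting avoids unsigned wraparound."""
    return np.abs(curr.astype(np.int32) - prev.astype(np.int32)) > threshold

prev = np.full((8, 8), 50, dtype=np.uint16)
curr = prev.copy()
curr[3, 4] += 40                    # a faint, localized pulse
events = detect_pulses(prev, curr, threshold=25)
rows, cols = np.nonzero(events)     # on-chip logic would report these addresses
```

Running this comparison at 1 kHz on every pixel is what the on-chip digital processing makes feasible; off-chip, only the flagged addresses would need to be read out.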
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system
NASA Technical Reports Server (NTRS)
Slater, P. N.; Palmer, J. M. (Principal Investigator)
1984-01-01
The reduction of the data measured on July 8, 1984 at White Sands, New Mexico is summarized. The radiances incident at the entrance pupil of the LANDSAT 5 sensors have been computed for bands 1 to 4. When these are compared to the digital counts of the TM image, the ground-based calibration for this sensor will be obtained. The image was received from Goddard SFC and is presently being analyzed.
High responsivity CMOS imager pixel implemented in SOI technology
NASA Technical Reports Server (NTRS)
Zheng, X.; Wrigley, C.; Yang, G.; Pain, B.
2000-01-01
The availability of mature sub-micron CMOS technology and the advent of the new low-noise active pixel sensor (APS) concept enabled the development of low-power, miniature, single-chip CMOS digital imagers in the 1990s.
Wang, Tiantian; Kim, Sanghyo; An, Jeong Ho
2017-02-01
Loop-mediated isothermal amplification (LAMP) is considered one of the alternatives to conventional PCR; it is an inexpensive, portable diagnostic system with minimal power consumption. The present work describes the application of LAMP to real-time photon detection and quantitative analysis of nucleic acids, integrated with a disposable complementary metal-oxide semiconductor (CMOS) image sensor. This novel system works as an amplification-coupled detection platform, relying on a CMOS image sensor with the aid of a computerized circuitry controller for the temperature and light sources. The CMOS image sensor captures the light passing through the sensor surface and converts it into digital units using an analog-to-digital converter (ADC). The system monitors the real-time photon variation caused by the color changes during amplification. Escherichia coli O157 was used as a proof-of-concept target for quantitative analysis, and the results were compared with those for Staphylococcus aureus and Salmonella enterica to confirm the efficiency of the system. The system detected various DNA concentrations of E. coli O157 in a short time (45 min), with a detection limit of 10 fg/μL. The low-cost, simple, and compact design, with low power consumption, represents a significant advance in the development of portable, sensitive, user-friendly, real-time, and quantitative analytical tools for point-of-care diagnosis.
Increasing the UAV data value by an OBIA methodology
NASA Astrophysics Data System (ADS)
García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo
2017-10-01
Recently, there has been a noteworthy increase in the use of images acquired by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors carried on UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high-resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images acquired by a Parrot Sequoia multispectral sensor mounted on an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain the context and elevation information used to merge superpixels while eliminating objects such as trees, in order to generate a Digital Terrain Model (DTM) of the analyzed area. The obtained results show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
Interactive digital image manipulation system
NASA Technical Reports Server (NTRS)
Henze, J.; Dezur, R.
1975-01-01
The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true-color display monitor with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible, so there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data are fully supported, and there is no restriction on the number of dimensions. In this way, multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.
A digital ISO expansion technique for digital cameras
NASA Astrophysics Data System (ADS)
Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong
2010-01-01
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing rapidly, and the digital camera market has become a race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that are usually introduced after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images which are noisy, but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand-shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been performed on an input Bayer-pattern CFA image, the resulting Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our approach is evaluated by comparing SNR (Signal to Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are extended to a variety of target sensitivities.
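The statistical core of multi-frame fusion (averaging n short exposures raises SNR by roughly sqrt(n)) can be sketched as follows. This sketch omits the paper's ghost-artifact handling, chromatic noise removal and two-scale decomposition, and the numbers are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
signal, sigma, n_frames = 40.0, 8.0, 16

# n short exposures: each is noisy but sharp; averaging the stack
# preserves the signal while shrinking noise std by ~sqrt(n).
frames = signal + rng.normal(0, sigma, (n_frames, 64, 64))
fused  = frames.mean(axis=0)

snr_single = signal / frames[0].std()
snr_fused  = signal / fused.std()
# snr_fused / snr_single should be close to sqrt(16) = 4
```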
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Linne von Berg, Dale; Davidson, Morgan; Holt, Niel; Kruer, Melvin; Wilson, Michael L.
2010-04-01
EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS). EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared (LWIR) imaging sensors and was developed under the ONR funded FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50 lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with a LWIR bolometric imager providing precision geo-referenced and fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes of operation, and results from recent flight demonstrations.
NASA Astrophysics Data System (ADS)
Ikhsanti, Mila Izzatul; Bouzida, Rana; Wijaya, Sastra Kusuma; Rohmadi, Muttakin, Imamul; Taruno, Warsito P.
2017-02-01
This research explores the feasibility of a capacitance-to-digital converter and an impedance converter as the measurement module in an electrical capacitance tomography (ECT) system. The ECT sensor used was a cylindrical sensor with 8 electrodes. An absolute capacitance measurement system based on the Sigma-Delta capacitance-to-digital converter AD7746 has been shown to produce high-resolution measurements, whereas capacitance measurement over a wide frequency range is possible using the impedance converter AD5933. The measurement accuracy of both the AD7746 and the AD5933 was evaluated against an LCR meter reference. Biological matter, represented by water and oil, was treated as the object and reconstructed into an image using the linear back projection (LBP) algorithm.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Sensor gains and offsets complicate the subtraction process, and the presence of any uniform systematic transformation component in one of the two images distorts the subtraction results, requiring analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. Differencing the transformed images yields a percentile-rank-ordered difference. This process produces consistent terrain-change information even when the requirements for subtraction described above are relaxed. The technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
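A minimal sketch of the transform-then-difference idea: each image is replaced by its cumulative relative frequency (percentile rank), so a uniform gain/offset between scenes cancels while a genuine change stands out. The synthetic images below are illustrative:

```python
import numpy as np

def percentile_rank(img):
    """Cumulative-relative-frequency transform: each pixel's percentile rank."""
    order = img.flatten().argsort().argsort()     # rank of each pixel, 0..n-1
    return (order.reshape(img.shape) + 1) / img.size

rng = np.random.default_rng(3)
before = rng.normal(100, 10, (32, 32))
before[10, 10] = 100.0            # a mid-range pixel that will genuinely change
after = before * 1.5 + 20         # uniform systematic gain/offset change
after[10, 10] += 200              # one real terrain change

diff = percentile_rank(after) - percentile_rank(before)
# the monotonic gain/offset preserves all ranks, so only the changed
# pixel (and its small rank reshuffle) survives the difference
```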
The multifocus plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Lumsdaine, Andrew
2012-01-01
The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For the purpose of digital refocusing (which is one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses of different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a really wide range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.
Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia
2014-01-01
To meet the forecasting targets for key technology indicators in the flotation process, a BP neural network soft-sensor model is proposed, based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy. PMID:25133210
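As a hedged illustration of one of the feature families mentioned, the sketch below computes a gray-level co-occurrence matrix for a single offset and three common texture statistics. The toy image, the offset, and the chosen statistics are assumptions for demonstration, not the paper's exact pipeline:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one offset (dx, dy);
    a minimal version of the texture features used for froth images."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()                       # normalize to joint probabilities

def glcm_features(P):
    """Three standard texture statistics computed from a GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(((i - j) ** 2 * P).sum()),
        "energy":      float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(glcm_features(glcm(img, levels=4)))
```

In a real soft-sensor these statistics, concatenated with color and shape features, would form the input vector to the neural network.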
Comparison of the performance of intraoral X-ray sensors using objective image quality assessment.
Hellén-Halme, Kristina; Johansson, Curt; Nilsson, Mats
2016-05-01
The main aim of this study was to evaluate the performance of 10 individual sensors of the same make, using objective measures of key image quality parameters. A further aim was to compare 8 brands of sensors. Ten new sensors of 8 different models from 6 manufacturers (i.e., 80 sensors) were included in the study. All sensors were exposed in a standardized way using an X-ray tube voltage of 60 kVp and different exposure times. Sensor response, noise, low-contrast resolution, spatial resolution and uniformity were measured. Individual differences between sensors of the same brand were surprisingly large in some cases. There were clear differences in the characteristics of the different brands of sensors. The largest variations were found for individual sensor response for some of the brands studied. Also, noise level and low contrast resolution showed large variations between brands. Sensors, even of the same brand, vary significantly in their quality. It is thus valuable to establish action levels for the acceptance of newly delivered sensors and to use objective image quality control for commissioning purposes and periodic checks to ensure high performance of individual digital sensors. Copyright © 2016 Elsevier Inc. All rights reserved.
Li, Yun Bo; Li, Lian Lin; Xu, Bai Bing; Wu, Wei; Wu, Rui Yuan; Wan, Xiang; Cheng, Qiang; Cui, Tie Jun
2016-01-01
The programmable and digital metamaterials or metasurfaces presented recently have huge potentials in designing real-time-controlled electromagnetic devices. Here, we propose the first transmission-type 2-bit programmable coding metasurface for single-sensor and single- frequency imaging in the microwave frequency. Compared with the existing single-sensor imagers composed of active spatial modulators with their units controlled independently, we introduce randomly programmable metasurface to transform the masks of modulators, in which their rows and columns are controlled simultaneously so that the complexity and cost of the imaging system can be reduced drastically. Different from the single-sensor approach using the frequency agility, the proposed imaging system makes use of variable modulators under single frequency, which can avoid the object dispersion. In order to realize the transmission-type 2-bit programmable metasurface, we propose a two-layer binary coding unit, which is convenient for changing the voltages in rows and columns to switch the diodes in the top and bottom layers, respectively. In our imaging measurements, we generate the random codes by computer to achieve different transmission patterns, which can support enough multiple modes to solve the inverse-scattering problem in the single-sensor imaging. Simple experimental results are presented in the microwave frequency, validating our new single-sensor and single-frequency imaging system. PMID:27025907
An automated digital imaging system for environmental monitoring applications
Bogle, Rian; Velasco, Miguel; Vogel, John
2013-01-01
Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves of the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
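Wiener estimation of a spectrum from a small number of channel responses is a standard linear technique. The sketch below shows the textbook form (prior correlation matrix times the regularized pseudo-inverse of the forward model); the channel sensitivities `F`, the training spectra, and the noise level are illustrative placeholders, not the paper's calibration data:

```python
import numpy as np

def wiener_matrix(F, R_train, noise_var=0.0):
    """Wiener estimation matrix W such that r_hat = W @ v.
    F : (3, L) spectral sensitivities of the three channels.
    R_train : (n, L) training spectra supplying the prior correlation."""
    Kr = R_train.T @ R_train / len(R_train)     # (L, L) correlation prior
    G = F @ Kr @ F.T + noise_var * np.eye(F.shape[0])
    return Kr @ F.T @ np.linalg.inv(G)

# Toy example with L = 5 spectral bands
L = 5
rng = np.random.default_rng(1)
F = rng.random((3, L))
R_train = rng.random((20, L))
W = wiener_matrix(F, R_train)
r_true = R_train[0]
r_hat = W @ (F @ r_true)     # noiseless camera response -> estimated spectrum
print(r_hat.shape)           # (5,)
```

In the noiseless case the estimate reproduces the measured channel responses exactly (F @ r_hat equals F @ r_true), even though the full spectrum is only approximated from the prior.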
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process used to reconstruct full-color digital images from the incomplete color samples produced by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
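For context, the bilinear predictor that PID-style methods aim to improve on can be sketched as mask-weighted interpolation over an RGGB mosaic. This is a generic baseline sketch, not the PID algorithm itself:

```python
import numpy as np

def conv_same(a, k):
    """Same-size 2-D convolution with zero padding (symmetric kernel)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    ap = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros(a.shape)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * ap[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Baseline bilinear demosaicking of an RGGB Bayer mosaic -- the
    simple predictor that PID-style methods are designed to beat."""
    h, w = raw.shape
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1          # red sample sites
    masks[0::2, 1::2, 1] = 1          # green sites (even rows)
    masks[1::2, 0::2, 1] = 1          # green sites (odd rows)
    masks[1::2, 1::2, 2] = 1          # blue sample sites
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    out = np.zeros((h, w, 3))
    for c in range(3):
        # weighted average of known samples of this channel near each pixel
        out[..., c] = conv_same(raw * masks[..., c], k) / conv_same(masks[..., c], k)
    return out

raw = np.full((4, 4), 0.5)   # flat gray scene should demosaic to flat RGB
rgb = bilinear_demosaic(raw)
print(rgb.shape)             # (4, 4, 3)
```

A flat input reconstructing to a flat output is the basic correctness check; edge-aware predictors such as PID differ precisely where this baseline smears detail across edges.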
Direct digital conversion detector technology
NASA Astrophysics Data System (ADS)
Mandl, William J.; Fedors, Richard
1995-06-01
Future imaging sensors for the aerospace and commercial video markets will depend on low cost, high speed analog-to-digital (A/D) conversion to efficiently process optical detector signals. Current A/D methods place a heavy burden on system resources, increase noise, and limit the throughput. This paper describes a unique method for incorporating A/D conversion right on the focal plane array. This concept is based on Sigma-Delta sampling, and makes optimum use of the active detector real estate. Combined with modern digital signal processors, such devices will significantly increase data rates off the focal plane. Early conversion to digital format will also decrease the signal susceptibility to noise, lowering the communications bit error rate. Computer modeling of this concept is described, along with results from several simulation runs. A potential application for direct digital conversion is also reviewed. Future uses for this technology could range from scientific instruments to remote sensors, telecommunications gear, medical diagnostic tools, and consumer products.
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An analog-to-digital converter for on-chip focal-plane image sensor applications. The analog-to-digital converter utilizes a single charge integrating amplifier in a charge balancing architecture to implement successive approximation analog-to-digital conversion. This design requires minimal chip area and has high speed and low power dissipation for operation in the 2-10 bit range. The invention is particularly well suited to CMOS on-chip applications requiring many analog-to-digital converters, such as column-parallel focal-plane architectures.
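The successive-approximation loop underlying such a converter can be sketched behaviorally: each cycle tentatively sets one bit, keeps it if the comparison succeeds (here a simple threshold stands in for the charge-balancing comparator), and moves to the next lower bit. The voltages and bit widths below are illustrative assumptions:

```python
def sar_adc(v_in, v_ref, bits=8):
    """Successive-approximation A/D conversion: test one bit per cycle,
    MSB to LSB, keeping the bit if the trial code's analog equivalent
    does not exceed the input (behavioral sketch of charge balancing)."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)                  # tentatively set this bit
        if v_in >= v_ref * trial / (1 << bits):  # keep it if charge balances
            code = trial
    return code

print(sar_adc(0.5, 1.0, bits=8))    # 128  (mid-scale)
print(sar_adc(0.25, 1.0, bits=4))   # 4    (quarter-scale)
```

The loop runs once per bit, which is why a successive-approximation design needs only a single integrating amplifier and modest chip area compared with flash conversion.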
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
Microlens performance limits in sub-2 μm pixel CMOS image sensors.
Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B
2010-03-15
CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and of its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in the red, green, and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.
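The scaling intuition, though not the paper's rigorous electromagnetic analysis, can be illustrated with the diffraction-limited Airy spot size: once 2.44·λ·F# exceeds the pixel pitch, a conventional microlens can no longer concentrate light within a single pixel. The f-number and wavelength below are assumed values for illustration:

```python
def airy_spot_diameter(wavelength_um, f_number):
    """Airy disk diameter (first null to first null): 2.44 * lambda * F#."""
    return 2.44 * wavelength_um * f_number

# Green light (0.55 um) through an assumed f/2.8 microlens
spot = airy_spot_diameter(0.55, 2.8)
print(round(spot, 2), "um")   # ~3.76 um, already larger than a 1.4 um pixel
for pitch in (2.2, 1.75, 1.4, 1.1):
    print(pitch, "um pixel:",
          "diffraction-dominated" if spot > pitch else "geometrically limited")
```

This scalar estimate only flags when diffraction dominates; the paper's full-wave analysis is what establishes the 1.4 μm breakpoint for the specific three-metal-layer stack.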
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable in holographic microscopy, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Feng, Sheng; Lotz, Thomas; Chase, J Geoffrey; Hann, Christopher E
2010-01-01
Digital Image Elasto Tomography (DIET) is a non-invasive elastographic breast cancer screening technology based on image-based measurement of surface vibrations induced on a breast by mechanical actuation. Knowledge of the frequency response characteristics of a breast prior to imaging is critical to maximize the imaging signal and diagnostic capability of the system. A feasibility analysis is presented for a non-invasive, image-based modal analysis system that is able to robustly and rapidly identify resonant frequencies in soft tissue. Three images per oscillation cycle are enough to capture the behavior at a given frequency. Thus, a sweep over critical frequency ranges can be performed prior to imaging to determine the critical imaging settings of the DIET system and optimize its tumor detection performance.
The precision-processing subsystem for the Earth Resources Technology Satellite.
NASA Technical Reports Server (NTRS)
Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.
1972-01-01
Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Romanczyk, K. C.
1973-01-01
One of the most significant problems associated with the development of large orbiting astronomical telescopes is that of maintaining the very precise pointing accuracy required. A proposed solution to this problem utilizes dual-level pointing control, with the primary control system maintaining the telescope structure attitude stabilized within the field of view to the desired accuracy. In order to demonstrate the feasibility of optically stabilizing the star images to the desired accuracy, a regulating system has been designed and evaluated. The control system utilizes a digital star sensor and an optical star image motion compensator, both of which have been developed for this application. These components have been analyzed mathematically, analytical models have been developed, and hardware has been built and tested.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Development of a ground signal processor for digital synthetic array radar data
NASA Technical Reports Server (NTRS)
Griffin, C. R.; Estes, J. M.
1981-01-01
A modified APQ-102 sidelooking array radar (SLAR) in a B-57 aircraft test bed is used, with other optical and infrared sensors, in remote sensing of Earth surface features for various users at NASA Johnson Space Center. The video from the radar is normally recorded on photographic film and subsequently processed photographically into high resolution radar images. Using a high speed sampling (digitizing) system, the two receiver channels of cross-and co-polarized video are recorded on wideband magnetic tape along with radar and platform parameters. These data are subsequently reformatted and processed into digital synthetic aperture radar images with the image data available on magnetic tape for subsequent analysis by investigators. The system design and results obtained are described.
Image Processing Occupancy Sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence, number, location, and activity levels of occupants regardless of motion, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
Multi sensor satellite imagers for commercial remote sensing
NASA Astrophysics Data System (ADS)
Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.
2005-10-01
This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with panchromatic, multi-spectral, area, and hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as food supply, crop yield, and disaster monitoring in mind. The aim of these imagers is to achieve medium to high resolution (2.5m to 15m) spatial sampling, wide swaths (up to 45km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed and address the choice of detectors to achieve these performances. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power, and cost. A refractive imager with up to 6 spectral bands (6.25m GSD) and a catadioptric imager with panchromatic (2.7m GSD), multi-spectral (6 bands, 4.6m GSD), and hyperspectral (400nm to 2.35μm, 200 bands, 15m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing, and a high-speed LVDS data link to a mass memory unit.
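Correlated double sampling, mentioned for the Analogue Front-End, is simply the subtraction of a reset (reference) sample from the signal sample so that fixed offsets and reset noise common to both cancel. A minimal numeric sketch with made-up levels:

```python
import numpy as np

def cds(reset_samples, signal_samples):
    """Correlated double sampling: subtract the reset (reference) level
    from the signal level to cancel offsets common to both samples."""
    return np.asarray(signal_samples) - np.asarray(reset_samples)

offset = 0.12                           # arbitrary fixed per-pixel offset
reset  = np.array([0.0, 0.0, 0.0]) + offset
signal = np.array([0.30, 0.55, 0.80]) + offset
print(cds(reset, signal))               # offsets cancel: ~[0.3, 0.55, 0.8]
```

In hardware both samples are taken from the same pixel within one readout cycle, which is what makes the cancellation "correlated" rather than a mere average dark subtraction.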
Time-of-flight camera via a single-pixel correlation image sensor
NASA Astrophysics Data System (ADS)
Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua
2018-04-01
A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micro-mirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
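The 'four bucket' demodulation referenced above takes four samples per modulation period, 90° apart, and recovers the phase (and hence depth) from their differences. The sketch below uses one common sign convention; conventions vary between sensors, and the modulation frequency and target distance are assumed values:

```python
import math

def four_bucket_depth(a0, a1, a2, a3, f_mod):
    """'Four bucket' demodulation: four samples per modulation period,
    90 degrees apart, yield the phase and thus the time-of-flight depth."""
    phase = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    c = 3e8                                   # speed of light, m/s
    return c * phase / (4 * math.pi * f_mod)  # round-trip -> one-way depth

# Simulate the four buckets for a target at 1.5 m with 20 MHz modulation
f, c = 20e6, 3e8
true_phase = 4 * math.pi * f * 1.5 / c
buckets = [math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(round(four_bucket_depth(*buckets, f), 3))   # 1.5
```

The unambiguous range of this scheme is c / (2·f_mod), here 7.5 m; deeper targets alias back into that interval.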
Optomechanical System Development of the AWARE Gigapixel Scale Camera
NASA Astrophysics Data System (ADS)
Son, Hui S.
Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.
CMOS Imaging of Pin-Printed Xerogel-Based Luminescent Sensor Microarrays.
Yao, Lei; Yung, Ka Yi; Khan, Rifat; Chodavarapu, Vamsy P; Bright, Frank V
2010-12-01
We present the design and implementation of a luminescence-based miniaturized multisensor system using pin-printed xerogel materials which act as host media for chemical recognition elements. We developed a CMOS imager integrated circuit (IC) to image the luminescence response of the xerogel-based sensor array. The imager IC uses a 26 × 20 array (520 elements) of active pixel sensors, and each active pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. The imager includes a correlated double sampling circuit and a pixel address/digital control circuit; the image data are read out as a coded serial signal. The sensor system uses a light-emitting diode (LED) to excite the target-analyte-responsive luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 4 × 4 (16 elements) array of oxygen (O₂) sensors. Each group of 4 sensor elements in the array (arranged in a row) is designed to provide a different and specific sensitivity to the target gaseous O₂ concentration. This property of multiple sensitivities is achieved by using a strategic mix of two oxygen-sensitive luminophores, [Ru(dpp)₃]²⁺ and [Ru(bpy)₃]²⁺, in each pin-printed xerogel sensor element. The CMOS imager consumes an average power of 8 mW operating at a 1 kHz sampling frequency driven at 5 V. The developed prototype demonstrates a low-cost and miniaturized luminescence multisensor system.
A survey of current solid state star tracker technology
NASA Astrophysics Data System (ADS)
Armstrong, R. W.; Staley, D. A.
1985-12-01
This paper is a survey of the current state of the art in design of star trackers for spacecraft attitude determination systems. Specific areas discussed are sensor technology, including the current state-of-the-art solid state sensors and techniques of mounting and cooling the sensor, analog image preprocessing electronics performance, and digital processing hardware and software. Three examples of area array solid state star tracker development are presented - ASTROS, developed by the Jet Propulsion Laboratory, the Retroreflector Field Tracker (RFT) by Ball Aerospace, and TRW's MADAN. Finally, a discussion of solid state line arrays explores the possibilities for one-dimensional imagers which offer simplified scan control electronics.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Development of near surface seismic methods for urban and mining applications
NASA Astrophysics Data System (ADS)
Malehmir, Alireza; Brodic, Bojan; Place, Joachim; Juhlin, Christopher; Bastani, Mehrdad
2014-05-01
There is a great need to improve our understanding of the geological conditions in the shallow subsurface. Direct observations of the subsurface are cumbersome and expensive, and sometimes impossible. Urban and mining areas are especially challenging due to various sources of noise from traffic, buildings, cars, city trains, trams, bridges and high-voltage power lines. Access is also restricted both in time and space, which requires equipment that is versatile, fast to set up and pack away, and minimally disruptive. However, if properly designed and implemented, geophysical methods are capable of imaging detailed subsurface structures and can successfully be used to provide crucial information for site characterizations, infrastructure planning, brown- and near-field exploration, and mine planning. To address some of these issues Uppsala University, in collaboration with a number of public authorities, research organizations and industry partners, has recently developed a prototype broadband (0-800 Hz, based on digital sensors) multi-component seismic landstreamer system. The current configuration consists of three segments with twenty 3C sensors spaced 2 m apart and an additional segment with twenty 3C sensors spaced 4 m apart, giving a total streamer length of 200 m. These four segments can be towed in parallel or in series, which in combination with synchronized wireless and cabled sensors can address a variety of complex near surface problems. The system is especially geared for noisy environments and areas where high-resolution images of the subsurface are needed. The system has little sensitivity to electrical noise and measures sensor tilt (important in rough terrain), so that tilt can be corrected immediately during acquisition. Thanks to the digital sensors, the system can also be used for waveform tomography and multi-channel analysis of surface waves (MASW).
Both of these methods require low frequencies, which are often sacrificed in traditional landstreamers, which are constructed either for refraction and MASW methods or for reflection seismic imaging purposes. The landstreamer system, assembled in 2013, has so far been tested against planted 10- and 28-Hz coil-based sensors. Two preliminary surveys were performed in 2013, one for imaging the shallow (< 50 m) crystalline basement that controls mineralization at a location in northern Sweden and another for site characterization at a planned access tunnel in the city of Stockholm. The comparison test showed that the digital sensors on the streamer provided superior results (in terms of resolution and sensitivity to noise) compared with the planted geophones, suggesting that digital sensors are more suitable for urban and mining applications. In the Stockholm survey, the system was coupled to twelve 3C digital wireless sensors to cover areas where access was restricted by road traffic and existing city infrastructure. The wireless sensors collected data in a passive mode during the survey; these data were later harvested and merged with the active data using GPS time stamps (nanosecond accuracy). The system thus also allows a combination of active and passive seismic data acquisition if required. Preliminary results for the mining application show successful imaging of the shallow crystalline basement as a high-velocity (> 4000 m/s) medium. Clear refracted and reflected arrivals are present in raw shot gathers. Future efforts will be geared to (i) exploiting information recorded on the horizontal components in order to extract rock mechanic parameters and anisotropy information, (ii) the development and application of a multi-component source to complement the landstreamer system, and (iii) a number of tests at underground sites. Acknowledgments: Formas, SGU, BeFo, SBUF, Skanska, Boliden, FQM and NGI
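The timestamp-based merging of continuously recorded wireless data with the active shots can be illustrated with a minimal sketch. The function name, the single-trace simplification, and the sampling conventions are assumptions for illustration, not the project's actual acquisition software:

```python
import numpy as np

def harvest_shots(passive_t0, fs, passive_trace, shot_times, win_s):
    """Cut shot windows out of a continuously recorded (passive-mode) trace
    using GPS shot times.
    passive_t0: GPS time of the first sample (s); fs: sampling rate (Hz);
    shot_times: GPS times of the active shots; win_s: window length (s).
    Returns an array of shape (n_shots_found, n_samples_per_window)."""
    n_win = int(round(win_s * fs))
    gathers = []
    for t in shot_times:
        # map the GPS shot time to a sample index in the continuous record
        i0 = int(round((t - passive_t0) * fs))
        if 0 <= i0 and i0 + n_win <= len(passive_trace):
            gathers.append(passive_trace[i0:i0 + n_win])
    return np.array(gathers)
```

The same indexing logic, applied per channel, is what allows passive recordings to be merged with cabled active data after the survey.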
Maryam, Ehsani; Farida, Abesi; Farhad, Akbarzade; Soraya, Khafri
2013-11-01
Obtaining the proper working length in endodontic treatment is essential. The aim of this study was to compare the working length (WL) assessment of small-diameter K-files using two different digital imaging methods. The samples for this in-vitro experimental study consisted of 40 extracted single-rooted premolars. After access cavity preparation, ISO no. 6, 8, and 10 stainless steel K-files were inserted in the canals at three different lengths, evaluated in a blinded manner: at the level of the apical foramen (actual), 1 mm short of the apical foramen, and 2 mm short of the apical foramen. A digital caliper was used to measure the length of the files, which was considered the gold standard. Five observers (two oral and maxillofacial radiologists and three endodontists) examined the digital radiographs, which were obtained using PSP and CCD digital imaging sensors. The collected data were analyzed with SPSS 17 using repeated-measures paired t-tests. In WL assessment of small-diameter K-files, a statistically significant relationship was seen among the observers of the two digital imaging techniques (P<0.001). However, no significant difference was observed between the two digital techniques in WL assessment of small-diameter K-files (P>0.05). PSP and CCD digital imaging techniques were similar in WL assessment of canals using no. 6, 8, and 10 K-files.
CMOS Imaging of Temperature Effects on Pin-Printed Xerogel Sensor Microarrays.
Lei Yao; Ka Yi Yung; Chodavarapu, Vamsy P; Bright, Frank V
2011-04-01
In this paper, we study the effect of temperature on the operation and performance of xerogel-based sensor microarrays coupled to a complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC) that images the photoluminescence response from the sensor microarray. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors, and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. A correlated double sampling circuit and a pixel address/digital control/signal integration circuit are also implemented on-chip. The CMOS imager data are read out as a serial coded signal. The sensor system uses a light-emitting diode to excite target-analyte-responsive organometallic luminophores doped within discrete xerogel-based sensor elements. As a prototype, we developed a 3 × 3 (9 elements) array of oxygen (O2) sensors. Each group of three sensor elements in the array (arranged in a column) is designed to provide a different and specific sensitivity to the target gaseous O2 concentration. This property of multiple sensitivities is achieved by using a mix of two O2-sensitive luminophores in each pin-printed xerogel sensor element. The CMOS imager is designed to be low noise and consumes a static power of 320.4 μW and an average dynamic power of 624.6 μW when operating at a 100-Hz sampling frequency and a 1.8-V dc power supply.
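The correlated double sampling (CDS) step mentioned above can be illustrated numerically. This is a generic sketch of the principle, not the chip's circuit (which performs the subtraction in analog hardware): subtracting each pixel's reset-level read from its post-integration read cancels offsets common to both reads.

```python
import numpy as np

def cds(reset_samples, signal_samples):
    """Correlated double sampling: per-pixel subtraction of the reset-level
    sample from the post-integration sample. Fixed pixel offsets and reset
    (kTC) noise appear in both reads, so they cancel in the difference."""
    return np.asarray(signal_samples) - np.asarray(reset_samples)
```

For example, if every pixel carries an unknown offset, the difference recovers the photo-signal alone, which is why CDS strongly suppresses fixed-pattern noise.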
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
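The edge-oriented/bilinear decision can be sketched for a single missing green sample. This is a minimal illustration under stated assumptions (the threshold value, function shape, and the green-plane-only scope are mine; the paper's algorithm also handles the red/blue planes and image borders):

```python
import numpy as np

def interp_green(g, y, x, edge_thresh=20.0):
    """Interpolate the missing green value at (y, x) from its 4 neighbours
    in the green plane g. If the horizontal and vertical gradients differ
    by more than edge_thresh, the pixel is treated as an edge pixel and
    interpolated along the lower-gradient (edge) direction; otherwise a
    plain bilinear average of all four neighbours is used."""
    up, dn = g[y - 1, x], g[y + 1, x]
    lf, rt = g[y, x - 1], g[y, x + 1]
    dh, dv = abs(lf - rt), abs(up - dn)
    if abs(dh - dv) > edge_thresh:              # edge pixel: adapt to edge
        return (lf + rt) / 2.0 if dh < dv else (up + dn) / 2.0
    return (up + dn + lf + rt) / 4.0            # smooth region: bilinear
```

On a vertical edge this interpolates along the edge rather than across it, which is exactly how edge-adaptive demosaicing avoids the zipper artifacts of plain bilinear interpolation.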
Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling
NASA Astrophysics Data System (ADS)
Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.
2016-04-01
Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
NASA Technical Reports Server (NTRS)
Lei, Ning; Xiong, Xiaoxiong
2016-01-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and imager, observing radiative energy from the Earth in 22 spectral bands from 0.41 to 12 microns, which include 14 reflective solar bands (RSBs). Extending the formula used by the Moderate Resolution Imaging Spectroradiometer instruments, VIIRS currently determines the sensor aperture spectral radiance through a quadratic polynomial of its detector digital count. It has been known that for the RSBs the quadratic polynomial is not adequate over the design-specified spectral radiance region, and using a quadratic polynomial could drastically increase the errors in the polynomial coefficients, leading to possibly large errors in the determined aperture spectral radiance. In addition, it is very desirable to be able to extend the radiance calculation formula to correctly retrieve the aperture spectral radiance at levels beyond the design-specified range. In order to more accurately determine the aperture spectral radiance from the observed digital count, we examine a few polynomials of the detector digital count to calculate the sensor aperture spectral radiance.
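The idea of comparing a quadratic count-to-radiance polynomial against a higher-order one can be sketched with synthetic calibration data. The coefficients below are invented for illustration and are not VIIRS values; the point is only that when the true response has a cubic component, a quadratic fit leaves a larger residual than a cubic fit.

```python
import numpy as np

# Hypothetical calibration pairs: detector digital count dn vs known
# aperture spectral radiance L (arbitrary units); the assumed true
# response here is mildly cubic.
dn = np.linspace(0.0, 4000.0, 41)
L_true = 1e-3 * dn + 3e-8 * dn**2 + 2e-12 * dn**3

def fit_radiance(dn, L, degree):
    """Least-squares polynomial L(dn); returns the coefficients (highest
    power first, numpy.polyfit convention) and the RMS fit residual."""
    c = np.polyfit(dn, L, degree)
    rms = np.sqrt(np.mean((np.polyval(c, dn) - L) ** 2))
    return c, rms

_, rms_quadratic = fit_radiance(dn, L_true, 2)
_, rms_cubic = fit_radiance(dn, L_true, 3)
```

In operational calibration the choice of degree would of course be judged against the coefficient uncertainties as well, which is the concern the abstract raises.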
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
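A minimal two-module pipeline in the spirit described above can be sketched as follows, assuming gray-world as the illuminant estimator. The paper tunes and automatically selects among several estimation algorithms and optimizes the matrix jointly; the fixed gray-world choice, the identity-like matrix usage, and the function names here are illustrative only.

```python
import numpy as np

def gray_world_gains(raw):
    """Gray-world illuminant estimate: per-channel gains that equalize the
    channel means (von Kries diagonal correction)."""
    means = raw.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def color_correct(raw, M):
    """Module 1: illuminant correction via diagonal gains.
    Module 2: a 3x3 colour matrix M mapping the corrected sensor RGB
    into a standard colour space."""
    balanced = raw * gray_world_gains(raw)   # broadcasting over (H, W, 3)
    return balanced @ M.T
```

Because the illuminant estimate is error-prone, the paper's key point is that M should be optimized with the estimator's error statistics in mind rather than independently, so that module 2 does not amplify module 1's mistakes.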
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty to combine semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into the next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Imaging Radar Applications in the Death Valley Region
NASA Technical Reports Server (NTRS)
Farr, Tom G.
1996-01-01
Death Valley has had a long history as a testbed for remote sensing techniques (Gillespie, this conference). Along with visible-near infrared and thermal IR sensors, imaging radars have flown and orbited over the valley since the 1970's, yielding new insights into the geologic applications of that technology. More recently, radar interferometry has been used to derive digital topographic maps of the area, supplementing the USGS 7.5' digital quadrangles currently available for nearly the entire area. As for their shorter-wavelength brethren, imaging radars were tested early in their civilian history in Death Valley because it has a variety of surface types in a small area without the confounding effects of vegetation. One of the classic references of these early radar studies explained, in a semi-quantitative way, the response of an imaging radar to surface roughness near the radar wavelength, which typically ranges from about 1 cm to 1 m. This laid the groundwork for applications of airborne and spaceborne radars to geologic problems in arid regions. Radar's main advantages over other sensors stem from its active nature: supplying its own illumination makes it independent of solar illumination, and it can also control the imaging geometry more accurately. Finally, its long wavelength allows it to peer through clouds, eliminating some of the problems of optical sensors, especially in perennially cloudy and polar areas.
Binary CMOS image sensor with a gate/body-tied MOSFET-type photodetector for high-speed operation
NASA Astrophysics Data System (ADS)
Choi, Byoung-Soo; Jo, Sung-Hyun; Bae, Myunghan; Kim, Sang-Hwan; Shin, Jang-Kyoo
2016-05-01
In this paper, a binary complementary metal oxide semiconductor (CMOS) image sensor with a gate/body-tied (GBT) metal oxide semiconductor field effect transistor (MOSFET)-type photodetector is presented. The sensitivity of the GBT MOSFET-type photodetector, which was fabricated using the standard CMOS 0.35-μm process, is higher than that of the p-n junction photodiode, because the output signal of the photodetector is amplified by the MOSFET. A binary image sensor becomes more efficient when using this photodetector. Lower power consumption and higher operating speed are possible, compared to conventional image sensors using multi-bit analog-to-digital converters (ADCs). The frame rate of the proposed image sensor is over 2000 frames per second, which is higher than those of conventional CMOS image sensors. The output signal of an active pixel sensor is applied to a comparator and compared with a reference level; this level determines the 1-bit output data of the binary process. To obtain a video signal, the 1-bit output data is stored in the memory and read out by horizontal scanning. The proposed chip is composed of a GBT pixel array (144 × 100), a binary-process circuit, a vertical scanner, a horizontal scanner, and a readout circuit. The operation mode can be switched between binary mode and multi-bit mode.
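The 1-bit comparator readout, and one common way multi-level information can be recovered from binary frames by temporal averaging, can be sketched as a conceptual model. This is not the chip's circuit; the function names and the averaging scheme are illustrative assumptions.

```python
import numpy as np

def binary_readout(pixel_levels, reference):
    """Per-pixel comparator: 1 if the active-pixel output exceeds the
    reference level, else 0. Replacing a multi-bit ADC with this single
    decision is what enables very fast, low-power readout."""
    return (np.asarray(pixel_levels) > reference).astype(np.uint8)

def multiframe_intensity(frames):
    """Averaging many 1-bit frames over time yields a graded intensity
    estimate (temporal oversampling), trading frame rate for bit depth."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)
```

The >2000 fps binary frame rate quoted in the abstract is what makes this kind of rate/bit-depth trade attractive.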
Imaging Radar in the Mojave Desert-Death Valley Region
NASA Technical Reports Server (NTRS)
Farr, Tom G.
2001-01-01
The Mojave Desert-Death Valley region has had a long history as a test bed for remote sensing techniques. Along with visible-near infrared and thermal IR sensors, imaging radars have flown and orbited over the area since the 1970's, yielding new insights into the geologic applications of these technologies. More recently, radar interferometry has been used to derive digital topographic maps of the area, supplementing the USGS 7.5' digital quadrangles currently available for nearly the entire area. As for their shorter-wavelength brethren, imaging radars were tested early in their civilian history in the Mojave Desert-Death Valley region because it contains a variety of surface types in a small area without the confounding effects of vegetation. The earliest imaging radars to be flown over the region included military tests of short-wavelength (3 cm) X-band sensors. Later, the Jet Propulsion Laboratory began its development of imaging radars with an airborne sensor, followed by the Seasat orbital radar in 1978. These systems were L-band (25 cm). Following Seasat, JPL embarked upon a series of Space Shuttle Imaging Radars: SIR-A (1981), SIR-B (1984), and SIR-C (1994). The most recent in the series was the most capable radar sensor flown in space and acquired large numbers of data swaths in a variety of test areas around the world. The Mojave Desert-Death Valley region was one of those test areas, and was covered very well with 3 wavelengths, multiple polarizations, and at multiple angles. At the same time, the JPL aircraft radar program continued improving and collecting data over the Mojave Desert-Death Valley region. Now called AIRSAR, the system includes 3 bands (P-band, 67 cm; L-band, 25 cm; C-band, 5 cm). Each band can collect all possible polarizations in a mode called polarimetry. In addition, AIRSAR can be operated in the TOPSAR mode wherein 2 antennas collect data interferometrically, yielding a digital elevation model (DEM).
Both L-band and C-band can be operated in this way, with horizontal resolution of about 5 m and vertical errors less than 2 m. The findings and developments of these earlier investigations are discussed.
Uav Borne Low Altitude Photogrammetry System
NASA Astrophysics Data System (ADS)
Lin, Z.; Su, G.; Xie, F.
2012-07-01
In this paper, three major aspects of an Unmanned Aerial Vehicle (UAV) system for low altitude aerial photogrammetry are discussed: the flying platform, the imaging sensor system and the data processing software. First of all, according to the technical requirements on the lowest cruising speed, the shortest taxiing distance, the level of flight control and the performance in turbulent flying conditions, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on the load weight of a platform and the resolution of a sensor, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., the double-combined camera and the four-combined camera). Finally, a software package named MAP-AT, which takes into account the specifics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts particular effort into automatic extraction, automatic checking and manually assisted adding of tie points for images with large tilt angles. Based on the process for low altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, and the accuracies of the resulting aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.
Architecture and applications of a high resolution gated SPAD image sensor
Burri, Samuel; Maruyama, Yuki; Michalet, Xavier; Regazzoni, Francesco; Bruschini, Claudio; Charbon, Edoardo
2014-01-01
We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers. PMID:25090572
NASA Technical Reports Server (NTRS)
Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)
1975-01-01
The author has identified the following significant results. It was found that the high-speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general-purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgements and readily interact with the machine.
Computational imaging of sperm locomotion.
Daloglu, Mustafa Ugur; Ozcan, Aydogan
2017-08-01
Not only essential for scientific research, but also in the analysis of male fertility and for animal husbandry, sperm tracking and characterization techniques have been greatly benefiting from computational imaging. Digital image sensors, in combination with optical microscopy tools and powerful computers, have enabled the use of advanced detection and tracking algorithms that automatically map sperm trajectories and calculate various motility parameters across large data sets. Computational techniques are driving the field even further, facilitating the development of unconventional sperm imaging and tracking methods that do not rely on standard optical microscopes and objective lenses, which limit the field of view and volume of the semen sample that can be imaged. As an example, a holographic on-chip sperm imaging platform, only composed of a light-emitting diode and an opto-electronic image sensor, has emerged as a high-throughput, low-cost and portable alternative to lens-based traditional sperm imaging and tracking methods. In this approach, the sample is placed very close to the image sensor chip, which captures lensfree holograms generated by the interference of the background illumination with the light scattered from sperm cells. These holographic patterns are then digitally processed to extract both the amplitude and phase information of the spermatozoa, effectively replacing the microscope objective lens with computation. This platform has further enabled high-throughput 3D imaging of spermatozoa with submicron 3D positioning accuracy in large sample volumes, revealing various rare locomotion patterns. We believe that computational chip-scale sperm imaging and 3D tracking techniques will find numerous opportunities in both sperm related research and commercial applications. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. 
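The "replacing the objective lens with computation" step typically amounts to numerically back-propagating the recorded hologram to the sample plane; a common choice is the angular-spectrum method, sketched below under scalar-diffraction assumptions. This is a generic sketch, not the authors' reconstruction code.

```python
import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Propagate a complex field sampled on an n x n grid (pixel pitch dx)
    by a distance z using the angular-spectrum transfer function; z < 0
    back-propagates a recorded in-line hologram toward the sample plane,
    numerically standing in for the objective lens."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

The amplitude and phase of the returned field at the correct |z| are the quantities the abstract describes extracting for each spermatozoon; scanning z is also how the 3D position along the optical axis is refined.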
NASA Astrophysics Data System (ADS)
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points differ from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the exterior orientation parameters (EOPs) by correcting the attitude angle bias, and 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.
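The back-projection residual that drives the refinement can be written down for a simple pinhole/collinearity model. This is a simplified frame-camera sketch under assumed conventions (rotation R, perspective centre C, focal length f in pixel units); the actual CE-1/CE-2 model is push-broom, with per-line exterior orientation.

```python
import numpy as np

def backproject(Xg, R, C, f):
    """Project a ground point Xg (e.g. lunar body-fixed coordinates) into
    image coordinates with the collinearity equations: rotate into the
    camera frame, then apply the perspective division."""
    v = R @ (Xg - C)
    return f * v[:2] / v[2]

def residual(Xg, R, C, f, measured_xy):
    """Back-projection residual: distance between the projected image
    point and the measured one, i.e. the quantity the sensor-model
    refinement (attitude bias, CCD-array calibration) seeks to minimise."""
    return np.linalg.norm(backproject(Xg, R, C, f) - measured_xy)
```

In the paper's workflow, residuals like this one, averaged over many conjugate points, are what drop from over 20 pixels to 0.02 pixel after refinement.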
Brüllmann, D D; d'Hoedt, B
2011-05-01
The aim of this study was to illustrate the influence of digital filters on the signal-to-noise ratio (SNR) and modulation transfer function (MTF) of digital images. The article also addresses image pre-processing that may be beneficial for the production of clinically useful digital radiographs at lower radiation dose. Three filters, an arithmetic mean filter, a median filter and a Gaussian filter (standard deviation (SD) = 0.4), with kernel sizes of 3 × 3 pixels and 5 × 5 pixels, were tested. Synthetic images with exactly increasing amounts of Gaussian noise were created to obtain a linear regression of SNR before and after application of the digital filters. Artificial stripe patterns with defined numbers of line pairs per millimetre were used to calculate the MTF before and after application of the digital filters. The Gaussian filter with a 5 × 5 kernel size caused the highest noise suppression (SNR increased from 2.22, measured in the synthetic image, to 11.31 in the filtered image). The smallest noise reduction was found with the 3 × 3 median filter. The application of the median filters resulted in no changes in MTF at the different resolutions but did result in the deletion of smaller structures. The 5 × 5 Gaussian filter and the 5 × 5 arithmetic mean filter showed the strongest changes in MTF. The application of digital filters can improve the SNR of a digital sensor; however, the MTF can be adversely affected. As such, imaging systems should not be judged solely on their quoted spatial resolutions, because pre-processing may influence image quality.
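The SNR side of such an experiment can be mimicked with synthetic data: apply a 3 × 3 arithmetic mean filter to a noisy flat field and compare SNR before and after. This is a simplified sketch; the wrap-around edge handling and the particular SNR definition below are my assumptions, not necessarily those of the study.

```python
import numpy as np

def snr(signal, noisy):
    """SNR as the ratio of RMS signal to RMS noise (linear scale)."""
    noise = noisy - signal
    return np.sqrt(np.mean(signal ** 2) / np.mean(noise ** 2))

def mean_filter3(img):
    """3x3 arithmetic mean filter via shifted sums (edges wrap, which keeps
    the sketch short; a real implementation would pad instead)."""
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0
```

On a flat field, averaging 9 independent samples cuts the noise standard deviation by a factor of about 3, so SNR rises accordingly; the study's point is that on real images this same averaging also blurs fine detail, degrading the MTF.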
Digital radiography and caries diagnosis.
Wenzel, A
1998-01-01
Direct digital acquisition of intra-oral radiographs has been possible only in the last decade. Several studies have shown that, theoretically, there are a number of advantages of direct digital radiography compared with conventional film. Laboratory as well as controlled clinical studies are needed to determine whether new digital imaging systems alter diagnosis, treatment and prognosis compared with conventional methods. Most studies so far have evaluated their diagnostic performance only in laboratory settings. This review concentrates on what evidence we have for the diagnostic efficacy of digital systems for caries detection. Digital systems are compared with film, and those studies which have evaluated the effects on diagnostic accuracy of contrast and edge enhancement, image size, variations in radiation dose and image compression are reviewed together with the use of automated image analysis for caries diagnosis. Digital intra-oral radiographic systems seem to be as accurate as the currently available dental films for the detection of caries. Sensitivities are relatively high (0.6-0.8) for detection of occlusal lesions into dentine, with false positive fractions of 5-10%. A radiolucency in dentine is recognised as a good predictor for demineralisation. Radiography is of no value for the detection of initial (enamel) occlusal lesions. For detection of approximal dentinal lesions, the sensitivities, specificities and predictive values are fair, but they are very poor for lesions known to be confined to enamel. Very little documented information exists, however, on the utilization of digital systems in the clinic. It is not known whether dose is actually reduced with the storage phosphor system, or whether collimator size is adjusted to fit sensor size in the CCD-based systems. There is no evidence that the number of retakes has been reduced.
It is not known how many images are needed with the various CCD systems when compared with a conventional bitewing, how stable these systems are in daily clinical use, or whether proper cross-infection control can be maintained in relation to scanning the storage phosphor plates and handling the sensors and the cable. There is only sparse evidence that the enhancement facilities are used when interpreting images, and none that this has changed working practices or treatment decisions. The economic consequences for the patient, dentist and society require examination.
NASA Astrophysics Data System (ADS)
Baur, Jeffery W.; Slinker, Keith; Kondash, Corey
2017-04-01
Understanding the shear strain, viscoelastic response, and onset of damage within bonded composites is critical to their design, processing, and reliability. This presentation will discuss the multidisciplinary research that led to the conception, development, and demonstration of two methods for measuring the shear within a bonded joint: dual-plane digital image correlation (DIC) and a micro-cantilever shear sensor. The dual-plane DIC method was developed to measure the strain field on opposing sides of a transparent single-lap joint in order to spatially quantify the joint shear strain. The sensor consists of a single glass fiber cantilever beam with a radially-grown forest of carbon nanotubes (CNTs) within a capillary pore. When the fiber is deflected, the internal radial CNT array is compressed against an electrode within the pore and the corresponding decrease in electrical resistance is correlated with the external loading. When this small, simple, and low-cost sensor was integrated within a composite bonded joint and cycled in tension, the onset of damage prior to joint failure was observed. In a second sample configuration, both the dual-plane DIC and the hair sensor detected viscoplastic changes in the strain of the sample in response to continued loading.
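The abstract states that the resistance drop of the CNT array is "correlated with the external loading". One minimal way to express such a calibration (not the authors' method; the data points are hypothetical) is an ordinary least-squares line fit of resistance against load:

```python
def linear_fit(x, y):
    """Return slope and intercept of the least-squares line y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibration points: applied load (N) vs. measured resistance (kOhm)
loads      = [0.0, 0.5, 1.0, 1.5, 2.0]
resistance = [100.0, 90.0, 80.0, 70.0, 60.0]  # resistance falls as load rises
slope, intercept = linear_fit(loads, resistance)
print(slope, intercept)  # -20.0 kOhm/N, 100.0 kOhm at zero load
```

A real hair-sensor calibration would likely be nonlinear near contact onset; the linear fit only illustrates the correlation step.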
Low-voltage 96 dB snapshot CMOS image sensor with 4.5 nW power dissipation per pixel.
Spivak, Arthur; Teman, Adam; Belenky, Alexander; Yadid-Pecht, Orly; Fish, Alexander
2012-01-01
Modern "smart" CMOS sensors have penetrated into various applications, such as surveillance systems, bio-medical applications, digital cameras, cellular phones and many others. Reducing the power of these sensors continuously challenges designers. In this paper, a low power global shutter CMOS image sensor with Wide Dynamic Range (WDR) ability is presented. This sensor features several power reduction techniques, including a dual voltage supply, a selective power down, transistors with different threshold voltages, a non-rationed logic, and a low voltage static memory. A combination of all these approaches has enabled the design of the low voltage "smart" image sensor, which is capable of reaching a remarkable dynamic range, while consuming very low power. The proposed power-saving solutions have allowed the maintenance of the standard architecture of the sensor, reducing both the time and the cost of the design. In order to maintain the image quality, a relation between the sensor performance and power has been analyzed and a mathematical model, describing the sensor Signal to Noise Ratio (SNR) and Dynamic Range (DR) as a function of the power supplies, is proposed. The described sensor was implemented in a 0.18 um CMOS process and successfully tested in the laboratory. An SNR of 48 dB and DR of 96 dB were achieved with a power dissipation of 4.5 nW per pixel.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method of preserving cultural heritage items. High quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only for wider diffusion and online transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly texturized flat materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in reproduction work so as to preserve the original item properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass plate photographic black and white negatives.
Kafieh, Rahele; Shahamoradi, Mahdi; Hekmatian, Ehsan; Foroohandeh, Mehrdad; Emamidoost, Mostafa
2012-10-01
To carry out an in vivo and in vitro comparative pilot study to evaluate the precision of a newly proposed digital dental radiography setup. This setup was based on markers placed on an external frame to eliminate the measurement errors due to incorrect geometry in the relative positioning of cone, teeth and sensor. Five patients with previous panoramic images were selected to undergo the proposed periapical digital imaging for the in vivo phase. For the in vitro phase, 40 extracted teeth were replanted in dry mandibular sockets and periapical digital images were prepared. The standard reference for the real scale of the teeth was obtained through extracted-teeth measurements for the in vitro application and was calculated through panoramic imaging for the in vivo phase. The proposed image processing technique was applied to the periapical digital images to detect the incorrect geometry. The recognized error was inversely applied to the image and the modified images were compared to the correct values. The measurements after distortion removal were compared to our gold standards (results of panoramic imaging or measurements from extracted teeth) and showed an accuracy of 96.45% in the in vivo examinations and 96.0% in the in vitro tests. The proposed distortion removal method is able to identify possible inaccurate geometry during image acquisition and to apply the inverse transform to the distorted radiograph to obtain the correctly modified image. This can be particularly helpful in applications such as root canal therapy, implant surgical procedures and digital subtraction radiography, which depend on precise measurements.
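The inverse-transform idea above can be sketched as follows, assuming the acquisition distortion has been estimated as a 3×3 projective homography (the paper does not publish its transform; the matrix and points below are purely illustrative):

```python
def invert_3x3(m):
    """Invert a 3x3 matrix via the adjugate formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def apply_homography(hm, x, y):
    """Map an image point through a 3x3 homography (homogeneous coordinates)."""
    u = hm[0][0] * x + hm[0][1] * y + hm[0][2]
    v = hm[1][0] * x + hm[1][1] * y + hm[1][2]
    w = hm[2][0] * x + hm[2][1] * y + hm[2][2]
    return u / w, v / w

# Illustrative distortion: applying the inverse recovers the true geometry.
H = [[1.0, 0.1, 5.0], [0.05, 1.0, -3.0], [0.001, 0.0, 1.0]]
xd, yd = apply_homography(H, 10.0, 20.0)           # distorted position
xc, yc = apply_homography(invert_3x3(H), xd, yd)   # corrected back to (10, 20)
print(round(xc, 6), round(yc, 6))
```

In practice the homography would be estimated from the external frame markers before the inverse is applied to the whole radiograph.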
Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen
2011-01-01
Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic-gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise compared to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide a uniform diffuse light onto the film to simulate x-ray projection images. The system can be selected to run at up to 17.5 frames per second or even higher frame rate with binning. Integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling AD converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed using LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high resolution and high dynamic range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M×N array for arbitrarily larger FOV, yet with high resolution and large dynamic range maintained. PMID:23505330
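The tiling concept above, stitching adjacent sensor tiles into one wider field of view, can be sketched in miniature; real EMCCD stitching also requires gain matching and sub-pixel registration, which are omitted from this illustrative example:

```python
def stitch_rows(left, right, overlap):
    """Stitch two pixel rows that share `overlap` columns, averaging the overlap."""
    blended = [(l + r) / 2 for l, r in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]

# Two illustrative rows from adjacent tiles sharing one overlapping column:
left_tile  = [10, 12, 14, 16]
right_tile = [16, 18, 20, 22]
print(stitch_rows(left_tile, right_tile, overlap=1))
# [10, 12, 14, 16.0, 18, 20, 22]
```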
Propagation phasor approach for holographic image reconstruction
Luo, Wei; Zhang, Yibo; Göröcs, Zoltán; Feizi, Alborz; Ozcan, Aydogan
2016-01-01
To achieve high resolution and a wide field of view, digital holographic imaging techniques need to tackle two major challenges: phase recovery and spatial undersampling. Previously, these challenges were separately addressed using phase retrieval and pixel super-resolution algorithms, which utilize the diversity of different imaging parameters. Although existing holographic imaging methods can achieve large space-bandwidth products by performing pixel super-resolution and phase retrieval sequentially, they require large amounts of data, which might be a limitation in high-speed or cost-effective imaging applications. Here we report a propagation phasor approach, which for the first time combines phase retrieval and pixel super-resolution into a unified mathematical framework and enables the synthesis of new holographic image reconstruction methods with significantly improved data efficiency. In this approach, twin image and spatial aliasing signals, along with other digital artifacts, are interpreted as noise terms that are modulated by phasors that analytically depend on the lateral displacement between hologram and sensor planes, sample-to-sensor distance, wavelength, and the illumination angle. Compared to previous holographic reconstruction techniques, this new framework results in a five- to seven-fold reduction in the number of raw measurements, while still achieving a competitive resolution and space-bandwidth product. We also demonstrated the success of this approach by imaging biological specimens including Papanicolaou and blood smears. PMID:26964671
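The "phasor" language refers to multiplicative complex factors in the spatial-frequency domain. As a generic illustration (not the paper's exact formulation), the free-space angular-spectrum transfer function that links the sample and sensor planes is one such phasor, depending analytically on distance and wavelength:

```python
import cmath, math

def angular_spectrum_phasor(fx, fy, z, wavelength):
    """Free-space transfer function for spatial frequency (fx, fy) at distance z.

    Propagating components acquire a unit-magnitude phase factor;
    evanescent components (beyond 1/wavelength) decay exponentially.
    """
    arg = 1.0 / wavelength ** 2 - fx ** 2 - fy ** 2
    if arg < 0:  # evanescent: pure decay, no propagation
        return cmath.exp(-2 * math.pi * z * math.sqrt(-arg))
    return cmath.exp(2j * math.pi * z * math.sqrt(arg))

# Illustrative parameters: 530 nm illumination, 300 um sample-to-sensor distance
h = angular_spectrum_phasor(fx=1e5, fy=0.0, z=300e-6, wavelength=530e-9)
print(abs(h))  # propagating component: magnitude 1.0
```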
Commercial CMOS image sensors as X-ray imagers and particle beam monitors
NASA Astrophysics Data System (ADS)
Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.
2015-01-01
CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels with 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread-function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, the MT9M032, featuring 1472 × 1096 pixels with 2.2 × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1GHz SoC ARM Cortex-A8) on which a modified LINUX kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.
Simulation and ground testing with the Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Johnston, Albert S.; Bryan, Thomas C.; Book, Michael L.
2005-01-01
The Advanced Video Guidance Sensor (AVGS), an active sensor system that provides near-range 6-degree-of-freedom sensor data, has been developed as part of an automatic rendezvous and docking system for the Demonstration of Autonomous Rendezvous Technology (DART). The sensor determines the relative positions and attitudes between the active sensor and the passive target at ranges up to 300 meters. The AVGS uses laser diodes to illuminate retro-reflectors in the target, a solid-state imager to detect the light returned from the target, and image capture electronics and a digital signal processor to convert the video information into the relative positions and attitudes. The development of the sensor, through initial prototypes, final prototypes, and three flight units, has required a great deal of testing at every phase, and the different types of testing, their effectiveness, and their results, are presented in this paper, focusing on the testing of the flight units. Testing has improved the sensor's performance.
NASA Astrophysics Data System (ADS)
Haemisch, York; Frach, Thomas; Degenhardt, Carsten; Thon, Andreas
Silicon Photomultipliers (SiPMs) have emerged as a promising alternative to fast vacuum photomultiplier tubes (PMTs). A fully digital implementation of the Silicon Photomultiplier (dSiPM) has been developed in order to overcome the deficiencies and limitations of the analog SiPMs (aSiPMs) available so far. Our sensor is based on arrays of single photon avalanche photodiodes (SPADs) integrated in a standard CMOS process. Photons are detected directly by sensing the voltage at the SPAD anode using a dedicated cell electronics block next to each diode. This block also contains active quenching and recharge circuits as well as a one-bit memory for the selective inhibit of detector cells. A balanced trigger network is used to propagate the trigger signal from all cells to the integrated time-to-digital converter. As a consequence, photons are detected and counted as digital signals, making the sensor less susceptible to temperature variations and electronic noise. The integration with CMOS logic provides the added benefit of low power consumption and possible integration of data post-processing directly in the sensor. In this overview paper, we discuss the sensor architecture together with its characteristics, with a focus on scalability and practicability aspects for applications in medical imaging, high-energy physics and astrophysics.
Positional Quality Assessment of Orthophotos Obtained from Sensors Onboard Multi-Rotor UAV Platforms
Mesas-Carrascosa, Francisco Javier; Rumbao, Inmaculada Clavero; Berrocal, Juan Alberto Barrera; Porras, Alfonso García-Ferrer
2014-01-01
In this study we explored the positional quality of orthophotos obtained by an unmanned aerial vehicle (UAV). A multi-rotor UAV was used to obtain images using a vertically mounted digital camera. The flight was processed taking into account the photogrammetry workflow: perform the aerial triangulation, generate a digital surface model, orthorectify individual images and finally obtain a mosaic image or final orthophoto. The UAV orthophotos were assessed with various spatial quality tests used by national mapping agencies (NMAs). Results showed that the orthophotos satisfactorily passed the spatial quality tests and are therefore a useful tool for NMAs in their production flowchart. PMID:25587877
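One widely used NMA-style positional test is the NSSDA statistic: horizontal RMSE over ground checkpoints, scaled by 1.7308 to a 95% circular accuracy (valid when RMSE is roughly equal in x and y). A sketch with illustrative checkpoint errors, not data from this study:

```python
import math

def nssda_horizontal(errors_xy):
    """errors_xy: list of (dx, dy) checkpoint errors in metres.

    Returns (horizontal RMSE, NSSDA accuracy at the 95% confidence level).
    """
    rmse_r = math.sqrt(sum(dx * dx + dy * dy for dx, dy in errors_xy)
                       / len(errors_xy))
    return rmse_r, 1.7308 * rmse_r

# Illustrative checkpoint residuals between orthophoto and surveyed positions:
rmse, acc95 = nssda_horizontal([(0.03, 0.04), (0.0, 0.05), (0.05, 0.0)])
print(rmse, acc95)  # 0.05 m RMSE, ~0.087 m horizontal accuracy at 95%
```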
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Harvey, P.; Swift, R.
1975-01-01
An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 collinear tunable acousto-optic filter, a 61-inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data and final equipment performance limits are given.
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, it has become common to improve dim target detection performance by fusing information from different spectral images. In this paper, a dim target detection method based on salient-graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters were combined to construct a salient graph from a digital image. Then, a maximum-salience fusion strategy was designed to fuse the salient graphs from different spectral images. A top-hat filter was used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improved the probability of target detection and reduced the probability of false alarm on cluttered background images.
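The top-hat step named above subtracts a morphological opening (erosion followed by dilation) from the signal, suppressing slowly varying clutter while keeping small bright targets. A minimal 1-D sketch with a flat structuring element (an illustration of the technique, not the authors' implementation):

```python
def erode(sig, k):
    """Minimum over a sliding window of width k (flat structuring element)."""
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    """Maximum over a sliding window of width k."""
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_tophat(sig, k):
    """Signal minus its morphological opening: keeps small bright structures."""
    opened = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opened)]

# Flat background with one dim 2-pixel target: only the target survives.
signal = [1, 1, 1, 1, 1, 4, 4, 1, 1, 1]
print(white_tophat(signal, k=5))  # [0, 0, 0, 0, 0, 3, 3, 0, 0, 0]
```

The 2-D version used for images works the same way with a 2-D structuring element.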
Autonomous vision networking: miniature wireless sensor networks with imaging technology
NASA Astrophysics Data System (ADS)
Messinger, Gioia; Goldberg, Giora
2006-09-01
The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking have been proven, and its applications are countless; from security and chemical analysis to industrial monitoring, asset tracking and visual recognition, vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous: specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms to small, low cost, distributed sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before they are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes.
These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor. Image processing at the sensor node level may also be required for applications in security, asset management and process control. Due to the data bandwidth requirements posed on the network by video sensors, new networking protocols or video extensions to existing standards (e.g. Zigbee) are required. To this end, Avaak has designed and implemented an ultra-low power networking protocol designed to carry large volumes of data through the network. The low power wireless sensor nodes that will be discussed include a chemical sensor integrated with a CMOS digital camera, a controller, a DSP processor and a radio communication transceiver, which enables relaying of an alarm or image message to a central station. In addition to the communications, identification is very desirable; hence location awareness will later be incorporated into the system in the form of Time-Of-Arrival triangulation, via wide band signaling. While the wireless imaging kernel already exists, specific applications for surveillance and chemical detection are under development by Avaak as part of a co-funded program from ONR and DARPA. Avaak is also designing vision networks for commercial applications, some of which are undergoing initial field tests.
High Scalability Video ISR Exploitation
2012-10-01
Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available for purchase for use in the field. However, even if such a UAV sensor with a DC-4K was flown
A data-management system using sensor technology and wireless devices for port security
NASA Astrophysics Data System (ADS)
Saldaña, Manuel; Rivera, Javier; Oyola, Jose; Manian, Vidya
2014-05-01
Sensor technologies such as infrared sensors, hyperspectral imaging and video camera surveillance have proven viable in port security. Drawing from sources such as infrared sensor data, digital camera images and processed hyperspectral images, this article explores the implementation of a real-time data delivery system. In an effort to improve the manner in which anomaly detection data is delivered to interested parties in port security, this system explores how a client-server architecture can provide protected access to data, reports, and device status. Sensor data and hyperspectral image data will be kept in a monitored directory, where the system will link it to existing users in the database. Since this system will render processed hyperspectral images that are dynamically added to the server, which often occupy a large amount of space, the resolution of these images is trimmed down to around 1024×768 pixels. Any image change or data modification originating from a sensor will trigger a message to all users associated with it. These messages will be sent to the corresponding users through automatic email generation and through a push notification using Google Cloud Messaging for Android. Moreover, this paper presents the complete architecture for data reception from the sensors, processing and storage, and discusses how users of this system, such as port security personnel, can benefit from this service to receive secure real-time notifications if their designated sensors have detected anomalies and/or have remote access to results from processed hyperspectral imagery relevant to their assigned posts.
Ice Sheet Change Detection by Satellite Image Differencing
NASA Technical Reports Server (NTRS)
Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.
2010-01-01
Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM+ sensor on Landsat-7, the ALI sensor on EO-1, and the MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.
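The core differencing operation can be sketched as a thresholded per-pixel comparison of two coregistered, radiometrically matched scenes; the grids below are illustrative digital numbers, not MODIS data:

```python
def difference_map(img_a, img_b, threshold):
    """Flag pixels whose absolute difference exceeds a noise threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two illustrative 2x2 scenes of the same area at different dates:
scene_t0 = [[100, 102], [99, 140]]
scene_t1 = [[101, 101], [100, 110]]
print(difference_map(scene_t0, scene_t1, threshold=5))  # [[0, 0], [0, 1]]
```

In the paper's application the residual differences are further interpreted photoclinometrically as slope and elevation change, which this sketch does not attempt.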
Reed, Bryan W.; DeHope, William J.; Huete, Glenn; LaGrange, Thomas B.; Shuttlesworth, Richard M.
2016-02-23
An electron microscope is disclosed which has a laser-driven photocathode and an arbitrary waveform generator (AWG) laser system ("laser"). The laser produces a train of temporally-shaped laser pulses each being of a programmable pulse duration, and directs the laser pulses to the laser-driven photocathode to produce a train of electron pulses. An image sensor is used along with a deflector subsystem. The deflector subsystem is arranged downstream of the target but upstream of the image sensor, and has a plurality of plates. A control system having a digital sequencer controls the laser and a plurality of switching components, synchronized with the laser, to independently control excitation of each one of the deflector plates. This allows each electron pulse to be directed to a different portion of the image sensor, as well as to enable programmable pulse durations and programmable inter-pulse spacings.
Contact CMOS imaging of gaseous oxygen sensor array
Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.
2014-01-01
We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909
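Luminescence-quenching O2 sensors of this kind are commonly read out through the Stern-Volmer relation I0/I = 1 + Ksv·[O2]. The paper does not give its calibration constants, so the values below are hypothetical:

```python
def o2_from_intensity(i0, i, ksv):
    """Invert the Stern-Volmer relation I0/I = 1 + Ksv*[O2] for the O2 level.

    i0: unquenched luminescence intensity (no O2 present)
    i:  measured intensity; ksv: Stern-Volmer constant (per %O2)
    """
    return (i0 / i - 1.0) / ksv

# Hypothetical calibration: Ksv = 0.05 per %O2, unquenched intensity I0 = 1000.
# Intensity halved by quenching implies about 20% O2:
print(o2_from_intensity(i0=1000.0, i=500.0, ksv=0.05))  # 20.0
```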
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Ken D.; Quinn, Edward L.; Mauck, Jerry L.
The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy and reliability. This paper, which refers to a final report issued in 2013, demonstrates these benefits in direct comparisons of digital and analog sensor applications. Improved accuracy results from the superior operating characteristics of digital sensors. These include improvements in sensor accuracy, drift and other related parameters, which reduce total loop uncertainty and thereby increase safety and operating margins. An example instrument loop uncertainty calculation for a pressure sensor application is presented to illustrate these improvements. This is a side-by-side comparison of the instrument loop uncertainty for both an analog and a digital sensor in the same pressure measurement application. Similarly, improved sensor reliability is illustrated with a sample calculation for determining the probability of failure on demand, an industry-standard reliability measure. This looks at equivalent analog and digital temperature sensors to draw the comparison. The results confirm substantial reliability improvement with the digital sensor, due in large part to the ability to continuously monitor the health of a digital sensor so that problems can be immediately identified and corrected. This greatly reduces the likelihood of a latent failure condition of the sensor at the time of a design basis event. Notwithstanding the benefits of digital sensors, there are certain qualification issues that are inherent with digital technology, and these are described in the report. One major qualification impediment for digital sensor implementation is software common cause failure (SCCF).
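The instrument loop uncertainty comparison described can be illustrated with the standard root-sum-square (RSS) combination of independent error terms. The component magnitudes below are purely hypothetical placeholders, not values from the report:

```python
import math

def loop_uncertainty(terms):
    """Root-sum-square combination of independent uncertainty terms (% of span)."""
    return math.sqrt(sum(t * t for t in terms))

# Hypothetical terms (% of span): reference accuracy, drift, temperature
# effect, and signal-conditioning/ADC contribution in the analog loop.
analog = loop_uncertainty([0.50, 0.75, 0.50, 0.25])
digital = loop_uncertainty([0.25, 0.10, 0.10, 0.0])  # digital output adds no loop error

print(f"analog loop uncertainty:  {analog:.2f}% span")
print(f"digital loop uncertainty: {digital:.2f}% span")
```

With these illustrative numbers the digital loop uncertainty is roughly a quarter of the analog figure, which is the qualitative effect the report's pressure-sensor calculation demonstrates.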
Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.
2017-05-01
Filtered multispectral imaging is a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost, portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, the major cause being Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatial-spectral correlated errors. FPN correction is therefore critical to enhance crime scene image quality, and it is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain Gi,j and Dark Signal Non-Uniformity (DSNU) Zi,j are calculated. The conversion gain is also decomposed into four components: an FPN row component, an FPN column component, a defects component and the effective photo-response signal component. The conversion gain is then corrected by averaging out the FPN row and column components and the defects component, so that it is uniform across the sensor. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the spatial uniformity of the corrected image can be enhanced to seven times that of the raw image, and the larger the image DC value within the dynamic range, the greater the enhancement.
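The pixel-wise linear model described (digital count = conversion gain Gi,j × radiance + dark-signal offset Zi,j) and its inversion can be sketched as follows, with synthetic gain and DSNU maps standing in for the paper's calibration data:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64

# Synthetic per-pixel conversion gain G and dark-signal non-uniformity (DSNU) Z
gain = 1.0 + 0.05 * rng.normal(size=(h, w))   # G_ij, DN per unit radiance
dsnu = 10.0 + 2.0 * rng.normal(size=(h, w))   # Z_ij, dark offset in DN

radiance = 50.0                                # uniform (flat-field) scene
dc = gain * radiance + dsnu                    # raw digital counts, showing FPN

# Correction: invert the per-pixel model to estimate the incident radiance
radiance_hat = (dc - dsnu) / gain

print(dc.std(), radiance_hat.std())            # FPN spread vs. corrected spread
```

In this idealized, noise-free sketch the inversion removes the fixed-pattern spread entirely; on real data the residual is limited by temporal noise and calibration accuracy, which is why the paper reports a finite (roughly sevenfold) uniformity improvement.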
Characterization techniques for incorporating backgrounds into DIRSIG
NASA Astrophysics Data System (ADS)
Brown, Scott D.; Schott, John R.
2000-07-01
The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms requires well-characterized field collection campaigns that can be time- and cost-prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement that reduces the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometric models. However, background objects (vegetation, infrastructure, etc.) account for the majority of real-world scene pixels, and applying target-building techniques to them is time- and resource-prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
High-speed particle tracking in microscopy using SPAD image sensors
NASA Astrophysics Data System (ADS)
Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.
2018-02-01
Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100 kfps), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life-science cellular imaging. In this work we consider the use of novel binary SPAD arrays for high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and of intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live-cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
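The external frame-summation scheme for binary SPAD pixels can be sketched numerically: each 1-bit frame records whether at least one photon arrived at a pixel, and summing many frames yields a multi-bit intensity image. A synthetic Poisson-rate example (illustrative only, not the actual sensor data):

```python
import numpy as np

rng = np.random.default_rng(2)

n_frames = 1000                    # number of binary frames summed off-chip
rate = np.array([[0.01, 0.1],      # mean photons per pixel per frame
                 [0.5, 2.0]])

# Each binary frame: a pixel outputs 1 if >= 1 photon arrives in the exposure
photons = rng.poisson(rate, size=(n_frames, 2, 2))
binary_frames = (photons >= 1).astype(np.uint16)

intensity = binary_frames.sum(axis=0)          # summed multi-bit image

# Firing probability p = 1 - exp(-rate); invert it to estimate the photon rate
p_hat = intensity / n_frames
rate_hat = -np.log(1.0 - p_hat)
print(np.round(rate_hat, 2))
```

The inversion step shows why binary pixels saturate gracefully: even at a mean of 2 photons per frame, the photon rate can still be recovered from the firing probability as long as it stays below 1.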
NASA Astrophysics Data System (ADS)
Armstrong, Roy A.; Singh, Hanumant
2006-09-01
Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover, as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the information required for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image-based navigation, and multi-sensor fusion of acoustic and optical data.
Research-grade CMOS image sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Saint-Pe, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Martin-Gonthier, Philippe; Corbiere, Franck; Belliot, Pierre; Estribeau, Magali
2004-11-01
Imaging detectors are key elements of optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better behaviour under radiation...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS began to be used successfully for more and more demanding space applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this paper presents the existing and foreseen ways to reach high-level electro-optical performance with CIS. The development and performance of CIS prototypes built using an imaging CMOS process are presented in the corresponding section.
Shilemay, Moshe; Rozban, Daniel; Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S; Yadid-Pecht, Orly; Abramovich, Amir
2013-03-01
Inexpensive millimeter-wavelength (MMW) optical digital imaging raises the challenge of evaluating imaging performance and image quality, because the electromagnetic wavelengths and pixel sensor sizes are 2 to 3 orders of magnitude larger than those of ordinary thermal or visual imaging systems, and because of the noisiness of the inexpensive glow discharge detectors that compose the focal-plane array. This study quantifies the performance of this MMW imaging system. Its point-spread function and modulation transfer function were investigated. The experimental results and the analysis indicate that the image quality of this MMW imaging system is limited mostly by the noise, and that the blur is dominated by the pixel sensor size. Therefore, the MMW image might be improved by oversampling, given that noise reduction is achieved. A demonstration of MMW image improvement through oversampling is presented.
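The premise that oversampling helps "given that noise reduction is achieved" rests on noise averaging: averaging N statistically independent measurements reduces uncorrelated noise by a factor of √N. A generic illustration with synthetic data (not the glow-discharge detector array itself):

```python
import numpy as np

rng = np.random.default_rng(3)

scene = np.full((8, 8), 100.0)        # noiseless scene value (arbitrary units)
sigma = 20.0                          # per-frame noise std (noise-limited sensor)
n = 64                                # number of independent frames averaged

frames = scene + rng.normal(0.0, sigma, size=(n, 8, 8))
avg = frames.mean(axis=0)

# Averaging 64 frames should cut the noise std by about sqrt(64) = 8
print(frames[0].std(), avg.std())
```

Once the noise floor is lowered this way, the finer sampling grid can actually resolve detail that the pixel-size-dominated blur would otherwise leave buried in noise.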
Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416
The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors
NASA Astrophysics Data System (ADS)
Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.
2015-12-01
Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach. Consequently, although range-based methods are generally more accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the model obtained with image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been devoted to identifying suitable software and algorithms for achieving an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to identify an appropriate combination of sensor and software that provides a complete model with the highest accuracy.
To do this, the software packages used in previous studies were compared and the most popular in each category were selected (Arc 3D, Visual SfM, Sure, Agisoft). Four small objects with distinct geometric properties and particular complexities were chosen, and accurate models of them, serving as reliable ground truth, were created using an ATOS Compact Scan 2M 3D scanner. Images were taken with a Fujifilm Real 3D stereo camera, an Apple iPhone 5 and a Nikon D3200 professional camera, and three-dimensional models of the objects were generated with each software package. Finally, a comprehensive comparison of the detailed results on the data set showed that the best combination of software and sensor for generating three-dimensional models depends directly on the object shape as well as on the expected accuracy of the final model. In general, the best quantitative and qualitative results were obtained with the Nikon D3200 professional camera, with the Fujifilm Real 3D stereo camera and the Apple iPhone 5 second and third, respectively, in this comparison. On the other hand, Visual SfM, Sure and Agisoft competed closely for the most accurate and complete model of the objects, and the best package differed according to the geometric properties of the object.
A portfolio of products from the rapid terrain visualization interferometric SAR
NASA Astrophysics Data System (ADS)
Bickel, Douglas L.; Doerry, Armin W.
2007-04-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor was built by Sandia National Laboratories for the Joint Programs Sustainment and Development (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieved better than HRTe Level IV position accuracy in near real-time. The system was flown on a deHavilland DHC-7 Army aircraft. This paper presents a collection of images and data products from the Rapid Terrain Visualization interferometric synthetic aperture radar. The imagery includes orthorectified images and DEMs from the RTV interferometric SAR radar.
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, some arrangements for applying Noise Reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data available from the image sensor first be interpolated using a Color Filter Array Interpolation (CFAI) method. Alternatively, the raw Bayer-matrix image data can be processed directly. The advantages and disadvantages of both processing orders, before (pre-) and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid and median-rational filters. The comparison is based on the quality of the output image, the processing-power requirements and the amount of memory needed. A solution that improves the preservation of details when NR filtering is applied before the CFAI is also proposed.
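Filtering before CFA interpolation means each Bayer colour plane must be filtered separately, because adjacent pixels in the raw mosaic belong to different colour channels. A minimal pre-CFAI sketch with a 3×3 median filter over an assumed RGGB pattern (the paper's filters are more elaborate):

```python
import numpy as np

def median3(plane):
    """3x3 median filter over a 2D array, using edge padding at the borders."""
    h, w = plane.shape
    p = np.pad(plane, 1, mode="edge")
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def denoise_bayer_rggb(raw):
    """Median-filter each colour plane of an RGGB Bayer mosaic independently."""
    out = raw.astype(float).copy()
    for di, dj in [(0, 0), (0, 1), (1, 0), (1, 1)]:  # R, G1, G2 and B sites
        out[di::2, dj::2] = median3(raw[di::2, dj::2].astype(float))
    return out

raw = np.full((16, 16), 128.0)
raw[5, 5] = 255.0                    # impulse-noise pixel on one colour site
clean = denoise_bayer_rggb(raw)
print(raw[5, 5], clean[5, 5])        # impulse removed by the plane-wise median
```

Filtering the planes independently keeps colour channels from bleeding into each other, which is one reason pre-CFAI filtering can preserve detail that a post-interpolation filter would smear.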
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings, obtained by projecting a reference grating onto an object under test, are subjected to digital analysis. The electronic analysis procedures for deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation can be performed. Based on the digital moire methods, we have developed a practical measurement system with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
Adaptation of reference volumes for correlation-based digital holographic particle tracking
NASA Astrophysics Data System (ADS)
Hesseling, Christina; Peinke, Joachim; Gülker, Gerd
2018-04-01
Numerically reconstructed reference volumes tailored to particle images are used for particle position detection by means of three-dimensional correlation. After a first tracking of these positions, the experimentally recorded particle images are retrieved as a posteriori knowledge about the particle images in the system. This knowledge is used for further refinement of the detected positions. A transparent description of the individual algorithm steps, including results obtained with experimental data, completes the paper. The work employs extraordinarily small particles, smaller than the pixel pitch of the camera sensor. It is the first approach known to the authors that combines numerical knowledge about particle images with particle images retrieved from the experimental system into an iterative particle tracking approach for digital holographic particle tracking velocimetry.
NASA Astrophysics Data System (ADS)
Trolinger, James D.; Dioumaev, Andrei K.; Ziaee, Ali; Minniti, Marco; Dunn-Rankin, Derek
2017-08-01
This paper describes research that demonstrated gated, femtosecond, digital holography, enabling 3D microscopic viewing inside dense, almost opaque sprays, and providing a new and powerful diagnostic capability for viewing fuel atomization processes never seen before. The method works by exploiting the extremely short coherence and pulse length (approximately 30 micrometers in this implementation) provided by a femtosecond laser, combined with digital holography, to eliminate multiple and wide-angle scattered light from particles surrounding the injection region, which normally obscures the image of interest. Photons that follow a path differing in length by more than 30 micrometers from a straight path through the field to the sensor do not contribute to the holographic recording of photons that travel in a near-straight path (ballistic and "snake" photons). To further enhance the method, off-axis digital holography was incorporated to improve the signal-to-noise ratio and image-processing capability in reconstructed images by separating the conjugate images, which overlap and interfere in conventional in-line holography. This also enables digital holographic interferometry. Fundamental relationships and limitations were also examined. The project is a continuing collaboration between MetroLaser and the University of California, Irvine.
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical explanation of the replication problem based on Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.
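The replication effect can be illustrated in one dimension: recording only every second sample, as a single colour channel of a Bayer mosaic does, halves the effective sampling rate, so the spectrum of the hologram fringes acquires replicas shifted by half the original sampling frequency. A minimal sketch (a toy demonstration, not the authors' full derivation):

```python
import numpy as np

n = 256
x = np.arange(n)
fringe = np.cos(2 * np.pi * 16 * x / n)    # hologram-like fringe (FFT bin 16)

# Monochrome sensor: every pixel records the fringe
spec_full = np.abs(np.fft.fft(fringe))

# One colour channel of a Bayer mosaic: only every second pixel is sampled
sub = np.zeros(n)
sub[::2] = fringe[::2]
spec_sub = np.abs(np.fft.fft(sub))

# Count significant spectral peaks (above half the maximum magnitude)
peaks_full = int(np.sum(spec_full > spec_full.max() / 2))
peaks_sub = int(np.sum(spec_sub > spec_sub.max() / 2))
print(peaks_full, peaks_sub)               # subsampling doubles the peak count
```

Each extra spectral peak corresponds to a replica of the fringe carrier, and in the reconstructed hologram these replicas appear as the duplicated object images the paper discusses.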
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition timing of the vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
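The PLL principle can be sketched as a discrete-time loop: a phase detector compares the modulated-illumination reference against the sensor's frame phase, and a loop filter adjusts the frame rate until the two lock. An idealized simulation with a proportional-integral loop filter (the gains and normalized frequencies are assumed, not the in-pixel analog circuit):

```python
freq_ref = 1.0            # modulated-illumination reference frequency (normalized)
freq_free = 0.77          # sensor's free-running frame rate (normalized, mismatched)
kp, ki = 0.3, 0.2         # proportional/integral loop-filter gains (assumed)

e, integ = 0.4, 0.0       # initial phase error and integrator state
for _ in range(500):
    integ += ki * e                       # integral path tracks the frequency offset
    ctrl = kp * e + integ                 # loop-filter output adjusts frame timing
    e += freq_ref - (freq_free + ctrl)    # residual mismatch accumulates as phase

print(round(integ, 4))    # integrator settles at the frequency offset, 0.23
```

At lock the phase error driven to zero means every frame start coincides with the same phase of the illumination, which is exactly the multi-camera synchronization condition the paper needs.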
Smartphone based visual and quantitative assays on upconversional paper sensor.
Mei, Qingsong; Jing, Huarong; Li, You; Yisibashaer, Wuerzha; Chen, Jian; Nan Li, Bing; Zhang, Yong
2016-01-15
The integration of smartphones with paper sensors has recently gained increasing attention because it enables quantitative and rapid analysis. However, smartphone-based upconversional paper sensors have been restricted by the lack of effective methods to acquire luminescence signals on test paper. Herein, by virtue of 3D printing technology, we developed an auxiliary reusable device, which assembles a 980 nm mini-laser, an optical filter and a mini-cavity together, for digitally imaging the luminescence variations on test paper and quantitatively analyzing the pesticide thiram with a smartphone. In detail, copper-ion-decorated NaYF4:Yb/Tm upconversion nanoparticles were fixed onto filter paper to form the test paper, and the blue luminescence on it was quenched upon addition of thiram through a luminescence resonance energy transfer mechanism. These variations could be monitored by the smartphone camera, and the blue-channel intensities of the resulting color images were then calculated to quantify the amount of thiram through a self-written Android program installed on the smartphone, offering a reliable and accurate detection limit of 0.1 μM for the system. This work provides an initial demonstration of integrating upconversion nanosensors with smartphone digital imaging for point-of-care analysis on a paper-based platform. Copyright © 2015 Elsevier B.V. All rights reserved.
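The quantification step reduces to extracting the mean blue-channel intensity from the camera image and mapping it to concentration through a calibration curve. A Stern-Volmer-type quenching relation and all numeric constants below are assumptions for illustration; the actual calibration is the paper's own:

```python
import numpy as np

def mean_blue(rgb):
    """Mean blue-channel intensity of an HxWx3 image array."""
    return float(rgb[..., 2].mean())

# Hypothetical Stern-Volmer calibration: I0 / I = 1 + k_sv * [thiram]
k_sv = 2.0          # assumed quenching constant, per uM
i0 = 200.0          # assumed unquenched blue intensity of a blank test paper

def thiram_conc(rgb):
    """Estimate thiram concentration (uM) from the quenched blue channel."""
    return (i0 / mean_blue(rgb) - 1.0) / k_sv

# Synthetic image whose blue channel was quenched to half the blank level
img = np.zeros((4, 4, 3))
img[..., 2] = 100.0
print(thiram_conc(img))   # half-quenched blue gives 0.5 uM under this calibration
```

Averaging the channel over the sensing spot, as done here, is what makes a noisy phone camera usable for quantitation: single-pixel noise largely cancels in the mean.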
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional-grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single-lens reflex camera with an f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3-frame sensor and the same lens system. Both systems are tested with 240 × 180 mm cassettes based on either powdered europium-doped barium fluoride bromide or needle-structure europium-doped cesium bromide. The modulation transfer function of both systems has been determined; it falls to a value of 0.2 at around 2 lp/mm and is limited by scattering of the emitted light within the storage phosphor rather than by the optics or sensor pixelation. The modulation transfer function of the CsBr:Eu2+ plate is bimodal, with a high-frequency wing attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low, at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency, and hence signal-to-noise ratio, at medical doses, and the restricted range of plate sizes. Representative images taken at medical doses are shown and illustrate the potential use for portable basic radiography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ted Quinn; Jerry Mauck; Richard Bockhorst
The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy, reliability, availability, and maintainability. This report demonstrates these benefits in direct comparisons of digital and analog sensor applications. It also addresses the qualification issues that must be addressed in the application of digital sensor technology.
Digital dental radiology in Belgium: a nationwide survey.
Snel, Robin; Van De Maele, Ellen; Politis, Constantinus; Jacobs, Reinhilde
2018-06-27
The aim of this study is to analyse the use of digital dental radiology in Belgium, focussing on the use of extraoral and intraoral radiographic techniques, digitalisation and image communication. A nationwide survey was performed amongst Belgian general dentists and dental specialists. Questionnaires were distributed digitally via mailing lists and manually at multiple refresher courses and congresses throughout the country. The overall response rate was 30%. Overall, 94% of the respondents had access to an intraoral radiographic unit, 76% had access to a panoramic unit, and 21% had an attached cephalometric arm. One in five Belgian dentists also appeared to have direct access to a cone beam CT. Of all intraoral radiography units, 90% worked with digital detectors, as did 91% of panoramic units (with or without cephalometrics). In 70% of cases, general dental practitioners with a digital intraoral unit used a storage phosphor plate, while in 30% of cases they used sensor technology (charge-coupled device or complementary metal-oxide-semiconductor). The most common method for professional image transfer appeared to be email. Finally, 16% of all respondents used a calibrated monitor for image analysis. The survey indicates that 90% of the responding Belgian dentists make use of digital imaging techniques. For sharing images, general dental practitioners mainly use methods such as printouts and e-mail. The usage of calibrated monitors, however, is not yet well established.
Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
Theory on data processing and instrumentation [remote sensing]
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1978-01-01
A selection of NASA Earth observations programs are reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.
NASA Astrophysics Data System (ADS)
Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.
2004-06-01
This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma-ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma-ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates of up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain the event parameters necessary for performing the image reconstruction. A new-generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high data throughput with extensive optimization of the signal processing algorithms.
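The digital triggering scheme mentioned above can be illustrated with a toy rising-edge detector on free-running ADC samples; the function name, threshold, and pulse shapes are assumptions for illustration, not the PEM project's firmware:

```python
import numpy as np

def detect_events(samples, threshold):
    """Return indices where the digitized waveform crosses the threshold
    upward, i.e. a rising edge through the trigger level."""
    above = samples >= threshold
    # Rising edges: sample i is above while sample i-1 was below.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic free-running ADC trace: baseline noise plus two pulses.
rng = np.random.default_rng(7)
trace = rng.normal(0.0, 0.02, 1000)
trace[200:220] += 1.0   # first gamma-event pulse (illustrative)
trace[600:630] += 0.8   # second pulse

events = detect_events(trace, threshold=0.5)
print(len(events))  # 2 events detected
```

A real system would add pulse integration and pile-up rejection after the trigger, but the threshold-crossing logic is the digital analogue of an analog discriminator.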
Specification and preliminary design of the CARTA system for satellite cartography
NASA Technical Reports Server (NTRS)
Machadoesilva, A. J. F. (Principal Investigator); Neto, G. C.; Serra, P. R. M.; Souza, R. C. M.; Mitsuo, Fernando Augusta, II
1984-01-01
Digital imagery acquired by satellite has inherent geometric distortion due to sensor characteristics and to platform variations. At INPE a software system for geometric correction of LANDSAT MSS imagery is under development. Such corrected imagery will be useful for map generation. Important examples are the generation of LANDSAT image-charts for the Amazon region and the possibility of integrating digital satellite imagery into a Geographic Information System.
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
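The 1/√N noise scaling cited in the record can be checked with a quick simulation; the sampling count here is an assumption and the values are synthetic, not CIS measurements (the record's measured reduction need not follow the ideal exactly, since real conversions can share correlated noise):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_single = 848.3e-6   # single-conversion random noise from the record, in volts
n_samplings = 16          # assumed number of samplings, for illustration

# Simulate many pixel reads, each averaging n_samplings independent
# noisy conversions; averaging should shrink the noise by sqrt(n).
reads = rng.normal(0.0, sigma_single, size=(100_000, n_samplings))
averaged = reads.mean(axis=1)

measured = averaged.std()
expected = sigma_single / np.sqrt(n_samplings)
print(f"{measured * 1e6:.1f} uV vs expected {expected * 1e6:.1f} uV")
```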
Co-Registration Between Multisource Remote-Sensing Images
NASA Astrophysics Data System (ADS)
Wu, J.; Chang, C.; Tsai, H.-Y.; Liu, M.-C.
2012-07-01
Image registration is essential for geospatial information systems analysis, which usually involves integrating multitemporal and multispectral datasets from remote optical and radar sensors. An algorithm that deals with feature extraction, keypoint matching, outlier detection and image warping is tested in this study. The methods currently available in the literature rely on techniques such as the scale-invariant feature transform, between-edge cost minimization, normalized cross correlation, least-squares image matching, random sample consensus, iterated data snooping and thin-plate splines. Their basics are highlighted and encoded into a computer program. The test images are excerpts from digital files created by the multispectral SPOT-5 and Formosat-2 sensors, and by the panchromatic IKONOS and QuickBird sensors. Suburban areas, housing rooftops, the countryside and hilly plantations are studied. The co-registered images are displayed with block subimages in a criss-cross pattern. Besides the imagery, the registration accuracy is expressed by the root mean square error. Toward the end, this paper also includes a few opinions on issues that are believed to hinder a correct correspondence between diverse images.
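The least-squares matching and RMSE reporting mentioned in the record can be sketched generically: fit an affine transform to matched keypoints and report the root mean square error. The points below are synthetic, not the study's data:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) onto dst (N,2)."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])            # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3,2) coefficient matrix
    return params

# Synthetic matched keypoints: a mild affine distortion plus matching noise.
rng = np.random.default_rng(1)
src = rng.uniform(0, 1000, size=(50, 2))
true = np.array([[1.01, 0.02], [-0.02, 0.99]])
dst = src @ true + np.array([12.0, -7.5]) + rng.normal(0, 0.3, size=(50, 2))

M = fit_affine(src, dst)
pred = np.hstack([src, np.ones((50, 1))]) @ M
rmse = np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1)))
print(f"registration RMSE: {rmse:.2f} px")
```

In a full pipeline the inlier set fed to this fit would first be cleaned by RANSAC or data snooping, as the record describes.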
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
Capturing latent fingerprints from metallic painted surfaces using UV-VIS spectroscope
NASA Astrophysics Data System (ADS)
Makrushin, Andrey; Scheidat, Tobias; Vielhauer, Claus
2015-03-01
In digital crime scene forensics, contactless non-destructive detection and acquisition of latent fingerprints by means of optical devices such as a high-resolution digital camera, confocal microscope, or chromatic white-light sensor is the initial step prior to destructive chemical development. The applicability of an optical sensor for digitalizing latent fingerprints primarily depends on the reflection properties of the substrate. Metallic painted surfaces, for instance, pose a problem for conventional sensors that make use of visible light. Since metallic paint is a semi-transparent layer on top of the surface, visible light penetrates it and is reflected off the metallic flakes randomly disposed in the paint. Fingerprint residues do not impede the light beams, making ridges invisible. Latent fingerprints can be revealed, however, using ultraviolet light, which does not penetrate the paint. We apply a UV-VIS spectroscope that is capable of capturing images within the range from 163 to 844 nm using 2048 discrete levels. We empirically show that latent fingerprints left behind on metallic painted surfaces become clearly visible within the range from 205 to 385 nm. Our proposed streakiness score, a feature quantifying the proportion of ridge-valley pattern in an image, is applied for automatic assessment of a fingerprint's visibility and for distinguishing between fingerprint and empty regions. The experiments are carried out with 100 fingerprint and 100 non-fingerprint samples.
Superresolution with the focused plenoptic camera
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew
2011-03-01
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
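Treating the color channels of the mosaic separately, as the rendering algorithm above does, starts from splitting the color filter array into its subsampled planes; a minimal sketch assuming an RGGB Bayer layout (actual sensor layouts vary):

```python
import numpy as np

def split_rggb(raw):
    """Split an RGGB Bayer mosaic into four half-resolution planes.
    Keeping each plane separate avoids the resampling that full
    demosaicing would introduce before superresolution."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

# Tiny 4x4 mosaic with distinct values to make the slicing visible.
raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
r, g1, g2, b = split_rggb(raw)
print(r)   # the red samples: [[0 2] [8 10]]
```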
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have shifted the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on uncooled detector arrays that have made their application in UGS possible in terms of cost and power consumption. Visible light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
Object-based landslide mapping on satellite images from different sensors
NASA Astrophysics Data System (ADS)
Hölbling, Daniel; Friedl, Barbara; Eisank, Clemens; Blaschke, Thomas
2015-04-01
Several studies have proven that object-based image analysis (OBIA) is a suitable approach for landslide mapping using remote sensing data. Mostly, optical satellite images are utilized in combination with digital elevation models (DEMs) for semi-automated mapping. The ability to consider spectral, spatial, morphometric and contextual features in OBIA constitutes a significant advantage over pixel-based methods, especially when analysing non-uniform natural phenomena such as landslides. However, many of the existing knowledge-based OBIA approaches for landslide mapping are rather complex and are tailored to specific data sets. These restraints lead to a lack of transferability of OBIA mapping routines. The objective of this study is to develop an object-based approach for landslide mapping that is robust against changing input data with different resolutions, i.e. optical satellite imagery from various sensors. Two study sites in Taiwan were selected for developing and testing the landslide mapping approach. One site is located around the Baolai village in the Huaguoshan catchment in the southern-central part of the island; the other is a sub-area of the Taimali watershed in Taitung County near the south-eastern Pacific coast. Both areas are regularly affected by severe landslides and debris flows. A range of very high resolution (VHR) optical satellite images was used for the object-based mapping of landslides and for testing the transferability across different sensors and resolutions: (I) SPOT-5, (II) Formosat-2, (III) QuickBird, and (IV) WorldView-2. Additionally, a DEM with 5 m spatial resolution and its derived products (e.g. slope, plan curvature) were used for supporting the semi-automated mapping, particularly for differentiating source areas and accumulation areas according to their morphometric characteristics. A focus was put on the identification of comparatively stable parameters (e.g. relative indices), which could be transferred to the different satellite images. The presence of bare ground was assumed to be evidence for the occurrence of landslides. For separating vegetated from non-vegetated areas the Normalized Difference Vegetation Index (NDVI) was primarily used. Each image was divided into two respective parts based on an automatically calculated NDVI threshold value in eCognition (Trimble) software, combining the homogeneity criterion of multiresolution segmentation with histogram-based methods so that heterogeneity is increased to a maximum. Expert knowledge models, which depict the features and thresholds usually used by experts for digital landslide mapping, were considered for refining the classification. The results were compared to the respective results from visual image interpretation (i.e. manually digitized reference polygons for each image), which were produced by an independent local expert. In this way, the spatial overlaps as well as under- and over-estimated areas were identified, and the performance of the approach in relation to each sensor was evaluated. The presented method can complement traditional manual mapping efforts. Moreover, it contributes to current developments for increasing the transferability of semi-automated OBIA approaches and for improving the efficiency of change detection approaches across multi-sensor imagery.
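The NDVI separation described above can be sketched in a few lines; the reflectance values and the fixed threshold are illustrative assumptions (the study derives its threshold automatically per image):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)  # epsilon avoids divide-by-zero

# Synthetic reflectance patches: vegetation reflects strongly in NIR,
# bare ground (potential landslide scars) does not.
nir = np.array([[0.50, 0.48], [0.30, 0.28]])
red = np.array([[0.08, 0.10], [0.25, 0.26]])

index = ndvi(nir, red)
bare_ground = index < 0.3   # fixed threshold for illustration only
print(bare_ground)
```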
Digital radiography: a survey of pediatric dentists.
Russo, Julie M; Russo, James A; Guelmann, Marcio
2006-01-01
The purpose of this study was to: (1) determine the popularity of digital radiography among members of the American Academy of Pediatric Dentistry (AAPD); and (2) report the most common systems in use. An AAPD-approved, voluntary, and anonymous electronic survey was developed and sent to 923 board-certified pediatric dentists. Years in practice and in-office x-ray technology (digital or conventional) were inquired about initially. If the response was negative for the use of digital radiography, future consideration of converting to digital radiography was ascertained. For positive responses, more in-depth information was requested. Information on the type of system (sensor or phosphor plate), user friendliness, diagnostic ability, patient comfort, general costs, durability, and parental and overall satisfaction was collected. For most of the questions, a 5-point assessment scale was used. The opportunity for additional comments was provided upon survey completion. Data were analyzed using descriptive statistics. A 32% (296/923) response rate was obtained. Twenty-six percent of practitioners (78/296) had implemented digital radiography in their practices, whereas 71% considered future acquisition. A similar distribution of sensor and phosphor plate users was found. Sensor technology was reported to produce faster images, but was less well tolerated by young children due to size and thickness. Phosphor plates were considered more child-friendly, less expensive, and less durable. Parental satisfaction was very high, with great marketing value. Picture quality was comparable to conventional film. Overall, digital radiography users would recommend it to other pediatric dentists. Digital radiography is not yet popular among pediatric dentists. Cost reduction and technology advancement may enhance utilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham
2015-12-01
Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.
Overview of CMOS process and design options for image sensor dedicated to space applications
NASA Astrophysics Data System (ADS)
Martin-Gonthier, P.; Magnan, P.; Corbiere, F.
2005-10-01
With the growth of huge-volume markets (mobile phones, digital cameras...), CMOS technologies for image sensors have improved significantly. New process flows have appeared that optimize parameters such as quantum efficiency, dark current, and conversion gain. Space applications can of course benefit from these improvements. To illustrate this evolution, this paper reports results from three technologies that have been evaluated with test vehicles composed of several sub-arrays designed with certain space applications as targets. These three technologies are a standard, an improved, and a sensor-optimized CMOS process in the 0.35 μm generation. Measurements are focused on quantum efficiency, dark current, conversion gain and noise. Other measurements, such as the Modulation Transfer Function (MTF) and crosstalk, are reported in [1]. The results have been compared and three categories of CMOS process for image sensors have been identified. Radiation tolerance has also been studied for the improved CMOS process, with the aim of hardening the imager by design. Results at 4, 15, 25 and 50 krad demonstrate good ionizing-dose radiation tolerance when specific techniques are applied.
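Conversion gain, one of the parameters evaluated above, is commonly estimated with a photon transfer analysis: for shot-noise-limited signals, variance in DN grows linearly with mean DN, and the slope gives the system gain. A synthetic sketch of that standard method (the gain value is an assumption, not a result from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
gain_dn_per_e = 0.05          # assumed system gain, DN per electron
means_e = np.array([500, 1000, 2000, 4000, 8000])  # mean signal in electrons

mean_dn, var_dn = [], []
for ne in means_e:
    electrons = rng.poisson(ne, size=200_000)  # shot-noise-limited frames
    dn = gain_dn_per_e * electrons
    mean_dn.append(dn.mean())
    var_dn.append(dn.var())

# Photon transfer relation: var_DN = g * mean_DN, so the fitted slope
# estimates g, and conversion gain is 1/g electrons per DN.
slope = np.polyfit(mean_dn, var_dn, 1)[0]
print(f"estimated conversion gain: {1 / slope:.1f} e-/DN")  # ~20 e-/DN
```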
A Review of Digital Image Correlation Applied to Structural Dynamics
NASA Astrophysics Data System (ADS)
Niezrecki, Christopher; Avitabile, Peter; Warren, Christopher; Pingle, Pawan; Helfrick, Mark
2010-05-01
A significant amount of interest exists in performing non-contacting, full-field surface velocity measurement. For many years, traditional non-contacting surface velocity measurements have been made using scanning laser Doppler vibrometry, shearography, pulsed laser interferometry, pulsed holography, or electronic speckle pattern interferometry (ESPI). Three-dimensional (3D) digital image correlation (DIC) methods utilize the alignment of a stereo pair of images to obtain full-field geometry data in three dimensions. Information about the change in geometry of an object over time can be found by comparing a sequence of images, and virtual strain gages (or position sensors) can be created over the entire visible surface of the object of interest. Digital imaging techniques were first developed in the 1980s, but the technology has only recently been exploited in industry and research due to advances in digital cameras and personal computers. The use of DIC for structural dynamic measurement has only very recently been investigated. Within this paper, the advantages and limits of using DIC for dynamic measurement are reviewed. Several examples of using DIC for dynamic measurement are presented on vibrating and rotating structures.
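At its core, DIC matches a speckle subset between images by maximizing normalized cross-correlation; a minimal integer-pixel sketch (a real DIC code adds sub-pixel interpolation and subset shape functions):

```python
import numpy as np

def ncc_shift(ref, deformed, subset, search):
    """Find the integer-pixel displacement of a subset between two images
    by maximizing zero-normalized cross-correlation (the core of DIC)."""
    y0, x0, h, w = subset
    t = ref[y0:y0 + h, x0:x0 + w].astype(float)
    t = (t - t.mean()) / t.std()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            c = deformed[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w].astype(float)
            c = (c - c.mean()) / c.std()
            score = (t * c).mean()   # zero-normalized cross-correlation
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx

# Synthetic speckle pattern shifted by a known rigid-body motion.
rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, size=(64, 64))
deformed = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)

print(ncc_shift(ref, deformed, subset=(20, 20, 16, 16), search=5))  # (2, -3)
```

Tracking many such subsets over an image sequence is what turns a camera pair into the "virtual strain gages" described above.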
Hadamard multimode optical imaging transceiver
Cooke, Bradly J; Guenther, David C; Tiee, Joe J; Kellum, Mervyn J; Olivas, Nicholas L; Weisse-Bernstein, Nina R; Judd, Stephen L; Braun, Thomas R
2012-10-30
Disclosed is a method and system for simultaneously acquiring and producing results for multiple image modes using a common sensor without optical filtering, scanning, or other moving parts. The system and method utilize the Walsh-Hadamard correlation detection process (e.g., functions/matrix) to provide an all-binary structure that permits seamless bridging between the analog and digital domains. An embodiment may capture an incoming optical signal at an optical aperture, convert the optical signal to an electrical signal, pass the electrical signal through a Low-Noise Amplifier (LNA) to create an LNA signal, pass the LNA signal through one or more correlators where each correlator has a corresponding Walsh-Hadamard (WH) binary basis function, calculate a correlation output coefficient for each correlator as a function of the corresponding WH binary basis function in accordance with Walsh-Hadamard mathematical principles, digitize each of the correlation output coefficients by passing each through an Analog-to-Digital Converter (ADC), and perform image mode processing on the digitized correlation output coefficients as desired to produce one or more image modes. Some, but not all, potential image modes include: multi-channel access, temporal, range, three-dimensional, and synthetic aperture.
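The Walsh-Hadamard correlation at the heart of the disclosed system relies on the orthogonality of ±1 basis functions: correlating a signal against each basis function and then summing the weighted functions back recovers the signal. A small sketch (the signal values are arbitrary):

```python
import numpy as np

def walsh_hadamard_matrix(n):
    """Sylvester construction: n must be a power of two; entries are +/-1,
    so every 'correlator' is an all-binary multiply-accumulate."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 8
H = walsh_hadamard_matrix(n)
signal = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])

coefficients = H @ signal                   # output of the n correlators
reconstructed = (H.T @ coefficients) / n    # orthogonality: H.T @ H = n * I
print(np.allclose(reconstructed, signal))   # True
```

Because the basis is binary, the analog front end only ever needs sign flips and summation, which is what permits the seamless analog/digital bridging claimed above.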
The National Map - Orthoimagery
Mauck, James; Brown, Kim; Carswell, William J.
2009-01-01
Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to measure distances, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color-infrared product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.
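The distance and area measurements listed above follow directly from the orthoimage's uniform ground sample distance; a toy sketch assuming bare pixel coordinates at 1-m resolution (a real workflow would read the georeferencing from the GeoTIFF rather than assume it):

```python
import math

PIXEL_SIZE_M = 1.0   # assumed 1-meter orthoimage resolution

def ground_distance(p1, p2, pixel_size=PIXEL_SIZE_M):
    """Straight-line ground distance between two pixel coordinates."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * pixel_size

def ground_area(width_px, height_px, pixel_size=PIXEL_SIZE_M):
    """Ground area of an axis-aligned pixel rectangle, in square meters."""
    return width_px * height_px * pixel_size ** 2

print(ground_distance((0, 0), (300, 400)))  # 500.0 m
print(ground_area(250, 100))                # 25000.0 m^2
```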
Very-large-area CCD image sensors: concept and cost-effective research
NASA Astrophysics Data System (ADS)
Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.
2009-01-01
A new-generation full-frame 36×48 mm², 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications has been developed by means of the so-called building-block concept. The 48Mp devices are formed by stitching 1k×1k building blocks with 6.0 µm pixel pitch in a 6×8 (h×v) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and the on-chip amplifier provides statistical data using a limited number of wafers. The large-area devices, in turn, are evaluated for aspects typical of large-sensor operation and performance, such as charge transfer efficiency. Combined with the usability of multi-layer reticles, this makes the sensor development cost-effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets the demands of wide-angle photography.
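The quoted 73 dB dynamic range follows directly from the stated full-well capacity and read noise via DR = 20·log10(full well / read noise):

```python
import math

full_well_e = 58_000   # pixel charge capacity from the record, electrons
read_noise_e = 12      # readout noise after CDS, electrons

dynamic_range_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"{dynamic_range_db:.1f} dB")  # ~73.7 dB, consistent with the quoted 73 dB
```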
Simple Colorimetric Sensor for Trinitrotoluene Testing
NASA Astrophysics Data System (ADS)
Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.
2017-02-01
A colorimetric sensor with simple operation for trinitrotoluene (TNT) determination, using a commercial scanner for image capture, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent, which produces a color change within 96-well plates that is then recorded using a commercial scanner. The intensity of the color change increased with TNT concentration, and the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations were found to range from non-detectable to 0.26 ± 0.04 mg mL-1. The recovery values obtained (93-95%) were acceptable for the soil samples tested.
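The quantification by digital image analysis amounts to a straight-line calibration of intensity against concentration, inverted for unknowns; the intensity values below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points: mean color-change intensity (from
# digital image analysis) at known TNT concentrations in mg/mL.
conc = np.array([0.20, 0.40, 0.60, 0.80, 1.00])
intensity = np.array([21.0, 39.5, 62.0, 80.5, 101.0])

slope, intercept = np.polyfit(conc, intensity, 1)
r = np.corrcoef(conc, intensity)[0, 1]   # linearity of the calibration

def quantify(i):
    """Invert the linear calibration to estimate concentration."""
    return (i - intercept) / slope

print(f"r = {r:.4f}")
print(f"unknown at intensity 50 -> {quantify(50.0):.2f} mg/mL")
```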
SSUSI-Lite: a far-ultraviolet hyper-spectral imager for space weather remote sensing
NASA Astrophysics Data System (ADS)
Ogorzalek, Bernard; Osterman, Steven; Carlsson, Uno; Grey, Matthew; Hicks, John; Hourani, Ramsey; Kerem, Samuel; Marcotte, Kathryn; Parker, Charles; Paxton, Larry J.
2015-09-01
SSUSI-Lite is a far-ultraviolet (115-180 nm) hyperspectral imager for monitoring space weather. The SSUSI and GUVI sensors, its predecessors, have demonstrated their value as space weather monitors. SSUSI-Lite is a refresh of the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) design that has flown on the Defense Meteorological Satellite Program (DMSP) spacecraft F16 through F19. The refresh updates the 25-year-old design and ensures that the next generation of SSUSI/GUVI sensors can be accommodated on any number of potential platforms. SSUSI-Lite maintains the same optical layout as SSUSI, includes updates to key functional elements, and reduces the sensor volume, mass, and power requirements. SSUSI-Lite contains an improved scanner design that results in precise mirror pointing and allows for variable scan profiles. The detector electronics have been redesigned to employ all-digital pulse processing. The largest decrease in volume, mass, and power has been obtained by consolidating all control and power electronics into one data processing unit.
Highly curved image sensors: a practical approach for improved optical performance
NASA Astrophysics Data System (ADS)
Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff
2017-06-01
The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.
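The near-100% relative illumination reported for the curved sensor contrasts with the cos⁴ falloff expected for an idealized flat focal plane; a sketch of that textbook baseline (a simplifying thin-lens model, which real lens designs deviate from):

```python
import math

def flat_sensor_ri(field_angle_deg):
    """Idealized cos^4 relative illumination at a field angle on a flat focal plane."""
    theta = math.radians(field_angle_deg)
    return math.cos(theta) ** 4

for angle in (0, 10, 20, 30):
    print(f"{angle:2d} deg: {flat_sensor_ri(angle):.2f}")
# A spherically curved sensor keeps the chief ray near-normal across the
# field, which is why the measured relative illumination approaches 100%.
```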
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full-well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and, in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
MODIS, SeaWiFS, and Pathfinder funded activities
NASA Technical Reports Server (NTRS)
Evans, Robert H.
1995-01-01
MODIS (Moderate Resolution Imaging Spectrometer), SeaWiFS (Sea-viewing Wide Field-of-view Sensor), Pathfinder, and DSP (Digital Signal Processor) objectives are summarized. An overview of current progress is given for the automatic processing database, client/server status, matchup database, and DSP support.
Test Equipment and Method to Characterize a SWIR Digital Imaging System
2014-06-01
Sensors based on Gallium Arsenide (GaAs) detectors are sensitive in the visible and near-infrared (NIR) bands, and are used only at night. They produce images from ... current from the silicon sensor located on the sphere. The irradiance responsivity, Rn, is the ratio of the silicon detector current to the absolute irradiance, so the irradiance is recovered from the silicon detector currents in accordance with Equation 1: E [W/m^2] = I [A] / Rn [A/(W/m^2)].
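The responsivity relation above inverts directly. A minimal sketch, assuming Rn is expressed in A/(W/m^2) (the function name and units are illustrative, not from the report):

```python
# Hypothetical sketch: recovering irradiance from a calibrated silicon
# monitor detector. Rn = I / E by definition, so E = I / Rn.
def irradiance(detector_current_a, responsivity_a_per_wm2):
    """E [W/m^2] = I [A] / Rn [A/(W/m^2)]."""
    return detector_current_a / responsivity_a_per_wm2

print(irradiance(2.0e-6, 4.0e-6))  # 0.5 W/m^2
```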
Research-grade CMOS image sensors for demanding space applications
NASA Astrophysics Data System (ADS)
Saint-Pé, Olivier; Tulet, Michel; Davancens, Robert; Larnaudie, Franck; Magnan, Pierre; Corbière, Franck; Martin-Gonthier, Philippe; Belliot, Pierre
2004-06-01
Imaging detectors are key elements for optical instruments and sensors on board space missions dedicated to Earth observation (high-resolution imaging, atmosphere spectroscopy...), Solar System exploration (micro cameras, guidance for autonomous vehicles...) and Universe observation (space telescope focal planes, guiding sensors...). This market was long dominated by CCD technology. Since the mid-90s, CMOS Image Sensors (CIS) have been competing with CCDs in more and more consumer domains (webcams, cell phones, digital cameras...). Featuring significant advantages over CCD sensors for space applications (lower power consumption, smaller system size, better radiation behaviour...), CMOS technology is also expanding in this field, justifying specific R&D and development programs funded by national and European space agencies (mainly CNES, DGA, and ESA). Throughout the 90s, thanks to their steadily improving performance, CIS came to be used successfully in increasingly demanding applications, from vision and control functions requiring low-level performance to guidance applications requiring medium-level performance. Recent technology improvements have made possible the manufacturing of research-grade CIS that are able to compete with CCDs in the high-performance arena. After an introduction outlining the growing interest of optical instrument designers in CMOS image sensors, this talk will present the existing and foreseen ways to reach high-level electro-optic performance for CIS. The development of CIS prototypes built using an imaging CMOS process, and of devices based on improved designs, will be presented.
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Secure and Efficient Transmission of Hyperspectral Images for Geosciences Applications
NASA Astrophysics Data System (ADS)
Carpentieri, Bruno; Pizzolante, Raffaele
2017-12-01
Hyperspectral images are acquired through air-borne or space-borne special cameras (sensors) that collect information from the electromagnetic spectrum of the observed terrain. Hyperspectral remote sensing and hyperspectral images are used for a wide range of purposes: originally they were developed for mining applications and for geology, because images of this kind can correctly identify various types of underground minerals by analysing the reflected spectra, but their usage has spread to other application fields, such as ecology, military and surveillance, historical research and even archaeology. The large amount of data produced by hyperspectral sensors, the high cost of acquiring these images with air-borne sensors, and the fact that they are generally transmitted to a base station make it necessary to provide an efficient and secure transmission protocol. In this paper, we propose a novel framework for the secure and efficient transmission of hyperspectral images that combines a reversible invisible watermarking scheme, used in conjunction with digital signature techniques, with a state-of-the-art predictive lossless compression algorithm.
Sadowski, Franklin G.; Covington, Steven J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of the nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.
Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography
NASA Astrophysics Data System (ADS)
Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun
2016-04-01
Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, pixel architectures with high signal-to-noise ratio (SNR) are emerging. The "smart" pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from conventional passive pixel sensors (PPS) and active pixel sensors (APS) in that all three functions are combined into one device instead of three separate units per pixel; thus, it is expected to offer high fill factor and high spatial resolution. In addition, it exploits the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS with potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulation and through experiments on a 4 x 4 sensor array.
Hall, Elise M; Thurow, Brian S; Guildenbecher, Daniel R
2016-08-10
Digital in-line holography (DIH) and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with DIH. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and DIH successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. In contrast, plenoptic imaging allows for a simpler experimental configuration and, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments.
Radiopacity of endodontic materials on film and a digital sensor.
Rasimick, Brian J; Shah, Rinal P; Musikant, Barry Lee; Deutsch, Allan S
2007-09-01
The purpose of this study was to compare the radiographic appearance of 12 endodontic materials as visualized on either Kodak Ultra-speed D-speed film (Eastman Kodak Company, Rochester, NY) or a Gendex eHD digital sensor (Gendex Dental Systems, Milan, Italy). Ten discs of each material were radiographed alongside an aluminum alloy 1100 (Alcoa, Pittsburgh, PA) stepwedge that served as a reference. For every radiograph, the average grayscale value of the material was converted into absorbance notation and compared with that of the reference stepwedge to determine the equivalent radiopacity in millimeters of Al 1100 per millimeter of material. Two-way repeated-measures analysis of variance detected significant differences with respect to imaging system, material, and the interaction of the two factors (p < 0.001). The difference in a material's radiopacity as measured on the digital sensor compared with film was greater than 10% for 4 of the 12 materials and over 40% for InnoEndo (Heraeus Kulzer, Armonk, NY). It was speculated that barium fillers cause this effect.
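The grayscale-to-stepwedge comparison described above can be sketched as follows. The log-based absorbance conversion and the linear interpolation between stepwedge steps are assumptions about the procedure, not the paper's exact arithmetic:

```python
import math

# Hedged sketch: grayscale values are converted to an absorbance-like
# quantity and matched against an aluminum stepwedge to express
# radiopacity in mm of Al equivalent per mm of material.
def absorbance(gray, gray_max=255.0):
    return -math.log10(gray / gray_max)

def al_equivalent(material_gray, step_grays, step_mm_al):
    """Linearly interpolate the material's absorbance on the stepwedge curve."""
    a = absorbance(material_gray)
    steps = sorted(zip((absorbance(g) for g in step_grays), step_mm_al))
    for (a0, t0), (a1, t1) in zip(steps, steps[1:]):
        if a0 <= a <= a1:
            return t0 + (a - a0) * (t1 - t0) / (a1 - a0)
    return None  # outside the calibrated range

# Illustrative stepwedge calibration: grayscale readings for 1-4 mm of Al.
print(al_equivalent(100, [200, 150, 100, 60], [1, 2, 3, 4]))  # 3.0
```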
NASA Technical Reports Server (NTRS)
Langevin, Maurice L. (Inventor); Moynihan, Philip I. (Inventor)
2000-01-01
An optical-to-tactile translator provides an aid for the visually impaired by translating a near-field scene to a tactile signal corresponding to said near-field scene. An optical sensor using a plurality of active pixel sensors (APS) converts the optical image within the near-field scene to a digital signal. The digital signal is then processed by a microprocessor and a simple shape signal is generated based on the digital signal. The shape signal is then communicated to a tactile transmitter where the shape signal is converted into a tactile signal using a series of contacts. The shape signal may be an outline of the significant shapes determined in the near-field scene, or the shape signal may comprise a simple symbolic representation of common items encountered repeatedly. The user is thus made aware of the unseen near-field scene, including potential obstacles and dangers, through a series of tactile contacts. In a preferred embodiment, a range determining device such as those commonly found on auto-focusing cameras is included to limit the distance that the optical sensor interprets the near-field scene.
Kruse, Fred A.
1984-01-01
Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs; vegetation density is a secondary cause. Digital color analysis of Landsat CRC images can be used to map unknown areas, as color variation among green pixels allows discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
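The hue/value/saturation decomposition above can be sketched with the standard HSV transform, which here stands in for the modified cylindrical Munsell coordinates; the hue window and saturation floor are illustrative thresholds, not values from the study:

```python
import colorsys

# Illustrative sketch only: flag "green" CRC pixels (candidate limonite)
# by transforming RGB into hue/saturation/value coordinates.
def classify_crc_pixel(r, g, b, green_hue=(0.20, 0.45), sat_floor=0.2):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    is_green = green_hue[0] <= h <= green_hue[1] and s >= sat_floor
    return h, s, v, is_green

h, s, v, limonitic = classify_crc_pixel(40, 200, 60)
print(limonitic)  # True
```

Further discrimination (bedrock vs. alluvium, limonitic vs. not) would then partition the hue, value and saturation ranges of the flagged pixels.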
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
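A back-of-envelope check shows the binned mode is consistent with the stated memory: assuming the 444 x 200 mode runs on the type I chip's 24 parallel channels and each 12-bit sample occupies 2 bytes (both assumptions, not stated in the abstract):

```python
# Sanity-check the binned recording mode: 444 x 200 pixels at 500 frames/s
# for 36 s, 12-bit samples padded to 2 bytes, across 24 parallel channels.
pixels = 444 * 200
bytes_per_frame = pixels * 2          # 12-bit sample stored in 16 bits
rate = bytes_per_frame * 500          # aggregate bytes per second
total = rate * 36                     # whole 36-second recording
per_channel_mb = total / 24 / 2**20   # share of one 128 MB memory channel

print(round(rate / 1e6, 1))      # 88.8 -> ~88.8 MB/s aggregate
print(round(per_channel_mb))     # 127  -> fits a 128 MB module per channel
```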
NASA Astrophysics Data System (ADS)
Kendrick, Stephen E.; Harwit, Alex; Kaplan, Michael; Smythe, William D.
2007-09-01
An MWIR TDI (Time Delay and Integration) Imager and Spectrometer (MTIS) instrument for characterizing from orbit the moons of Jupiter and Saturn is proposed. Novel to this instrument is the planned implementation of a digital TDI detector array and an innovative imaging/spectroscopic architecture. Digital TDI enables a higher SNR for high spatial resolution surface mapping of Titan and Enceladus and for improved spectral discrimination and resolution at Europa. The MTIS imaging/spectroscopic architecture combines a high spatial resolution coarse wavelength resolution imaging spectrometer with a hyperspectral sensor to spectrally decompose a portion of the data adjacent to the data sampled in the imaging spectrometer. The MTIS instrument thus maps with high spatial resolution a planetary object while spectrally decomposing enough of the data that identification of the constituent materials is highly likely. Additionally, digital TDI systems have the ability to enable the rejection of radiation induced spikes in high radiation environments (Europa) and the ability to image in low light levels (Titan and Enceladus). The ability to image moving objects that might be missed utilizing a conventional TDI system is an added advantage and is particularly important for characterizing atmospheric effects and separating atmospheric and surface components. This can be accomplished with on-orbit processing or collecting and returning individual non co-added frames.
Compact SPAD-Based Pixel Architectures for Time-Resolved Image Sensors
Perenzoni, Matteo; Pancheri, Lucio; Stoppa, David
2016-01-01
This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size (<25 μm) and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved. PMID:27223284
Real-time image processing of TOF range images using a reconfigurable processor system
NASA Astrophysics Data System (ADS)
Hussmann, S.; Knoll, F.; Edeler, T.
2011-07-01
During the last years, Time-of-Flight (TOF) sensors have had a significant impact on research fields in machine vision. Compared with stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, with those of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
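The 4-phase-shift range calculation the paper refers to can be sketched as follows. The four samples A0..A3 yield a phase via the arctangent (the operation the hardware CORDIC accelerates; `math.atan2` stands in for it here), and the 20 MHz modulation frequency is an illustrative choice, not a value from the paper:

```python
import math

# Sketch of the 4-phase-shift range calculation for a TOF pixel.
C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # assumed modulation frequency, Hz

def tof_range(a0, a1, a2, a3):
    """Phase from four samples, then range; unambiguous range is C/(2*F_MOD)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# quarter-cycle phase (pi/2) -> one quarter of the ~7.495 m unambiguous range
print(round(tof_range(0.0, -1.0, 0.0, 1.0), 3))  # 1.874
```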
A CMOS high speed imaging system design based on FPGA
NASA Astrophysics Data System (ADS)
Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui
2015-10-01
CMOS sensors have several advantages over traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. To achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SLX75T, and we use a Camera Link interface with the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500-frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format; it digitizes images using column-parallel A/D converters. The Camera Link interface uses the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. Light reflected from the scene is captured by the CMOS detector, converted to electronic signals, and sent to the FPGA. The FPGA processes the received data and transmits it through the Camera Link interface, configured in full mode, to acquisition cards in a host computer, where the images are then stored, visualized and processed. This paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the sensor data are converted to LVDS signals and transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.
Miniaturized multiwavelength digital holography sensor for extensive in-machine tool measurement
NASA Astrophysics Data System (ADS)
Seyler, Tobias; Fratz, Markus; Beckmann, Tobias; Bertz, Alexander; Carl, Daniel
2017-06-01
In this paper we present a miniaturized digital holographic sensor (HoloCut) for operation inside a machine tool. With state-of-the-art 3D measurement systems, short-range structures such as tool marks cannot be resolved inside a machine tool chamber; until now, measurements had to be conducted outside the machine tool, so processing data were generated offline. The sensor presented here uses digital multiwavelength holography to obtain 3D shape information on the machined sample. By using three wavelengths, we obtain a large artificial wavelength with a large unambiguous measurement range of 0.5 mm and achieve micron repeatability even in the presence of laser speckle on rough surfaces. In addition, a digital refocusing algorithm based on phase noise extends the measurement range beyond the limits of the artificial wavelength and the geometrical depth of focus: with complex wave-field propagation, the focus plane can be shifted after the camera images have been taken, and a sharp image with extended depth of focus is constructed. With a 20 mm x 20 mm field of view, the sensor enables measurement of both macro- and micro-structure (such as tool marks) with an axial resolution of 1 µm and a lateral resolution of 7 µm, and consequently allows processing data to be generated online, which in turn qualifies it for machine tool control. To make HoloCut compact enough for operation inside a machining center, the beams are arranged in two planes: they are split into reference and object beams in the bottom plane and combined onto the camera in the top plane. Using a mechanical standard interface according to DIN 69893, and having a very compact size of 235 mm x 140 mm x 215 mm (W x H x D) and a weight of 7.5 kg, HoloCut can be easily integrated into different machine tools and extends no further in height than a typical processing tool.
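The multiwavelength idea above rests on the synthetic (beat) wavelength: two laser lines lam1 and lam2 combine into Lambda = lam1*lam2/|lam1 - lam2|, and the unambiguous height range is Lambda/2. The wavelengths below are illustrative values chosen to reproduce the paper's 0.5 mm range, not the sensor's actual laser lines:

```python
# Hedged sketch of the synthetic-wavelength calculation behind
# multiwavelength digital holography.
def synthetic_wavelength(lam1, lam2):
    return lam1 * lam2 / abs(lam1 - lam2)

lam1, lam2 = 640e-9, 640.41e-9                  # metres, assumed values
L = synthetic_wavelength(lam1, lam2)
print(round(L * 1e3, 2), "mm synthetic wavelength")    # 1.0 mm
print(round(L / 2 * 1e3, 2), "mm unambiguous range")   # 0.5 mm
```

With three wavelengths, two such pairwise beats can be cascaded to extend the unambiguous range further while keeping single-wavelength precision.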
Automated site characterization for robotic sample acquisition systems
NASA Astrophysics Data System (ADS)
Scholl, Marija S.; Eberlein, Susan J.
1993-04-01
A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.
Outdoor surface temperature measurement: ground truth or lie?
NASA Astrophysics Data System (ADS)
Skauli, Torbjorn
2004-08-01
Contact surface temperature measurement in the field is essential in trials of thermal imaging systems and camouflage, as well as for scene modeling studies. The accuracy of such measurements is challenged by environmental factors such as sun and wind, which induce temperature gradients around a surface sensor and lead to incorrect temperature readings. In this work, a simple method is used to test temperature sensors under conditions representative of a surface whose temperature is determined by heat exchange with the environment. The tested sensors are different types of thermocouples and platinum thermistors typically used in field trials, as well as digital temperature sensors. The results illustrate that the actual measurement errors can be much larger than the specified accuracy of the sensors. The measurement error typically scales with the difference between surface temperature and ambient air temperature. Unless proper care is taken, systematic errors can easily reach 10% of this temperature difference, which is often unacceptable. Reasonably accurate readings are obtained using a miniature platinum thermistor. Thermocouples can perform well on bare metal surfaces if the connection to the surface is highly conductive. It is pointed out that digital temperature sensors have many advantages for field trials use.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)
1982-01-01
Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.
NASA Astrophysics Data System (ADS)
Jantzen, Connie; Slagle, Rick
1997-05-01
The distinction between exposure time and sample rate is often the first point raised in any discussion of high-speed imaging. Many high-speed events require exposure times considerably shorter than those that can be achieved solely by the sample rate of the camera, where exposure time equals 1/sample rate. Gating, a method of achieving short exposure times in digital cameras, is often difficult to achieve for exposure-time requirements shorter than 100 microseconds. This paper discusses the advantages and limitations of using the short-duration light pulse of a near-infrared laser with high-speed digital imaging systems. By closely matching the output wavelength of the pulsed laser to the peak near-infrared response of current sensors, high-speed image capture can be accomplished at very low (visible) light levels of illumination. By virtue of the short-duration light pulse, adjustable to as short as two microseconds, image capture of very high-speed events can be achieved without image blur at relatively low sample rates of less than 100 pictures per second. For our initial investigations, we chose a ballistic subject. Early experimentation revealed the limitations of applying traditional ballistic imaging methods when using a pulsed infrared light source with a digital imaging system. These early disappointing results clarified the need to further identify the unique system characteristics of the combined digital imager and pulsed infrared source, and to investigate how the infrared reflectance and transmittance of common materials affect the imaging process. This experimental work yielded a surprising, successful methodology which will prove useful in imaging ballistic and weapons tests, as well as forensics, flow visualizations, spray-pattern analyses, and nocturnal animal behavioral studies.
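A quick calculation shows why the pulse, not the frame rate, freezes motion: blur is simply velocity times effective exposure time. The projectile speed below is an assumed illustrative figure:

```python
# Motion blur = velocity x effective exposure time, expressed in mm.
def blur_mm(velocity_m_s, exposure_s):
    return velocity_m_s * exposure_s * 1e3

v = 400.0                          # assumed projectile speed, m/s
print(blur_mm(v, 2e-6))            # 2 us laser pulse -> sub-millimetre blur
print(blur_mm(v, 1 / 100))         # 1/100 s frame-time "exposure" -> metres of blur
```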
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used, or considered for use, in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time savings over the use of film in such applications. Further, in still-image systems, electronic image capture is faster and more efficient than conventional image scanners: a CCD still camera can capture 3-dimensional objects directly into the computing environment, without having to shoot a picture on film, develop it, and then scan the image into a computer. Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD, the basis for this discussion, offers arrays of roughly 750 x 580 picture elements (pixels), a total of approximately 435,000 pixels (see Fig. 1). FOR-A has developed a technique to increase the number of available pixels for a given image beyond that produced by the standard CCD itself: using an interlined CCD with an overall spatial structure several times larger than the photosensitive sensor areas, the CCD sensor is shifted in two dimensions in order to fill in the spatial gaps between adjacent sensors.
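The pixel-shift scheme described above can be sketched as a simple interleave: four exposures, each displaced by half a pixel pitch in x and/or y, merge into a grid with twice the sampling density along each axis (a minimal sketch of the idea, not FOR-A's actual implementation):

```python
# Interleave four half-pixel-shifted frames into one double-density frame.
def interleave(f00, f10, f01, f11):
    h, w = len(f00), len(f00[0])
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = f00[y][x]          # no shift
            out[2 * y][2 * x + 1] = f10[y][x]      # half-pixel shift in x
            out[2 * y + 1][2 * x] = f01[y][x]      # half-pixel shift in y
            out[2 * y + 1][2 * x + 1] = f11[y][x]  # shift in both axes
    return out

frames = [[[v] * 2 for _ in range(2)] for v in range(4)]  # four toy 2x2 frames
print(interleave(*frames)[0])   # [0, 1, 0, 1]
```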
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Editor)
1988-01-01
The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
Object recognition of ladar with support vector machine
NASA Astrophysics Data System (ADS)
Sun, Jian-Feng; Li, Qi; Wang, Qi
2005-01-01
Intensity, range, and Doppler images can be obtained by using laser radar. Laser radar can detect much more object information than other detection sensors, such as passive infrared imaging and synthetic aperture radar (SAR), so it is well suited as the sensor for object recognition. The traditional method of laser radar object recognition is extracting target features, which can be influenced by noise. In this paper, a laser radar recognition method, the Support Vector Machine, is introduced. The Support Vector Machine (SVM) is a new hotspot of recognition research after neural networks, and it performs well on handwritten digit and face recognition. Two series of SVM experiments, designed for preprocessed and non-preprocessed samples, are performed on real laser radar images, and the experimental results are compared.
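To make the classifier concrete, here is a minimal linear SVM trained by subgradient descent on the regularized hinge loss. The paper's SVM (likely kernelized and trained on ladar image features) is not specified here; the data shape, hyperparameters, and function name are all illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM by subgradient descent on the hinge loss.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Minimizes lam/2*||w||^2 + mean(max(0, 1 - y*(Xw + b))).
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # margin violators
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Prediction is then `np.sign(X @ w + b)`; a kernel SVM replaces the dot products with a kernel function but follows the same max-margin principle.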
Spatial super-resolution of colored images by micro mirrors
NASA Astrophysics Data System (ADS)
Dahan, Daniel; Yaacobi, Ami; Pinsky, Ephraim; Zalevsky, Zeev
2018-06-01
In this paper, we present two methods of dealing with the geometric resolution limit of color imaging sensors. It is possible to overcome the pixel size limit by adding a digital micro-mirror device component on the intermediate image plane of an optical system, and adapting its pattern in a computerized manner before sampling each frame. The full RGB image can be reconstructed from the Bayer camera by building a dedicated optical design, or by adjusting the demosaicing process to the special format of the enhanced image.
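Reconstructing RGB from a Bayer-filtered sensor is the demosaicing step the abstract refers to. A minimal half-resolution sketch for an assumed RGGB layout is shown below; the paper's adapted demosaicing for the micro-mirror-enhanced image is more involved and is not reproduced here.

```python
import numpy as np

def demosaic_half(raw):
    """Half-resolution demosaic of an RGGB Bayer mosaic (H, W even).

    Each 2x2 Bayer cell (R G / G B) becomes one RGB pixel; the two
    green samples in the cell are averaged.
    """
    h, w = raw.shape
    rgb = np.zeros((h // 2, w // 2, 3), dtype=float)
    rgb[..., 0] = raw[0::2, 0::2]                          # R
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # mean of two G
    rgb[..., 2] = raw[1::2, 1::2]                          # B
    return rgb
```

Full-resolution demosaicing instead interpolates the missing color samples at every photosite, which is where the geometric resolution limit the paper addresses comes from.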
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
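The jitter-miser's core measurement, how far a frame has shifted, can also be estimated from image data alone by phase correlation. This is a sketch assuming integer-pixel translation and periodic boundaries; the actual system additionally uses the camera's motion-sensor signals.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) that re-aligns `frame` with `ref`,
    i.e. np.roll(frame, (dy, dx), axis=(0, 1)) ~= ref (phase correlation)."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    R /= np.abs(R) + 1e-12                 # keep phase only
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h                            # wrap to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A frame buffer plus this correction shift is enough to stabilize the displayed image; conversely, the sequence of estimated shifts is itself the jitter measurement the abstract mentions.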
NASA Technical Reports Server (NTRS)
Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu
1993-01-01
We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
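The calibration chain (band-averaging the high-resolution AVIRIS spectra with each OPS spectral response curve, then fitting coefficients against the OPS digital numbers) can be sketched as follows. All array names and the linear gain/offset model are assumptions for illustration; the actual coefficient form used in the experiment is not given in the abstract.

```python
import numpy as np

def band_average(wl, spectrum, resp_wl, resp):
    """Band-average a high-resolution radiance spectrum with a band's
    relative spectral response (simple rectangle-rule quadrature)."""
    r = np.interp(wl, resp_wl, resp, left=0.0, right=0.0)
    return np.sum(r * spectrum) / np.sum(r)

def fit_gain_offset(dn, radiance):
    """Least-squares radiometric coefficients: radiance ~= gain * DN + offset."""
    gain, offset = np.polyfit(dn, radiance, 1)
    return gain, offset
```

Repeating the fit per band yields one (gain, offset) pair for each of the eight OPS bands, which is the sense in which AVIRIS transfers its calibration to the spaceborne sensor.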
Optoelectronic Sensor System for Guidance in Docking
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Bryan, Thomas C.; Book, Michael L.; Jackson, John L.
2004-01-01
The Video Guidance Sensor (VGS) system is an optoelectronic sensor that provides automated guidance between two vehicles. In the original intended application, the two vehicles would be spacecraft docking together, but the basic principles of design and operation of the sensor are applicable to aircraft, robots, vehicles, or other objects that may be required to be aligned for docking, assembly, resupply, or precise separation. The system includes a sensor head containing a monochrome charge-coupled-device video camera and pulsed laser diodes mounted on the tracking vehicle, and passive reflective targets on the tracked vehicle. The lasers illuminate the targets, and the resulting video images of the targets are digitized. Then, from the positions of the digitized target images and known geometric relationships among the targets, the relative position and orientation of the vehicles are computed. As described thus far, the VGS system is based on the same principles as those of the system described in "Improved Video Sensor System for Guidance in Docking" (MFS-31150), NASA Tech Briefs, Vol. 21, No. 4 (April 1997), page 9a. However, the two systems differ in the details of design and operation. The VGS system is designed to operate with the target completely visible within a relative-azimuth range of ±10.5° and a relative-elevation range of ±8°. The VGS acquires and tracks the target within that field of view at any distance from 1.0 to 110 m and at any relative roll, pitch, and/or yaw angle within ±10°. The VGS produces sets of distance and relative-orientation data at a repetition rate of 5 Hz. The software of this system also accommodates the simultaneous operation of two sensors for redundancy.
Use of a color CMOS camera as a colorimeter
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Redford, Gary R.
2006-08-01
In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.
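Characterizing such a colorimeter ultimately means mapping camera RGB to CIE XYZ, commonly via a least-squares 3x3 matrix fitted from patches of known color. The sketch below uses synthetic patches; the matrix values and the fitting procedure are assumptions for illustration, not the authors' method.

```python
import numpy as np

def fit_color_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M such that xyz ~= rgb @ M.T,
    fitted from measured patches (rows of linear camera RGB and reference XYZ)."""
    M_T, _, _, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M_T.T

def rgb_to_xyz(rgb, M):
    """Apply the characterization matrix to linear camera RGB values."""
    return rgb @ M.T
```

Because the sensor described has co-located pixels in all three primaries, every pixel yields a full RGB triple, so the matrix can be applied per pixel to produce an XYZ (and hence luminance/chromaticity) map of the display under test.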
NASA Technical Reports Server (NTRS)
2000-01-01
Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.
SSME leak detection feasibility investigation by utilization of infrared sensor technology
NASA Technical Reports Server (NTRS)
Shohadaee, Ahmad A.; Crawford, Roger A.
1990-01-01
This investigation examined the potential of using state-of-the-art infrared (IR) thermal imaging systems, combined with computers, digital image processing, and expert systems, for Space Shuttle Main Engine (SSME) propellant path leak detection as an early warning system of imminent engine failure. A low-cost laboratory experiment was devised and an experimental approach was established. The system was installed, checked out, and data were successfully acquired, demonstrating the proof of concept. The conclusion from this investigation is that both numerical and experimental results indicate that leak detection using infrared sensor technology is feasible for a rocket engine health monitoring system.
Characterizing the scientific potential of satellite sensors. [San Francisco, California
NASA Technical Reports Server (NTRS)
1984-01-01
Analytical and programming support is to be provided to characterize the potential of the LANDSAT thematic mapper (TM) digital imagery for scientific investigations in the Earth sciences and in terrestrial physics. In addition, technical support to define lower atmospheric and terrestrial surface experiments for the space station and technical support to the Research Optical Sensor (ROS) study scientist for advanced studies in remote sensing are to be provided. Eleven radiometric calibration and correction programs are described. Coherent noise and bright target saturation correction are discussed along with image processing on the LAS/VAX and Hp-300/IDIMS. An image of San Francisco, California from TM band 2 is presented.
Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility
NASA Astrophysics Data System (ADS)
Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.
2017-12-01
The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. Simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor setup based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment in dynamic and interactive mode at any time. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction has been based on low-cost and open-source solutions.
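At its simplest, DIC tracks a small reference patch from one frame to the next by searching for the displacement that best matches it. The integer-pixel sum-of-squared-differences search below is a minimal sketch (real DIC codes add subset shape functions and sub-pixel interpolation); all parameters are illustrative.

```python
import numpy as np

def dic_displacement(ref, cur, y, x, half=4, search=6):
    """Track the patch centred at (y, x) in `ref` into `cur` by minimizing
    the sum of squared differences over a +/-`search` pixel window."""
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_dv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1]
            ssd = np.sum((win - tpl) ** 2)
            if ssd < best:
                best, best_dv = ssd, (dy, dx)
    return best_dv
```

Dividing the tracked displacement by the frame interval gives the point velocity that the experiment relates to the physical process.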
Geometric correction and digital elevation extraction using multiple MTI datasets
Mercier, Jeffrey A.; Schowengerdt, Robert A.; Storey, James C.; Smith, Jody L.
2007-01-01
Digital Elevation Models (DEMs) are traditionally acquired from a stereo pair of aerial photographs sequentially captured by an airborne metric camera. Standard DEM extraction techniques can be naturally extended to satellite imagery, but the particular characteristics of satellite imaging can cause difficulties. The spacecraft ephemeris with respect to the ground site during image collects is the most important factor in the elevation extraction process. When the angle of separation between the stereo images is small, the extraction process typically produces measurements with low accuracy, while a large angle of separation can cause an excessive number of erroneous points in the DEM from occlusion of ground areas. The use of three or more images registered to the same ground area can potentially reduce these problems and improve the accuracy of the extracted DEM. The pointing capability of some sensors, such as the Multispectral Thermal Imager (MTI), allows for multiple collects of the same area from different perspectives. This functionality of MTI makes it a good candidate for the implementation of a DEM extraction algorithm using multiple images for improved accuracy. Evaluation of this capability and development of algorithms to geometrically model the MTI sensor and extract DEMs from multi-look MTI imagery are described in this paper. An RMS elevation error of 6.3 meters is achieved using 11 ground test points, while the MTI band has a 5-meter ground sample distance.
Analysis of digital images into energy-angular momentum modes.
Vicent, Luis Edgar; Wolf, Kurt Bernardo
2011-05-01
The measurement of continuous wave fields by a digital (pixellated) screen of sensors can be used to assess the quality of a beam by finding its formant modes. A generic continuous field F(x, y) sampled at an N × N Cartesian grid of point sensors on a plane yields a matrix of values F(q(x), q(y)), where (q(x), q(y)) are integer coordinates. When the approximate rotational symmetry of the input field is important, one may use the sampled Laguerre-Gauss functions, with radial and angular modes (n, m), to analyze them into their corresponding coefficients F(n, m) of energy and angular momentum (E-AM). The sampled E-AM modes span an N²-dimensional space, but are not orthogonal, except for parity. In this paper, we propose the properly orthonormal "Laguerre-Kravchuk" discrete functions Λ(n, m)(q(x), q(y)) as a convenient basis to analyze the sampled beams into their E-AM polar modes, and with them synthesize the input image exactly.
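The orthonormalization issue can be illustrated generically: given sampled, non-orthogonal modes, a QR factorization yields an orthonormal basis in which analysis and synthesis are exact. The sketch below uses random stand-in modes, not the actual Laguerre-Kravchuk functions defined in the paper.

```python
import numpy as np

def orthonormal_basis(modes):
    """Orthonormalize sampled (non-orthogonal) modes via QR.

    `modes` is an (N*N, K) matrix whose columns are flattened sampled
    mode patterns; the returned Q spans the same column space.
    """
    Q, _ = np.linalg.qr(modes)
    return Q

def analyze(field, Q):
    """Coefficients of the flattened field in the orthonormal basis."""
    return Q.T @ field

def synthesize(coeffs, Q):
    """Exact reconstruction of the field from its coefficients."""
    return Q @ coeffs
```

Orthonormality is what makes the round trip analyze-then-synthesize exact, which is precisely the property claimed for the Laguerre-Kravchuk basis.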
BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications
NASA Astrophysics Data System (ADS)
Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard
2013-06-01
Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-M™ imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through to video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers, and optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.
Kawahito, Shoji; Seo, Min-Woong
2016-11-06
This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels, based on an analysis of noise components in the signal readout chain from a pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the effect of noise reduction with respect to the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct image-contrast advantage for the gain of 128 (median noise: 0.29 e− rms) when compared with the CMS gain of two (2.4 e− rms) or 16 (1.1 e− rms).
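The noise scaling behind these numbers, averaging M samples reduces white read noise by 1/sqrt(M), can be checked with a quick simulation. This is a simplified statistical model (white noise only, no 1/f component), not the authors' circuit:

```python
import numpy as np

def cms_noise_std(read_noise, m, trials, rng):
    """Std of a simplified CMS output: average m noisy reads of the reset
    level and m of the signal level, then difference; repeated over trials."""
    reset = rng.normal(0.0, read_noise, (trials, m)).mean(axis=1)
    signal = rng.normal(0.0, read_noise, (trials, m)).mean(axis=1)
    return (signal - reset).std()
```

Comparing sampling numbers 2 and 128 the simulated ratio approaches sqrt(128/2) = 8, consistent with the trend of the reported 2.4 e− rms versus 0.29 e− rms (the measured values also contain noise mechanisms this model omits).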
Research on application of photoelectric rotary encoder in space optical remote sensor
NASA Astrophysics Data System (ADS)
Zheng, Jun; Qi, Shao-fan; Wang, Yuan-yuan; Zhang, Zhan-dong
2016-11-01
For a space optical remote sensor, especially a wide-swath detecting sensor, the focusing control system for the focal plane should be well designed to obtain the best image quality. The crucial part of this system is the measuring instrument. In previous implementations, a potentiometer, which is essentially a voltage divider, is usually introduced to report position in the feedback closed-loop control system. However, the performance of both electro-mechanical and digital potentiometers is limited in accuracy, temperature coefficient, and scale range. To better detect focal plane movement, this article presents a new measuring implementation with a photoelectric rotary encoder, which consists of a photoelectric conversion system and a signal processing system. In this novel focusing control system, the photoelectric conversion system is fixed on the main axis and transforms the angle information into an analog signal. After analog-to-digital conversion and data format processing of that signal by the signal processing system, the focusing control system receives a precise digital angular position from which the current position of the focal plane can be deduced. For use of space optical remote sensors in aerospace applications, the reliability design of the photoelectric rotary encoder system must be considered with the highest priority. This photoelectric digital precision angle measurement device is well suited to such a real-time control and dynamic measurement system because of its high resolution, high accuracy, long endurance, and ease of maintenance.
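An incremental photoelectric encoder is conventionally read out by decoding its two quadrature phases: the order in which the (A, B) pair changes gives the direction of rotation. A minimal behavioural decoder is sketched below; the article's actual encoder interface is not specified, so the transition table and API are illustrative.

```python
# Signed direction for each legal (A, B) phase transition.
_DIR = {(0, 0): {(0, 1): +1, (1, 0): -1},
        (0, 1): {(1, 1): +1, (0, 0): -1},
        (1, 1): {(1, 0): +1, (0, 1): -1},
        (1, 0): {(0, 0): +1, (1, 1): -1}}

def decode(samples):
    """Count signed quadrature transitions in a sequence of (A, B) samples."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        if cur != prev:
            count += _DIR[prev].get(cur, 0)  # illegal double-jumps ignored
    return count
```

The angle follows as `360.0 * count / counts_per_revolution`, which is the digital angular position the focusing loop consumes.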
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-01-01
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
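The up/down counter idea can be illustrated simply: counting down during the reset-level conversions and up during the signal-level conversions leaves the accumulated difference in the counter, so no separate subtractor is needed for digital CDS. A simplified behavioural sketch (the function name and sample codes are illustrative, not the chip's logic):

```python
def updown_cds(reset_codes, signal_codes):
    """Digital CDS with an up/down counter: count down for each reset-level
    ADC code and up for each signal-level code. With m samples of each,
    the residue is m * (signal - reset), the multiple-sampled difference."""
    count = 0
    for code in reset_codes:
        count -= code
    for code in signal_codes:
        count += code
    return count
```

Dividing the residue by the number of samplings yields the averaged, offset-free pixel value, with noise reduced by one over the square root of that number.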
Zhao, C; Konstantinidis, A C; Zheng, Y; Anaxagoras, T; Speller, R D; Kanicki, J
2015-12-07
Wafer-scale CMOS active pixel sensors (APSs) have been developed recently for x-ray imaging applications. The small pixel pitch and low noise are very promising properties for medical imaging applications such as digital breast tomosynthesis (DBT). In this work, we evaluated experimentally and through modeling the imaging properties of a 50 μm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). A modified cascaded system model was developed for CMOS APS x-ray detectors by taking into account the device's nonlinear signal and noise properties. The imaging properties such as modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) were extracted from both measurements and the nonlinear cascaded system analysis. The results show that the DynAMITe x-ray detector achieves a high spatial resolution of 10 mm⁻¹ and a DQE of around 0.5 at spatial frequencies <1 mm⁻¹. In addition, the modeling results were used to calculate the image signal-to-noise ratio (SNRi) of microcalcifications at various mean glandular doses (MGD). For an average breast (5 cm thickness, 50% glandular fraction), 165 μm microcalcifications can be distinguished at an MGD 27% lower than the clinical value (~1.3 mGy). To detect 100 μm microcalcifications, further optimizations of the CMOS APS x-ray detector, image acquisition geometry, and image reconstruction techniques should be considered.
NASA Astrophysics Data System (ADS)
Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.
2018-01-01
An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. Due to the large number of photosites, this provides an image sensor with very high sensitivity but also makes them prone to noise effects such as hot-pixels. Similar to earlier research with older models of smartphone, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median-filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. The method outlined in this research provides a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
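The procedure above (flag dark-frame pixels above the 9 DN threshold, then replace them with the local 7 × 7 median) can be sketched in pure NumPy. Array names are illustrative; the original analysis was performed on the smartphone's raw dark frames.

```python
import numpy as np

def median_filter(img, k=7):
    """k x k median filter via edge-padded sliding windows (pure NumPy)."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    stack = [pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return np.median(stack, axis=0)

def remove_hot_pixels(dark, threshold=9):
    """Replace pixels above the dark-noise threshold (in DN) with the
    local median, leaving normal pixels untouched."""
    med = median_filter(dark)
    out = dark.copy()
    hot = dark > threshold
    out[hot] = med[hot]
    return out
```

Because hot pixels are isolated, the median of the surrounding window is unaffected by them, which is why the filter removes them without blurring normal image statistics.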
Radiometric Characterization Results for the IKONOS, Quickbird, and OrbView-3 Sensor
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Aaron, David; Thome, Kurtis
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities better understand commercial imaging satellite properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, the NASA Applied Sciences Directorate (ASD) at Stennis Space Center established a commercial satellite imaging radiometric calibration team consisting of three independent groups: NASA ASD, the University of Arizona Remote Sensing Group, and South Dakota State University. Each group independently determined the absolute radiometric calibration coefficients of available high-spatial-resolution commercial 4-band multispectral products, in the visible through near-infrared spectrum, from GeoEye (formerly Space Imaging) IKONOS, DigitalGlobe QuickBird, and GeoEye (formerly ORBIMAGE) OrbView-3. Each team member employed some variant of a reflectance-based vicarious calibration approach, requiring ground-based measurements coincident with image acquisitions and radiative transfer calculations. Several study sites throughout the United States that covered a significant portion of each sensor's dynamic range were employed. Satellite at-sensor radiance values were compared to those estimated by each independent team member to evaluate each sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these sensors' absolute calibration values.
NASA Astrophysics Data System (ADS)
Bruschini, Claudio; Charbon, Edoardo; Veerappan, Chockalingam; Braga, Leo H. C.; Massari, Nicola; Perenzoni, Matteo; Gasparini, Leonardo; Stoppa, David; Walker, Richard; Erdogan, Ahmet; Henderson, Robert K.; East, Steve; Grant, Lindsay; Játékos, Balázs; Ujhelyi, Ferenc; Erdei, Gábor; Lörincz, Emöke; André, Luc; Maingault, Laurent; Jacolin, David; Verger, L.; Gros d'Aillon, Eric; Major, Peter; Papp, Zoltan; Nemeth, Gabor
2014-05-01
The SPADnet FP7 European project is aimed at a new generation of fully digital, scalable and networked photonic components to enable large-area image sensors, with the primary target of gamma-ray and coincidence detection in (Time-of-Flight) Positron Emission Tomography (PET). SPADnet relies on standard CMOS technology, therefore allowing for MRI compatibility. SPADnet innovates in several areas of PET systems, from optical coupling to single-photon sensor architectures, from intelligent ring networks to reconstruction algorithms. It is built around a natively digital, intelligent SPAD (Single-Photon Avalanche Diode)-based sensor device which comprises an array of 8×16 pixels, each composed of 4 mini-SiPMs with in situ time-to-digital conversion, a multi-ring network to filter, carry, and process data produced by the sensors at 2 Gbps, and a 130 nm CMOS process enabling mass production of photonic modules that are optically interfaced to scintillator crystals. A few tens of sensor devices are tightly abutted on a single PCB to form a so-called sensor tile, thanks to TSV (Through Silicon Via) connections to their backside (replacing conventional wire bonding). The sensor tile is in turn interfaced to an FPGA-based PCB on its back. The resulting photonic module acts as an autonomous sensing and computing unit, individually detecting gamma photons as well as thermal and Compton events. It determines in real time basic information for each scintillation event, such as exact time of arrival, position, and energy, and communicates it to its peers in the field of view. Coincidence detection therefore occurs directly in the ring itself, in a deferred and distributed manner to ensure scalability. The selected true coincidence events are then collected by a snooper module, from which they are transferred to an external reconstruction computer using Gigabit Ethernet.
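The essence of coincidence detection is pairing events from two modules whose timestamps fall within a common window. The two-pointer scan below is a generic centralized sketch, not SPADnet's distributed in-ring protocol; event lists and the window value are illustrative.

```python
def find_coincidences(events_a, events_b, window):
    """Pair gamma events from two detector modules whose timestamps differ
    by at most `window`. Both inputs are sorted timestamp lists; the scan
    is O(n + matches) because the left pointer never moves backwards."""
    pairs, j = [], 0
    for ta in events_a:
        while j < len(events_b) and events_b[j] < ta - window:
            j += 1                      # b[j] too old for any later ta too
        k = j
        while k < len(events_b) and events_b[k] <= ta + window:
            pairs.append((ta, events_b[k]))
            k += 1
    return pairs
```

In a PET ring the surviving pairs define lines of response; events with no partner in the window (randoms, scatters, thermal counts) are discarded.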
NASA Technical Reports Server (NTRS)
Chretien, Jean-Loup (Inventor); Lu, Edward T. (Inventor)
2005-01-01
A dynamic optical filtration system and method effectively blocks bright light sources without impairing view of the remainder of the scene. A sensor measures light intensity and position so that selected cells of a shading matrix may interrupt the view of the bright light source by a receptor. A beamsplitter may be used so that the sensor may be located away from the receptor. The shading matrix may also be replaced by a digital micromirror device, which selectively sends image data to the receptor.
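The core decision, which cells of the shading matrix to darken, can be sketched as a block-wise glare test on the sensor's intensity map. The cell geometry and threshold below are illustrative; the patented system maps sensor coordinates to matrix cells through its optics.

```python
import numpy as np

def shading_mask(intensity, limit, cell=4):
    """Return a boolean grid of shading cells to opacify: a cell is darkened
    when any sensor pixel under it exceeds the glare limit."""
    h, w = intensity.shape
    blocks = intensity[:h - h % cell, :w - w % cell]
    blocks = blocks.reshape(h // cell, cell, w // cell, cell)
    return blocks.max(axis=(1, 3)) > limit
```

Only the cells covering the bright source go opaque, so the rest of the scene reaches the receptor unattenuated, which is the selective-blocking behaviour the invention describes.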
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
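The first approach (compressing each band independently) can be illustrated with a single-level 2D Haar transform, the simplest discrete wavelet; the study's actual wavelet filters and quantize/entropy-code stages are not reproduced here.

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar wavelet transform (H and W even):
    averages/differences along columns, then along rows."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2
    hi = (x[:, 0::2] - x[:, 1::2]) / 2
    rows = np.hstack([lo, hi])
    lo2 = (rows[0::2] + rows[1::2]) / 2
    hi2 = (rows[0::2] - rows[1::2]) / 2
    return np.vstack([lo2, hi2])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    h, w = c.shape
    lo2, hi2 = c[:h // 2], c[h // 2:]
    rows = np.empty((h, w))
    rows[0::2], rows[1::2] = lo2 + hi2, lo2 - hi2
    lo, hi = rows[:, :w // 2], rows[:, w // 2:]
    x = np.empty((h, w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x
```

Lossy compression follows by zeroing coefficients below a threshold before the inverse; applying the same averaging/differencing step along the spectral axis as well gives the three-dimensional variant that exploits inter-band redundancy.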
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Recently, digital cameras have been advancing rapidly. However, the shot image differs from the sight image perceived when the same scenery is seen with the naked eye. There are blown-out highlights and crushed blacks in images that photograph scenery with a wide dynamic range, problems that hardly arise in the sight image; these are contributing causes of the difference between the shot image and the sight image. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the shot image is narrower than that of the sight image. In order to solve the problem, we propose an automatic method to decide an effective exposure range by superposition of edges. We integrate multi-step exposure images using the method. In addition, we try to erase pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.
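Integrating multi-step exposures can be sketched as a weighted fusion that favours well-exposed mid-tones, so saturated highlights and crushed blacks contribute little. This is a generic exposure-fusion weighting, not the edge-superposition method the paper proposes; the Gaussian weight and its width are assumptions.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse registered multi-exposure images (values in [0, 1]) with
    per-pixel Gaussian weights centred on mid-grey 0.5."""
    stack = np.stack(images)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    return (w * stack).sum(axis=0) / w.sum(axis=0)
```

A near-saturated exposure therefore barely shifts the result away from a well-exposed one, which is the behaviour that suppresses blown-out highlights in the fused image.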
Yusof, Mohd Yusmiaidil Putera Mohd; Rahman, Nur Liyana Abdul; Asri, Amiza Aqiela Ahmad; Othman, Noor Ilyani; Wan Mokhtar, Ilham
2017-12-01
This study was performed to quantify the repeat rate of imaging acquisitions based on different clinical examinations, and to assess the prevalence of error types in intraoral bitewing and periapical imaging using a digital complementary metal-oxide-semiconductor (CMOS) intraoral sensor. A total of 8,030 intraoral images were retrospectively collected from 3 groups of undergraduate clinical dental students. The type of examination, stage of the procedure, and reasons for repetition were analysed and recorded. The repeat rate was calculated as the total number of repeated images divided by the total number of examinations. The weighted Cohen's kappa for inter- and intra-observer agreement was used after calibration and prior to image analysis. The overall repeat rate on intraoral periapical images was 34.4%. A total of 1,978 repeated periapical images were from endodontic assessment, which included working length estimation (WLE), trial gutta-percha (tGP), obturation, and removal of gutta-percha (rGP). In the endodontic imaging, the highest repeat rate was from WLE (51.9%) followed by tGP (48.5%), obturation (42.2%), and rGP (35.6%). In bitewing images, the repeat rate was 15.1% and poor angulation was identified as the most common cause of error. A substantial level of intra- and interobserver agreement was achieved. The repeat rates in this study were relatively high, especially for certain clinical procedures, warranting training in optimization techniques and radiation protection. Repeat analysis should be performed from time to time to enhance quality assurance and hence deliver high-quality health services to patients.
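The repeat rate defined in the study (repeated images divided by total examinations) is straightforward to compute; the counts below are hypothetical, not the study's raw data:

```python
def repeat_rate(repeated, total_examinations):
    """Repeat rate as defined in the study:
    repeated images / total examinations, expressed as a percentage."""
    return 100.0 * repeated / total_examinations

# Hypothetical counts per endodontic stage (not the study's raw data).
stages = {"WLE": (519, 1000), "tGP": (485, 1000)}
for name, (rep, total) in stages.items():
    print(f"{name}: {repeat_rate(rep, total):.1f}%")
```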
FITPix COMBO—Timepix detector with integrated analog signal spectrometric readout
NASA Astrophysics Data System (ADS)
Holik, M.; Kraus, V.; Georgiev, V.; Granja, C.
2016-02-01
The hybrid semiconductor pixel detector Timepix has proven a powerful tool in radiation detection and imaging. Energy loss and directional sensitivity as well as particle-type resolving power are possible through high-resolution particle tracking and per-pixel energy and quantum-counting capability. The spectrometric resolving power of the detector can be further enhanced by analyzing the analog signal of the detector's common sensor electrode (also called the back-side pulse). In this work we present a new compact readout interface, based on the FITPix readout architecture, extended with integrated analog electronics for the detector's common sensor signal. Integrating simultaneous operation of the digital per-pixel information with the common-sensor analog pulse processing circuitry in one device enhances the detector capabilities and opens new applications. Thanks to noise suppression and built-in electromagnetic interference shielding, the common hardware platform enables parallel analog spectroscopy of the back-side pulse signal with full operation and read-out of the pixelated digital part; the noise level is 600 keV and the spectrometric resolution is around 100 keV for 5.5 MeV alpha particles. Self-triggering is implemented with a delay of a few tens of ns, making use of an adjustable low-energy threshold on the analog signal amplitude. The digital full frame can thus be triggered and recorded together with the common sensor analog signal. The waveform, sampled at 100 MHz, can be recorded in an adjustable time window that includes time prior to the trigger. An integrated software tool provides control, on-line display, and read-out of both analog and digital channels; the pixelated digital record and the analog waveform are synchronized and written out with a common time stamp.
EOS image data processing system definition study
NASA Technical Reports Server (NTRS)
Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.
1973-01-01
The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). It also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.
Adaptive recovery of motion blur point spread function from differently exposed images
NASA Astrophysics Data System (ADS)
Albu, Felix; Florea, Corneliu; Drîmbarean, Alexandru; Zamfir, Adrian
2010-01-01
Motion of the digital camera during image capture is a major factor degrading image quality, and many methods for camera motion removal have been developed. Central to all of them is correct recovery of what is known as the Point Spread Function (PSF). A popular technique estimates the PSF from a pair of gyroscopic sensors that measure the hand motion. However, errors caused by the loss of the translational component of the movement, or by the limited precision of the gyro measurements, prevent a good-quality restored image. To compensate, we propose a method that begins with a PSF estimated from two gyro sensors and refines it adaptively using an under-exposed image together with the blurred image. The luminance of the under-exposed image is equalized to that of the blurred image, and an initial PSF is generated from the output of the two gyro sensors. The PSF coefficients are then updated using 2D Least Mean Square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images. The refined PSF is used to deblur the image with known deblurring methods. Our results show that the proposed method yields superior PSF support and coefficient estimation, and the quality of the restored image is improved compared with the gyro-only approach or with blind image deconvolution.
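A rough sketch of the LMS refinement step: treating the equalized under-exposed image as a sharp reference, the PSF coefficients are nudged along the prediction-error gradient at each grid point. The toy kernel, step size, and single-scale loop are illustrative assumptions (the paper uses a coarse-to-fine grid of selected points):

```python
import numpy as np

def lms_refine_psf(sharp, blurred, psf, mu=0.01, iters=200):
    """Refine a PSF estimate with a 2D LMS loop: predict the blurred
    image by convolving the sharp (under-exposed, luminance-equalized)
    image with the current PSF, then nudge the PSF coefficients
    along the error gradient."""
    k = psf.shape[0]
    rows = sharp.shape[0] - k + 1
    cols = sharp.shape[1] - k + 1
    for _ in range(iters):
        for i in range(rows):
            for j in range(cols):
                patch = sharp[i:i + k, j:j + k]
                err = blurred[i, j] - np.sum(patch * psf)
                psf += mu * err * patch          # LMS coefficient update
    return psf

rng = np.random.default_rng(1)
sharp = rng.random((12, 12))
true_psf = np.array([[0.0, 0.2, 0.0],
                     [0.2, 0.2, 0.2],
                     [0.0, 0.2, 0.0]])           # toy blur kernel
k = 3
# "Blurred" observation: valid convolution of sharp with the true PSF.
blurred = np.array([[np.sum(sharp[i:i + k, j:j + k] * true_psf)
                     for j in range(10)] for i in range(10)])
est = lms_refine_psf(sharp, blurred, np.full((3, 3), 1.0 / 9), mu=0.05, iters=300)
print(np.abs(est - true_psf).max())  # small after adaptation
```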
Hall, Elise M.; Thurow, Brian S.; Guildenbecher, Daniel R.
2016-08-08
Digital in-line holography (DIH) and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a comparison of the two methods by applying plenoptic imaging to experimental configurations that have previously been investigated with DIH. These experiments include tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and DIH successfully quantify the 3D nature of these particle fields, including the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolution. However, collimated and coherent illumination makes holography susceptible to image distortion through index-of-refraction gradients, as demonstrated in the shotgun experiments. In contrast, plenoptic imaging allows a simpler experimental configuration and, because it uses diffuse, white-light illumination, is less susceptible to image distortion in those experiments.
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution color image. In this paper, a new algorithm based on edge adaptivity and different weighting factors is proposed. Our method effectively suppresses undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
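A baseline for the interpolation step being improved upon: bilinear recovery of the green channel on an RGGB mosaic, which averages the four green neighbors with equal weights. An edge-adaptive, weighted-direction method like the one proposed would instead weight those neighbors by local gradient strength. The RGGB layout and flat test scene are illustrative assumptions:

```python
import numpy as np

def interp_green(raw):
    """Estimate the green channel of an RGGB Bayer mosaic: keep the
    sampled green pixels, and fill red/blue locations with the average
    of the four green neighbors (simple, non-adaptive baseline)."""
    h, w = raw.shape
    green = raw.copy().astype(float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == 0:        # R or B site in RGGB: green missing
                green[i, j] = (raw[i - 1, j] + raw[i + 1, j] +
                               raw[i, j - 1] + raw[i, j + 1]) / 4.0
    return green

# Flat test scene: red = 30, blue = 50, green = 10 everywhere.
raw = np.zeros((6, 6))
raw[::2, ::2] = 30                                    # red samples
raw[1::2, 1::2] = 50                                  # blue samples
raw[(np.indices((6, 6)).sum(axis=0) % 2) == 1] = 10   # green samples
g = interp_green(raw)
print(g[2, 2])  # interior red site: recovered green = 10.0
```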
Geocoding and stereo display of tropical forest multisensor datasets
NASA Technical Reports Server (NTRS)
Welch, R.; Jordan, T. R.; Luvall, J. C.
1990-01-01
Concern about the future of tropical forests has led to a demand for geocoded multisensor databases that can be used to assess forest structure, deforestation, thermal response, evapotranspiration, and other parameters linked to climate change. In response to studies being conducted at the Braulino Carrillo National Park, Costa Rica, digital satellite and aircraft images recorded by Landsat TM, SPOT HRV, Thermal Infrared Multispectral Scanner, and Calibrated Airborne Multispectral Scanner sensors were placed in register using the Landsat TM image as the reference map. Despite problems caused by relief, multitemporal datasets, and geometric distortions in the aircraft images, registration was accomplished to within ±20 m (±1 data pixel). A digital elevation model constructed from a multisensor Landsat TM/SPOT stereopair proved useful for generating perspective views of the rugged, forested terrain.
Field-portable lensfree tomographic microscope†
Isikman, Serhan O.; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-01-01
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (~20 mm3) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ~110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ~50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. PMID:21573311
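The pixel super-resolution step can be illustrated with a simple shift-and-add scheme: sub-pixel-shifted low-resolution frames are distributed onto a finer grid. This is a baseline sketch under idealized assumptions (known integer shifts on the high-resolution grid, complete coverage), not the holographic SR reconstruction used in the instrument:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Pixel super-resolution sketch: distribute sub-pixel-shifted
    low-resolution frames onto a grid 'factor' times finer and average
    the contributions (shift-and-add baseline)."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # (dy, dx) are this frame's shifts in high-resolution pixels
        hi[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    count[count == 0] = 1
    return hi / count

# Toy high-resolution scene sampled at 4 sub-pixel offsets (factor 2).
truth = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in shifts]
recon = shift_and_add_sr(frames, shifts, factor=2)
print(np.allclose(recon, truth))  # complete shift coverage: True
```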
NASA Technical Reports Server (NTRS)
1973-01-01
Topics discussed include the management and processing of earth resources information, special-purpose processors for the machine processing of remotely sensed data, digital image registration by a mathematical programming technique, the use of remote-sensor data in land classification (in particular, the use of ERTS-1 multispectral scanning data), the use of remote-sensor data in geometrical transformations and mapping, earth resource measurement with the aid of ERTS-1 multispectral scanning data, the use of remote-sensor data in the classification of turbidity levels in coastal zones and in the identification of ecological anomalies, the problem of feature selection and the classification of objects in multispectral images, the estimation of proportions of certain categories of objects, and a number of special systems and techniques. Individual items are announced in this issue.
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor's Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to this procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation-method model with a quadratic polynomial for the retrieved spectral radiance; we show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
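The core of the retrieval is a weighted least-squares fit of radiance against digital number with a quadratic polynomial. A minimal sketch on synthetic, noiseless data (the real procedure also propagates noise in both variables and model error into the weights):

```python
import numpy as np

def weighted_quadratic_fit(dn, L, w):
    """Weighted least-squares fit of L = c0 + c1*dn + c2*dn^2:
    scale rows by sqrt(w) and solve the resulting ordinary LS problem."""
    A = np.vander(dn, 3, increasing=True)      # columns: 1, dn, dn^2
    sw = np.sqrt(w)
    c, *_ = np.linalg.lstsq(A * sw[:, None], L * sw, rcond=None)
    return c

dn = np.linspace(1.0, 100.0, 25)               # synthetic digital numbers
true_c = np.array([5.0, 0.8, 0.002])           # synthetic response coefficients
L = true_c[0] + true_c[1] * dn + true_c[2] * dn**2
w = 1.0 / (1.0 + 0.05 * dn)                    # down-weight noisier high counts
c = weighted_quadratic_fit(dn, L, w)
print(np.allclose(c, true_c))  # noiseless synthetic data: True
```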
SSC Geopositional Assessment of the Advanced Wide Field Sensor
NASA Technical Reports Server (NTRS)
Ross, Kenton
2006-01-01
The geopositional accuracy of the standard geocorrected product from the Advanced Wide Field Sensor (AWiFS) was evaluated using digital orthophoto quarter quadrangles and other reference sources of similar accuracy. Images were analyzed from summer 2004 through spring 2005. Forty to fifty check points were collected manually per scene and analyzed to determine overall circular error, estimates of horizontal bias, and other systematic errors. Measured errors were somewhat higher than the specifications for the data, but they were consistent with the analysis of the distributing vendor.
IR CMOS: near infrared enhanced digital imaging (Presentation Recording)
NASA Astrophysics Data System (ADS)
Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani
2015-08-01
SiOnyx has demonstrated imaging at light levels below 1 mLux (moonless starlight) at video frame rates with a 720P CMOS image sensor in a compact, low-latency camera. Low-light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared and state-of-the-art low-noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast-laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This yields a demonstrated 10-fold improvement in near-infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size, 5.6 and 10 µm; array size, 720P/1.3 Mpix; frame rate, 60 Hz; read noise, 2 e-/pixel; spectral sensitivity, 400 to 1200 nm (with 10x QE at 1064 nm); daytime imaging, color (Bayer pattern); nighttime imaging, moonless starlight conditions; 1064 nm laser imaging, daytime out to 2 km.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3, and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high dynamic range (HDR) imaging is based on taking two or more pictures of the same scene at different exposures. However, because of the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. To enable HDR imaging in push-broom remote sensing, this paper proposes a method that generates HDR images without redundant image sensors or optical components. Specifically, it adopts an area-array CMOS (complementary metal oxide semiconductor) sensor with digital-domain time-delay-integration (DTDI) technology, instead of additional rows of image sensors, to take more than one picture at different exposures. A new HDR image is then obtained by fusing the two original images with a simple algorithm. In experiments, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is shown to be effective and has potential in other imaging applications where there is relative motion between camera and scene.
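The quoted 26.02 dB gain corresponds to a factor of 20 in signal ratio, since dynamic range in decibels is 20*log10(max/min). A quick arithmetic check (the 20x figure is derived here, not stated in the paper):

```python
import math

def dr_db(max_signal, min_signal):
    """Dynamic range in decibels: 20*log10(max/min)."""
    return 20.0 * math.log10(max_signal / min_signal)

# A 26.02 dB increase corresponds to a ~20x wider signal ratio:
print(round(dr_db(20.0, 1.0), 2))  # 26.02
```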
Method and apparatus for optical encoding with compressible imaging
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2006-01-01
The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.
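The three components of conversion time combine reciprocally into the conversion rate; a trivial illustration with hypothetical timing numbers (not figures from the patent):

```python
def conversion_rate_hz(t_exposure_s, t_readout_s, t_processing_s):
    """Encoder conversion rate: the reciprocal of the summed exposure,
    image readout, and image processing times."""
    return 1.0 / (t_exposure_s + t_readout_s + t_processing_s)

# Illustrative budget: 10 us exposure + 50 us readout + 40 us processing.
print(round(conversion_rate_hz(10e-6, 50e-6, 40e-6)))  # 10000 conversions/s
```

Minimizing any of the three terms raises the rate, which is why the design attacks exposure, readout, and processing time together.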
Fiber optic spectroscopic digital imaging sensor and method for flame properties monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelepouga, Serguei A; Rue, David M; Saveliev, Alexei V
2011-03-15
A system for real-time monitoring of flame properties in combustors and gasifiers which includes an imaging fiber optic bundle having a light receiving end and a light output end, and a spectroscopic imaging system operably connected with the light output end of the imaging fiber optic bundle. Light received by the light receiving end is focused by a wall disposed between the light receiving end of the fiber optic bundle and a light source; the wall forms a pinhole opening aligned with the light receiving end.
FUJIFILM X10 white orbs and DeOrbIt
NASA Astrophysics Data System (ADS)
Dietz, Henry Gordon
2013-01-01
The FUJIFILM X10 is a high-end enthusiast compact digital camera with an unusual sensor design. Unfortunately, upon its Fall 2011 release, the camera quickly became infamous for the uniquely disturbing "white orbs" that often appeared in areas where the sensor was saturated. FUJIFILM's first attempt at a fix was firmware released on February 25, 2012, but it had little effect. In April 2012, a sensor replacement essentially solved the problem. This paper explores the "white orb" phenomenon in detail. After FUJIFILM's firmware fix failed, the author created a post-processing tool that can automatically repair existing images. DeOrbIt was released as a free tool on March 7, 2012. To better understand the problem and how to fix it, the WWW form version of the tool logs images, processing parameters, and evaluations by users. The paper describes the technical problem, the novel computational photography methods used by DeOrbIt to repair affected images, and the public perceptions revealed by this experiment.
NASA Technical Reports Server (NTRS)
1975-01-01
A report is presented on a preliminary design of a Synthetic Array Radar (SAR) intended for experimental use with the space shuttle program. The radar is called the Earth Resources Shuttle Imaging Radar (ERSIR). Its primary purpose is to determine the usefulness of SAR in monitoring and managing earth resources. The design of the ERSIR, along with tradeoffs made during its evolution, is discussed. The ERSIR consists of a flight sensor for collecting the raw radar data and a ground sensor used both for reducing these radar data to images and for extracting earth resources information from them. The flight sensor consists of two high-powered coherent pulse radars, one operating at L-band and the other at X-band. Radar data, recorded on tape, can either be transmitted via a digital data link to a ground terminal, or the tape can be delivered to the ground station after the shuttle lands. A description of the data processing equipment and display devices is given.
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The methods can also be used to obtain multiple integration times per pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, high-speed, high-resolution analog-to-digital conversion can be implemented.
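One common way to merge multiple integration times into a linear wide-dynamic-range output is to keep, per pixel, the longest integration that did not saturate and rescale it to a common exposure. This is a generic sketch of that idea, not the patent's specific column-parallel readout scheme; the 10-bit full scale and flux values are assumptions:

```python
import numpy as np

def merge_integrations(samples, t_int, full_scale=1023):
    """Per-pixel merge of multiple integration times: keep the longest
    integration that did not saturate and rescale its count to the
    longest integration time, giving a linear wide-DR output."""
    t_max = max(t_int)
    out = np.zeros_like(samples[0], dtype=float)
    for s, t in sorted(zip(samples, t_int), key=lambda p: p[1]):
        valid = s < full_scale                  # not saturated at this time
        out[valid] = s[valid] * (t_max / t)     # rescale to common exposure
    return out

flux = np.array([2.0, 50.0, 5000.0])            # counts per unit time
t_int = [1.0, 0.1]                              # long and short integrations
samples = [np.minimum(flux * t, 1023) for t in t_int]   # 10-bit pixel clips
merged = merge_integrations(samples, t_int)
print(merged)  # bright pixel recovered from the short integration
```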
NASA Astrophysics Data System (ADS)
Goiffon, Vincent; Rolando, Sébastien; Corbière, Franck; Rizzolo, Serena; Chabane, Aziouz; Girard, Sylvain; Baer, Jérémy; Estribeau, Magali; Magnan, Pierre; Paillet, Philippe; Van Uffelen, Marco; Mont Casellas, Laura; Scott, Robin; Gaillardin, Marc; Marcandella, Claude; Marcelot, Olivier; Allanche, Timothé
2017-01-01
The Total Ionizing Dose (TID) hardness of digital color Camera-on-a-Chip (CoC) building blocks is explored in the Multi-MGy range using 60Co gamma-ray irradiations. The performances of the following CoC subcomponents are studied: radiation hardened (RH) pixel and photodiode designs, RH readout chain, Color Filter Arrays (CFA) and column RH Analog-to-Digital Converters (ADC). Several radiation hardness improvements are reported (on the readout chain and on dark current). CFAs and ADCs degradations appear to be very weak at the maximum TID of 6 MGy(SiO2), 600 Mrad. In the end, this study demonstrates the feasibility of a MGy rad-hard CMOS color digital camera-on-a-chip, illustrated by a color image captured after 6 MGy(SiO2) with no obvious degradation. An original dark current reduction mechanism in irradiated CMOS Image Sensors is also reported and discussed.
Irdis: A Digital Scene Storage And Processing System For Hardware-In-The-Loop Missile Testing
NASA Astrophysics Data System (ADS)
Sedlar, Michael F.; Griffith, Jerry A.
1988-07-01
This paper describes the implementation of a Seeker Evaluation and Test Simulation (SETS) Facility at Eglin Air Force Base. This facility will be used to evaluate imaging infrared (IIR) guided weapon systems by performing various types of laboratory tests. One such test is termed Hardware-in-the-Loop (HIL) simulation (Figure 1), in which the actual flight of a weapon system is simulated as closely as possible in the laboratory. As shown in the figure, there are four major elements in the HIL test environment: the weapon/sensor combination, an aerodynamic simulator, an imagery controller, and an infrared imagery system. The paper concentrates on the approaches and methodologies used in the imagery controller and infrared imaging system elements for generating scene information. For procurement purposes, these two elements have been combined into an Infrared Digital Injection System (IRDIS), which provides scene storage, processing, and an output interface to drive a radiometric display device or to directly inject digital video into the weapon system (bypassing the sensor). The paper describes in detail how standard and custom image processing functions have been combined with off-the-shelf mass storage and computing devices to produce a system which provides high sample rates (greater than 90 Hz), a large terrain database, high weapon rates of change, and multiple independent targets. A photo-based approach has been used to maximize terrain and target fidelity, thus providing a rich and complex scene for weapon/tracker evaluation.
NASA Astrophysics Data System (ADS)
Konstantinidis, A.; Anaxagoras, T.; Esposito, M.; Allinson, N.; Speller, R.
2012-03-01
X-ray diffraction studies are used to identify specific materials, and several laboratory-based x-ray diffraction studies have addressed breast cancer diagnosis. Ideally, a large-area, low-noise, linear, wide-dynamic-range digital x-ray detector is required to perform x-ray diffraction measurements. Recently, digital detectors based on Complementary Metal-Oxide-Semiconductor (CMOS) Active Pixel Sensor (APS) technology have been used in x-ray diffraction studies. Two APS detectors, Vanilla and the Large Area Sensor (LAS), were developed by the Multidimensional Integrated Intelligent Imaging (MI-3) consortium to cover a range of scientific applications including x-ray diffraction. The MI-3 Plus consortium developed a novel large-area APS, named Dynamically Adjustable Medical Imaging Technology (DynAMITe), combining the key characteristics of Vanilla and LAS with a number of extra features. The active area (12.8 × 13.1 cm2) of DynAMITe enables angle-dispersive x-ray diffraction (ADXRD). The current study demonstrates the feasibility of using DynAMITe for breast cancer diagnosis by identifying six breast-equivalent plastics. Further work will optimize the system to perform ADXRD for identification of suspicious areas of breast tissue following a conventional mammogram taken with the same sensor.
Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging
Newman, Justin A.; Zhang, Shijie; Sullivan, Shane Z.; ...
2016-05-16
Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video rate (15 frames s–1). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. Lastly, these capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension.
Guiding synchrotron X-ray diffraction by multimodal video-rate protein crystal imaging
Newman, Justin A.; Zhang, Shijie; Sullivan, Shane Z.; Dow, Ximeng Y.; Becker, Michael; Sheedlo, Michael J.; Stepanov, Sergey; Carlsen, Mark S.; Everly, R. Michael; Das, Chittaranjan; Fischetti, Robert F.; Simpson, Garth J.
2016-01-01
Synchronous digitization, in which an optical sensor is probed synchronously with the firing of an ultrafast laser, was integrated into an optical imaging station for macromolecular crystal positioning prior to synchrotron X-ray diffraction. Using the synchronous digitization instrument, second-harmonic generation, two-photon-excited fluorescence and bright field by laser transmittance were all acquired simultaneously with perfect image registry at up to video-rate (15 frames s−1). A simple change in the incident wavelength enabled simultaneous imaging by two-photon-excited ultraviolet fluorescence, one-photon-excited visible fluorescence and laser transmittance. Development of an analytical model for the signal-to-noise enhancement afforded by synchronous digitization suggests a 15.6-fold improvement over previous photon-counting techniques. This improvement in turn allowed acquisition on nearly an order of magnitude more pixels than the preceding generation of instrumentation and reductions of well over an order of magnitude in image acquisition times. These improvements have allowed detection of protein crystals on the order of 1 µm in thickness under cryogenic conditions in the beamline. These capabilities are well suited to support serial crystallography of crystals approaching 1 µm or less in dimension. PMID:27359145
Multi-sensor fusion over the World Trade Center disaster site
NASA Astrophysics Data System (ADS)
Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey
2002-09-01
The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information about the rapidly changing disaster site. Because of the dynamic nature of the site, the data was acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of the collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.
NASA Astrophysics Data System (ADS)
Lukes, George E.; Cain, Joel M.
1996-02-01
The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model application programmer's interface (API) to ensure that a common controlled imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefiting from automated analysis of mapping, earth resources and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.
Nobukawa, Teruyoshi; Nomura, Takanori
2016-09-05
A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distribution using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.
Chiang, Kai-Wei; Chang, Hsiu-Wen; Li, Chia-Yuan; Huang, Yun-Wen
2009-01-01
Digital mobile mapping, which integrates digital imaging with direct geo-referencing, has developed rapidly over the past fifteen years. Direct geo-referencing is the determination of the time-variable position and orientation parameters for a mobile digital imager. The most common technologies used for this purpose today are satellite positioning using the Global Positioning System (GPS) and an Inertial Navigation System (INS) using an Inertial Measurement Unit (IMU). They are usually integrated in such a way that the GPS receiver is the main position sensor, while the IMU is the main orientation sensor. The Kalman Filter (KF) is considered the optimal estimation tool for real-time INS/GPS integrated kinematic position and orientation determination. An intelligent hybrid scheme consisting of an Artificial Neural Network (ANN) and a KF has been proposed in previous studies to overcome the limitations of the KF and to improve the performance of the INS/GPS integrated system. However, the accuracy requirements of general mobile mapping applications cannot be achieved easily, even with the ANN-KF scheme. Therefore, this study proposes an intelligent position and orientation determination scheme that embeds an ANN within a conventional Rauch-Tung-Striebel (RTS) smoother to improve the overall accuracy of a MEMS INS/GPS integrated system in post-mission mode. By combining the Micro Electro Mechanical Systems (MEMS) INS/GPS integrated system and the intelligent ANN-RTS smoother scheme proposed in this study, a cheaper but still reasonably accurate position and orientation determination scheme can be anticipated. PMID:22574034
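The filter-then-smooth structure at the heart of this scheme can be sketched with a toy 1-D constant-velocity model. This is a generic Kalman filter followed by a Rauch-Tung-Striebel backward pass, not the authors' ANN-augmented MEMS INS/GPS implementation; the model matrices and noise levels are illustrative:

```python
import numpy as np

def kf_rts(zs, dt=1.0, q=0.01, r=1.0):
    """Forward Kalman filter + Rauch-Tung-Striebel smoother for a toy
    1-D constant-velocity model (state = [position, velocity])."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    xs, Ps, xps, Pps = [], [], [], []
    for z in zs:                            # forward (filtering) pass
        xp, Pp = F @ x, F @ P @ F.T + Q     # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + K @ (np.atleast_1d(z) - H @ xp)
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    x_s, P_s, out = xs[-1], Ps[-1], [xs[-1]]
    for k in range(len(zs) - 2, -1, -1):    # backward (smoothing) pass
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        x_s = xs[k] + C @ (x_s - xps[k + 1])
        P_s = Ps[k] + C @ (P_s - Pps[k + 1]) @ C.T
        out.append(x_s)
    return np.array(out[::-1])              # smoothed states, oldest first
```

The backward pass reuses the forward pass's predicted and filtered states, which is why post-mission (offline) smoothing can outperform the real-time filter.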
Volumetric Forest Change Detection Through Vhr Satellite Imagery
NASA Astrophysics Data System (ADS)
Akca, Devrim; Stylianidis, Efstratios; Smagas, Konstantinos; Hofer, Martin; Poli, Daniela; Gruen, Armin; Sanchez Martin, Victor; Altan, Orhan; Walli, Andreas; Jimeno, Elisa; Garcia, Alejandro
2016-06-01
Quick and economical ways of detecting planimetric and volumetric changes of forest areas are in high demand. A research platform called FORSAT (a satellite processing platform for high-resolution forest assessment) was developed for the extraction of 3D geometric information from VHR (very-high-resolution) imagery from satellite optical sensors and for automatic change detection. This 3D forest information solution was developed during a Eurostars project. FORSAT includes two main units. The first is dedicated to the geometric and radiometric processing of satellite optical imagery and 2D/3D information extraction. This includes: image radiometric pre-processing, image and ground point measurement, improvement of geometric sensor orientation, quasi-epipolar image generation for stereo measurements, digital surface model (DSM) extraction using a precise and robust image matching approach specially designed for VHR satellite imagery, generation of orthoimages, and 3D measurements in single images using mono-plotting as well as in stereo images and triplets. FORSAT supports most of the VHR optical imagery commonly used for civil applications: IKONOS, OrbView-3, SPOT-5 HRS, SPOT-5 HRG, QuickBird, GeoEye-1, WorldView-1/2, Pléiades 1A/1B, SPOT 6/7, and sensors of similar type to be expected in the future. The second unit of FORSAT is dedicated to 3D surface comparison for change detection. It allows users to import digital elevation models (DEMs), align them using an advanced 3D surface matching approach and calculate the 3D differences and volume changes between epochs. To this end our 3D surface matching method LS3D is used. FORSAT is a single-source and flexible forest information solution with a very competitive price/quality ratio, allowing expert and non-expert remote sensing users to monitor forests in three and four dimensions from VHR optical imagery for many forest information needs.
The capacity and benefits of FORSAT have been tested in six case studies located in Austria, Cyprus, Spain, Switzerland and Turkey, using optical data from different sensors and with the purpose of monitoring forests with different geometric characteristics. The validation run on the Cyprus dataset is reported and commented on.
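The volume-change step of the second unit reduces to cell-wise DEM differencing once the surfaces are aligned. A minimal sketch, assuming two already co-registered DEMs on the same grid (the LS3D surface alignment itself is a separate, more involved step):

```python
import numpy as np

def volume_change(dem_t0, dem_t1, cell_size):
    """Volumetric change between two co-registered DEMs on the same grid.
    Positive = material/height gain, negative = loss; units follow the DEM
    heights and cell_size (e.g. metres -> cubic metres)."""
    diff = np.asarray(dem_t1, float) - np.asarray(dem_t0, float)
    cell_area = cell_size ** 2
    gain = diff[diff > 0].sum() * cell_area   # volume added between epochs
    loss = diff[diff < 0].sum() * cell_area   # volume removed (negative)
    return gain, loss, gain + loss            # net volumetric change
```

For example, a cell that rose by 1 m on a 2 m grid contributes 4 m³ of gain.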
Radiometric and Spatial Characterization of High-Spatial Resolution Sensors
NASA Technical Reports Server (NTRS)
Thome, Kurtis; Zanoni, Vicki (Technical Monitor)
2002-01-01
The development and improvement of commercial hyperspatial sensors in recent years has increased the breadth of information that can be retrieved from spaceborne and airborne imagery. NASA, through its Scientific Data Purchases, has successfully provided such data sets to its user community. A key element of the usefulness of these data is an understanding of the radiometric and spatial response quality of the imagery. This proposal seeks funding to examine the absolute radiometric calibration of the Ikonos sensor operated by Space Imaging and the recently launched QuickBird sensor from DigitalGlobe. In addition, we propose to evaluate the spatial response of the two sensors. The proposed methods rely on well-understood, ground-based targets that have been used by the University of Arizona for more than a decade.
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well-corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
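The spatial/angular factorization described above amounts to rearranging the raw sensor image into a 4D array. A minimal sketch, assuming square lenslets exactly aligned to the pixel grid with no rotation or vignetting correction:

```python
import numpy as np

def to_light_field(raw, lenslet_px):
    """Rearrange a raw plenoptic sensor image into L(s, t, u, v):
    (s, t) index the lenslet (spatial sample), (u, v) index the pixel
    under that lenslet (angular sample)."""
    H, W = raw.shape
    S, T = H // lenslet_px, W // lenslet_px   # number of lenslets per axis
    lf = raw[:S * lenslet_px, :T * lenslet_px]
    lf = lf.reshape(S, lenslet_px, T, lenslet_px).transpose(0, 2, 1, 3)
    return lf                                  # shape (S, T, u, v)
```

Each `lf[s, t]` is then the little exit-pupil image formed under lenslet (s, t).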
A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies
NASA Astrophysics Data System (ADS)
Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.
2015-12-01
Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming, hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data-logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera is the result of several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field-tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data are collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems.
Imagery can be viewed and analyzed in RGB, HSV, and l*a*b color space. Users can select a spectral index, which have been derived from published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
Thermal infrared reference sources fabricated from low-cost components and materials
NASA Astrophysics Data System (ADS)
Hovland, Harald; Skauli, Torbjørn
2018-04-01
Mass markets, including mobile phones and automotive sensors, drive rapid development of imaging technologies toward high-performance, low-cost sensors, even for the thermal infrared. Good infrared calibration blackbody sources have remained relatively costly, however. Here we demonstrate how to make low-cost reference sources, making quantitative infrared radiometry more accessible to a wider community. Our approach uses ordinary construction materials combined with low-cost microcontrollers, digital temperature sensors and foil heater elements from mass-market 3D printers. Blackbodies are constructed from a foil heater of some chosen size and shape, attached to the back of a similarly shaped aluminum plate coated with commercial black paint, which normally exhibits high emissivity. The emissivity can be readily checked by using a thermal imager to view the reflection of a hot object. A digital temperature sensor is attached to the back of the plate. Thermal isolation of the backside minimizes temperature gradients through the plate, ensuring correct readings of the front temperature. The isolation also minimizes convection gradients and keeps power consumption low, which is useful for battery-powered operation in the field. We demonstrate surface blackbodies (200 × 200 mm²) with surface inhomogeneities as low as 0.1°C at 100°C. Homogeneous heating and low thermal mass provide fast settling times and quick setup/pack-down. The approach is scalable to larger sizes by tiling, enabling portable and foldable square-meter-size or larger devices.
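The control loop implied by the heater-plus-digital-temperature-sensor design can be sketched as a small PI regulator driving the heater duty cycle. The plant model, gains, and anti-windup rule below are illustrative assumptions, not the authors' firmware:

```python
def pi_heater(setpoint, read_temp, set_power, dt=1.0, kp=0.05, ki=0.01,
              steps=1000):
    """Minimal PI temperature loop: read the digital temperature sensor,
    drive the foil heater with a duty cycle in [0, 1]. Naive anti-windup:
    the integrator only accumulates while the output is unsaturated."""
    integ = 0.0
    for _ in range(steps):
        err = setpoint - read_temp()
        u = kp * err + ki * integ
        duty = max(0.0, min(1.0, u))   # clamp to physical duty-cycle range
        if duty == u:                  # anti-windup: integrate only if unclamped
            integ += err * dt
        set_power(duty)
    return read_temp()
```

Against a simple first-order thermal plant (heating proportional to duty, Newtonian cooling toward ambient), this loop settles at the setpoint with the integrator supplying the steady-state duty.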
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built, and tested for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from the Space Shuttle or Space Station. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The first sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track-and-hold circuitry for data reduction, an analog-to-digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and simply counting pixels until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. This system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated by knowing the position of each pixel of the photodiode array and counting pixels until the threshold is surpassed.
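The threshold-crossing deflection measurement used by the second and third systems reduces to scanning the linear photodiode array for pixels above threshold. A simplified sketch of that reading (taking the midpoint of the first and last above-threshold pixels as the image position):

```python
def edge_pixels(line, threshold):
    """Scan a linear photodiode-array readout; return the first and last
    pixel indices above the threshold and their midpoint as the estimated
    image position, or None if no pixel exceeds the threshold."""
    above = [i for i, v in enumerate(line) if v > threshold]
    if not above:
        return None
    first, last = above[0], above[-1]
    return first, last, (first + last) / 2.0
```

Comparing midpoints between two perpendicular arrays gives the 2-D deflection of the image.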
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
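The image-difference idea can be sketched as a generic NPS estimator (not the paper's exact algorithm): subtracting paired flat-field frames cancels any fixed pattern common to both frames, and halving the difference power restores single-frame noise power:

```python
import numpy as np

def nps_2d(flats, pixel_pitch=1.0):
    """Crude 2-D noise power spectrum from a stack of uniformly illuminated
    frames. Differencing frame pairs removes fixed-pattern structure shared
    by both frames; the factor 1/2 compensates for the doubled noise power
    of a difference image."""
    flats = np.asarray(flats, float)
    n, h, w = flats.shape
    if n < 2:
        raise ValueError("need at least two flat-field frames")
    acc, pairs = np.zeros((h, w)), 0
    for i in range(0, n - 1, 2):
        d = flats[i] - flats[i + 1]          # fixed pattern cancels here
        acc += np.abs(np.fft.fft2(d)) ** 2 / 2.0
        pairs += 1
    return acc * pixel_pitch ** 2 / (pairs * h * w)
```

For white noise of variance σ², the spectrum is flat at σ² even when a strong fixed pattern is present in every frame.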
Rahman, Nur Liyana Abdul; Asri, Amiza Aqiela Ahmad; Othman, Noor Ilyani; Wan Mokhtar, Ilham
2017-01-01
Purpose: This study was performed to quantify the repeat rate of imaging acquisitions based on different clinical examinations, and to assess the prevalence of error types in intraoral bitewing and periapical imaging using a digital complementary metal-oxide-semiconductor (CMOS) intraoral sensor. Materials and Methods: A total of 8,030 intraoral images were retrospectively collected from 3 groups of undergraduate clinical dental students. The type of examination, stage of the procedure, and reasons for repetition were analysed and recorded. The repeat rate was calculated as the total number of repeated images divided by the total number of examinations. The weighted Cohen's kappa for inter- and intra-observer agreement was used after calibration and prior to image analysis. Results: The overall repeat rate on intraoral periapical images was 34.4%. A total of 1,978 repeated periapical images were from endodontic assessment, which included working length estimation (WLE), trial gutta-percha (tGP), obturation, and removal of gutta-percha (rGP). In the endodontic imaging, the highest repeat rate was from WLE (51.9%) followed by tGP (48.5%), obturation (42.2%), and rGP (35.6%). In bitewing images, the repeat rate was 15.1% and poor angulation was identified as the most common cause of error. A substantial level of intra- and interobserver agreement was achieved. Conclusion: The repeat rates in this study were relatively high, especially for certain clinical procedures, warranting training in optimization techniques and radiation protection. Repeat analysis should be performed from time to time to enhance quality assurance and hence deliver high-quality health services to patients. PMID:29279822
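The repeat-rate definition used in the study is a simple ratio per procedure. A sketch with illustrative counts (the study reports percentages, not the underlying totals assumed here):

```python
def repeat_rates(counts):
    """Per-procedure repeat rate (%) from {procedure: (repeated, examinations)},
    following the study's definition: repeated images / total examinations."""
    for name, (repeated, exams) in counts.items():
        if exams <= 0:
            raise ValueError(f"{name}: examinations must be positive")
    return {name: 100.0 * r / e for name, (r, e) in counts.items()}
```

The procedure with the highest rate can then be read off directly, as done for WLE in the abstract.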
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control that stabilizes the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase the after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime the user has access to sophisticated commercialized image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolution and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to come out with a workflow to quickly obtain the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
The digital compensation technology system for automotive pressure sensor
NASA Astrophysics Data System (ADS)
Guo, Bin; Li, Quanling; Lu, Yi; Luo, Zai
2011-05-01
Piezoresistive pressure sensors, fabricated from semiconductor silicon and based on the piezoresistive effect, have many desirable characteristics. However, because of the temperature sensitivity of semiconductors, the performance of a silicon sensor also drifts with temperature, and a pressure sensor entirely free of temperature drift cannot be produced at present. This paper briefly describes the operating principles of these sensors, the function of the pressure sensor and the various types of compensation method, and presents a detailed digital compensation scheme for an automotive pressure sensor. Mixed-signal (analog-digital) conditioning is used, built around the MAX1452 signal-conditioning chip, an AVR ATmega128 microcontroller and supporting components. The design covers the digital pressure sensor hardware circuit, the microcontroller hardware circuit and the microcontroller software: the sensor hardware circuit performs the correction and compensation of the sensor output, the microcontroller hardware controls the correction and compensation process, and the microcontroller software implements the compensation arithmetic. Finally, the compensated sensor output was measured and compared against uncompensated data; the results indicate that the compensated output is markedly more accurate than the uncompensated output, improving both the precision and the stability of the pressure sensor.
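Digital compensation of this kind is often implemented by fitting a low-order polynomial in raw output and temperature during calibration, then evaluating it at run time. The basis functions and synthetic sensor model below are illustrative assumptions, not the MAX1452's internal algorithm:

```python
import numpy as np

def fit_compensation(raw, temp, true_pressure):
    """Least-squares fit of corrected pressure as a polynomial in the raw
    sensor output and temperature (with cross terms), using calibration data."""
    raw, temp = np.asarray(raw, float), np.asarray(temp, float)
    A = np.stack([np.ones_like(raw), raw, temp,
                  raw * temp, temp ** 2, raw * temp ** 2], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(true_pressure, float), rcond=None)
    return coef

def apply_compensation(coef, raw, temp):
    """Evaluate the fitted compensation polynomial for one reading."""
    basis = [1.0, raw, temp, raw * temp, temp ** 2, raw * temp ** 2]
    return float(np.dot(coef, basis))
```

Calibration sweeps pressure and temperature over the operating range; the fitted coefficients are then stored and applied to every subsequent reading.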
Maestre-Rendon, J. Rodolfo; Sierra-Hernandez, Juan M.; Contreras-Medina, Luis M.; Fernandez-Jaramillo, Arturo A.
2017-01-01
Manual measurements of foot anthropometry can lead to errors, since this task involves the experience of the specialist who performs them, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses that are given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low computational-cost analysis of the image and the interpretation of the results through a quantitative evaluation. The smart sensor implemented required the use of a camera (Logitech C920) connected to a Raspberry Pi 3, where a graphical interface was made for the capture and processing of the image, and it was adapted to a podoscope conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive, and correlated at 0.99 with measurements from the digitized image of the ink mat. PMID:29165397
Low cost, multiscale and multi-sensor application for flooded area mapping
NASA Astrophysics Data System (ADS)
Giordan, Daniele; Notti, Davide; Villa, Alfredo; Zucca, Francesco; Calò, Fabiana; Pepe, Antonio; Dutto, Furio; Pari, Paolo; Baldo, Marco; Allasia, Paolo
2018-05-01
Flood mapping and estimation of the maximum water depth are essential elements for the first damage evaluation, civil protection intervention planning and detection of areas where remediation is needed. In this work, we present and discuss a methodology for mapping and quantifying flood severity over floodplains. The proposed methodology considers a multiscale and multi-sensor approach using free or low-cost data and sensors. We applied this method to the November 2016 Piedmont (northwestern Italy) flood. We first mapped the flooded areas at the basin scale using free satellite data of low to medium-high resolution from both SAR (Sentinel-1, COSMO-SkyMed) and multispectral sensors (MODIS, Sentinel-2). Using very- and ultra-high-resolution images from a low-cost aerial platform and a remotely piloted aerial system, we refined the flooded zone and detected the most damaged sector. The presented method considers both urbanised and non-urbanised areas. Nadiral images have several limitations, in particular in urbanised areas, which we overcame by using terrestrial images. Very- and ultra-high-resolution images were processed with structure from motion (SfM) to build 3-D models. These data, combined with an available digital terrain model, allowed us to obtain maps of the flooded area, the maximum high-water extent and damaged infrastructure.
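A common first step for satellite-scale water mapping of this kind is a normalized difference water index on the multispectral bands. The sketch below uses McFeeters' NDWI = (Green − NIR)/(Green + NIR) with a zero threshold as a generic illustration, not the authors' exact workflow:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """NDWI from green and near-infrared reflectance bands; pixels above the
    threshold are flagged as open water. Denominator is clamped to avoid
    division by zero over dark/no-data pixels."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)
    return ndwi, ndwi > threshold
```

Water reflects in the green and absorbs in the NIR, so NDWI is positive over water and negative over vegetation and bare soil; the resulting mask can then be refined with the higher-resolution aerial data.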
ERIC Educational Resources Information Center
Vogt, Frank
2011-01-01
Most measurement techniques have some limitations imposed by a sensor's signal-to-noise ratio (SNR). Thus, in analytical chemistry, methods for enhancing the SNR are of crucial importance and can be ensured experimentally or established via pre-treatment of digitized data. In many analytical curricula, instrumental techniques are given preference…
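A standard pre-treatment for SNR enhancement is signal averaging, where uncorrelated noise falls as the square root of the number of repeated acquisitions. A minimal sketch:

```python
import numpy as np

def average_acquisitions(acquire, n):
    """Average n repeated noisy acquisitions of the same signal.
    For uncorrelated noise, the residual noise std falls as 1/sqrt(n)."""
    return np.mean([acquire() for _ in range(n)], axis=0)
```

Averaging 100 acquisitions should therefore cut the noise standard deviation by roughly a factor of ten.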
Macedo-Cruz, Antonia; Pajares, Gonzalo; Santos, Matilde; Villegas-Romero, Isidro
2011-01-01
The aim of this paper is to classify land covered with oat crops and to quantify frost damage on oats while the plants are still in the flowering stage. The images are taken by a digital colour camera with a CCD-based sensor. Unsupervised classification methods are applied because the plants present different spectral signatures depending on two main factors: illumination and the affected state. The colour space used in this application is CIELab, based on the decomposition of colour into three channels, because it is the closest to human colour perception. The histogram of each channel is successively split into regions by thresholding. The best threshold to apply is automatically obtained as a combination of three thresholding strategies: (a) Otsu's method, (b) the Isodata algorithm, and (c) fuzzy thresholding. The fusion of these automatic thresholding techniques and the design of the classification strategy are among the main findings of the paper, allowing an estimation of the damage and a prediction of oat production. PMID:22163940
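The Otsu step of the threshold-fusion strategy above can be sketched in Python. The Otsu routine follows the standard between-class-variance definition; the Isodata and fuzzy stand-ins and the median fusion rule are illustrative assumptions, since the abstract does not give the exact combination rule used by the authors.

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of a 256-bin intensity histogram."""
    hist = hist.astype(float)
    total = hist.sum()
    bins = np.arange(len(hist))
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = hist[:t].sum() / total          # background weight
        w1 = 1.0 - w0                        # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (bins[:t] * hist[:t]).sum() / hist[:t].sum()
        mu1 = (bins[t:] * hist[t:]).sum() / hist[t:].sum()
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def fuse_thresholds(channel):
    """Combine several automatic thresholds for one colour channel.
    The mean-based and midpoint candidates are crude stand-ins for
    Isodata and fuzzy thresholding, and the median is an assumed
    fusion rule (the paper's exact rule is not stated)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    t_otsu = otsu_threshold(hist)
    t_mean = int(channel.mean())                       # Isodata stand-in
    t_mid = int((channel.min() + channel.max()) / 2)   # fuzzy stand-in
    return int(np.median([t_otsu, t_mean, t_mid]))
```

On a strongly bimodal channel (healthy vs. frost-affected tissue), any of the candidate thresholds lands between the two modes, and so does their median.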
Colorimetric Recognition of Aldehydes and Ketones.
Li, Zheng; Fang, Ming; LaGasse, Maria K; Askim, Jon R; Suslick, Kenneth S
2017-08-07
A colorimetric sensor array has been designed for the identification of and discrimination among aldehydes and ketones in vapor phase. Due to rapid chemical reactions between the solid-state sensor elements and gaseous analytes, distinct color difference patterns were produced and digitally imaged for chemometric analysis. The sensor array was developed from classical spot tests using aniline and phenylhydrazine dyes that enable molecular recognition of a wide variety of aliphatic or aromatic aldehydes and ketones, as demonstrated by hierarchical cluster, principal component, and support vector machine analyses. The aldehyde/ketone-specific sensors were further employed for differentiation among and identification of ten liquor samples (whiskies, brandy, vodka) and ethanol controls, showing its potential applications in the beverage industry. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Origami silicon optoelectronics for hemispherical electronic eye systems.
Zhang, Kan; Jung, Yei Hwan; Mikael, Solomon; Seo, Jung-Hun; Kim, Munho; Mi, Hongyi; Zhou, Han; Xia, Zhenyang; Zhou, Weidong; Gong, Shaoqin; Ma, Zhenqiang
2017-11-24
Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as a wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high spatial resolution into such a format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of a truncated icosahedron, fabricated on flexible sheets, and further folded into either a concave or a convex hemisphere. These two electronic-eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. The results demonstrated in this work, combined with the miniature size and simplicity of the design, establish a practical technology for integration with conventional electronic devices.
SENSOR: a tool for the simulation of hyperspectral remote sensing systems
NASA Astrophysics Data System (ADS)
Börner, Anko; Wiest, Lorenz; Keller, Peter; Reulke, Ralf; Richter, Rolf; Schaepman, Michael; Schläpfer, Daniel
The consistent end-to-end simulation of airborne and spaceborne earth remote sensing systems is an important task, and sometimes the only way, for the adaptation and optimisation of a sensor and its observation conditions, the choice and testing of algorithms for data processing, error estimation, and the evaluation of the capabilities of the whole sensor system. The presented software simulator SENSOR (Software Environment for the Simulation of Optical Remote sensing systems) includes a full model of the sensor hardware, the observed scene, and the atmosphere in between. The simulator consists of three parts. The first part describes the geometrical relations between scene, sun, and the remote sensing system using a ray-tracing algorithm. The second part of the simulation environment considers the radiometry: it calculates the at-sensor radiance using a pre-calculated multidimensional lookup table that takes the atmospheric influence on the radiation into account. The third part consists of an optical and an electronic sensor model for the generation of digital images. Using SENSOR for an optimisation requires the additional application of task-specific data processing algorithms. The principle of the end-to-end simulation approach is explained, all relevant concepts of SENSOR are discussed, and first examples of its use are given. The verification of SENSOR is demonstrated. This work is closely related to the Airborne PRISM Experiment (APEX), an airborne imaging spectrometer funded by the European Space Agency.
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
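The mutual-validation idea in the disclosure, where only segmentation produced by both the edge-detection process and the intensity-threshold process survives, can be sketched as follows. The gradient operator and the dark-defect threshold are illustrative assumptions, not the patented circuitry.

```python
import numpy as np

def validated_segmentation(image, intensity_cut):
    """Run edge detection and intensity thresholding on the same
    frame, then keep only pixels flagged by BOTH processes, so each
    result validates the other."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    edges = np.hypot(gx, gy) > 0        # edge-detection segmentation
    dark = img < intensity_cut          # intensity-threshold segmentation
    return edges & dark                 # only mutually confirmed pixels survive
```

On a bright slab with one dark imperfection, the intersection flags the boundary of the defect (where both an edge and a dark pixel coincide) while suppressing the uniform defect interior and the clean surface.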
Roughness based perceptual analysis towards digital skin imaging system with haptic feedback.
Kim, K
2016-08-01
To examine psoriasis or atopic eczema, analyzing skin roughness by palpation is essential to precisely diagnose skin diseases. However, optical sensor based skin imaging systems do not allow dermatologists to touch skin images. To solve this problem, a new haptic rendering technology that can accurately display skin roughness must be developed. In addition, the rendering algorithm must be able to filter spatial noise created during 2D-to-3D image conversion without losing the original roughness of the skin image. In this study, a perceptual approach to designing a noise filter that removes spatial noise while recovering maximal roughness is introduced, based on measurements of human sensitivity to surface roughness. A visuohaptic rendering system that lets a user both see and touch digital skin-surface roughness has been developed, including a method for estimating geometric roughness from a meshed surface. Subsequently, a psychophysical experiment was designed and conducted with 12 human subjects to measure perception of surface roughness through the developed visual and haptic interfaces. The experiment found that touch is more sensitive at lower surface roughness, and vice versa. Human perception with both senses, vision and touch, becomes less sensitive to surface distortions as roughness increases. When interacting through both channels, the visual and haptic interfaces, the ability to detect roughness abnormalities is greatly improved by sensory integration with the developed visuohaptic rendering system. The result can be used as a guideline for designing a noise filter that perceptually removes spatial noise while recovering maximal roughness values from a digital skin image obtained by optical sensors. In addition, the result confirms that the developed visuohaptic rendering system can help dermatologists or skin care professionals examine skin conditions by using vision and touch at the same time.
© 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
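As a minimal illustration of estimating roughness from height samples (the paper's mesh-based estimator is more involved), the standard arithmetic-mean roughness Ra can be computed as the mean absolute deviation of heights from their mean plane:

```python
import numpy as np

def roughness_ra(heights):
    """Arithmetic-mean roughness Ra: mean absolute deviation of
    surface heights from their mean plane."""
    z = np.asarray(heights, dtype=float)
    return float(np.abs(z - z.mean()).mean())
```

A perfectly flat surface yields Ra = 0; noise filtering that lowers Ra below the true value is exactly the loss the paper's perceptual filter design tries to avoid.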
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple yet effective tool for the identification and quantification of various analytes. Colorimetric arrays utilize colorimetric data from many colorimetric sensors, and the multidimensional nature of the resulting data necessitates chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze select acidic and basic samples (0.5-10 M) to determine which chemometric methods are best suited for classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA correctly identified 95 of 96 analytes. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
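Two of the chemometric steps named above, PCA projection for cluster visualisation and k-nearest-neighbour identification, can be sketched in plain numpy. This is a generic sketch, not the authors' pipeline; the 8-sensor response vectors and labels are placeholders.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred sensor-array responses onto their top
    principal components (computed via SVD) for visualisation."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_identify(train_X, train_y, query, k=3):
    """Identify an unknown analyte by majority vote among its k
    nearest neighbours in colour-difference space."""
    d = np.linalg.norm(np.asarray(train_X, float) - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

With two well-separated response clusters, as the abstract reports for the acid and base analytes, a query vector is assigned to the cluster it falls nearest to.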
Autonomous chemical and biological miniature wireless-sensor
NASA Astrophysics Data System (ADS)
Goldberg, Bar-Giora
2005-05-01
The presentation discusses a new concept and a paradigm shift in biological, chemical and explosive sensor system design and deployment: from large, heavy, centralized and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost and short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas enables these devices to be close to the targeted threats and therefore improve detection efficiencies and enable rapid counter responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, automotive, environmental and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical and/or explosive sensor utilizing a digital imager, analyze the images and distribute alarm or image data wirelessly through the network.
All the imaging, processing and communications would take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to the combination and convergence of the required new technologies mentioned above. Passive biological and chemical sensors with very high sensitivity, which require no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nano-structures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOC). A small video camera detects the color and pattern changes on the sensor. To determine if an alarm condition is present, an on-board DSP processor, using specialized image-processing algorithms and statistical analysis, determines whether color gradient changes have occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm⁻¹) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
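The three figures of merit above are linked by the standard definition DQE(f) = MTF(f)² / (q · NNPS(f)), where q is the incident photon fluence. A small numerical sketch follows; the sinc-shaped aperture MTF for a 50 µm pixel pitch is a textbook model, not the measured DynAMITe response.

```python
import numpy as np

def dqe(mtf, nnps, fluence):
    """DQE(f) = MTF(f)^2 / (q * NNPS(f)): the standard relation
    between modulation transfer, normalized noise power spectrum,
    and the incident photon fluence q."""
    return np.asarray(mtf) ** 2 / (fluence * np.asarray(nnps))

# Textbook aperture MTF for a 50 µm pixel pitch (frequencies in mm^-1).
pitch_mm = 0.050
f = np.linspace(0.0, 10.0, 101)
mtf_aperture = np.abs(np.sinc(pitch_mm * f))   # np.sinc(x) = sin(pi*x)/(pi*x)
```

For an ideal photon-counting detector (MTF = 1 and NNPS = 1/q at all frequencies), the expression gives DQE = 1 everywhere, which is the expected upper bound.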
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.
2006-01-01
Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
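The registration and fusion stages can be sketched as below. The affine parameters would come from camera calibration, and the equal default weights are an assumption; the flight system's actual weights are not given in the text.

```python
import numpy as np

def affine_register(points, A, t):
    """Map one camera's pixel coordinates into the other camera's
    frame with an affine transform x' = A x + t."""
    return np.asarray(points, float) @ np.asarray(A, float).T + np.asarray(t, float)

def weighted_fusion(images, weights=None):
    """Weighted-sum fusion of co-registered sensor images; weights
    are normalised so the output stays in the input intensity range."""
    if weights is None:
        weights = np.ones(len(images))          # equal weights by default
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * np.asarray(img, float) for wi, img in zip(w, images))
```

Fusing two co-registered frames with equal weights simply averages them; unequal weights let one sensor dominate when, for example, its band penetrates the prevailing visibility conditions better.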
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Manipulating Digital Holograms to Modify Phase of Reconstructed Wavefronts
NASA Astrophysics Data System (ADS)
Ferraro, Pietro; Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea
2010-04-01
We show that through an adaptive deformation of digital holograms it is possible to manage the depth of focus in the numerical reconstruction. The deformation is applied to the original hologram with the aim of bringing simultaneously into focus, in one reconstructed image plane, different objects lying at different distances from the hologram plane (i.e. the CCD sensor) but in the same field of view. In the same way, it is possible to extend the depth of field for a tilted 3D object so that the whole object is in focus.
Urban area change detection procedures with remote sensing data
NASA Technical Reports Server (NTRS)
Maxwell, E. L. (Principal Investigator); Riordan, C. J.
1980-01-01
The underlying factors affecting the detection and identification of nonurban to urban land cover change using satellite data were studied. Computer programs were developed to create a digital scene and to simulate the effect of the sensor point spread function (PSF) on the transfer of modulation from the scene to an image of the scene. The theory behind the development of a digital filter representing the PSF is given as well as an example of its application. Atmospheric effects on modulation transfer are also discussed. A user's guide and program listings are given.
Pc-based car license plate reading
NASA Astrophysics Data System (ADS)
Tanabe, Katsuyoshi; Marubayashi, Eisaku; Kawashima, Harumi; Nakanishi, Tadashi; Shio, Akio
1994-03-01
A PC-based car license plate recognition system has been developed. The system recognizes Chinese characters and Japanese phonetic hiragana characters as well as six digits on Japanese license plates. The system consists of a CCD camera, vehicle sensors, a strobe unit, a monitoring center, and an i486-based PC. The PC includes in its extension slots: a vehicle detector board, a strobe emitter board, and an image grabber board. When a passing vehicle is detected by the vehicle sensors, the strobe emits a pulse of light. The light pulse is synchronized with the time the vehicle image is frozen on the image grabber board. The recognition process is composed of three steps: image thresholding, character region extraction, and matching-based character recognition. The recognition software can handle obscured characters. Experimental results for hundreds of outdoor images showed high recognition rates within relatively short processing times. The results confirmed that the system is applicable to a wide variety of applications such as automatic vehicle identification and travel time measurement.
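The third recognition step, matching-based character recognition, can be illustrated with normalized correlation against stored glyph templates. The toy 3x3 glyphs here are assumptions for illustration, not the system's actual character templates.

```python
import numpy as np

def match_character(glyph, templates):
    """Pick the template label with the highest normalized
    correlation against a segmented, binarized character image."""
    g = glyph.astype(float).ravel()
    g = (g - g.mean()) / (g.std() + 1e-9)      # zero-mean, unit-variance
    best_label, best_score = None, -np.inf
    for label, tmpl in templates.items():
        t = tmpl.astype(float).ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        score = float(g @ t) / g.size          # 1.0 for a perfect match
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Because the correlation is computed on normalized glyphs, the match is insensitive to uniform brightness and contrast changes, which helps with the obscured characters the abstract mentions.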
Imaging the atmosphere using volcanic infrasound recorded on a dense local sensor network
NASA Astrophysics Data System (ADS)
Marcillo, O. E.; Johnson, J. B.; Johnson, R.
2010-12-01
We deployed a 47-node infrasound sensor network around Kilauea's Halemaumau Vent to image the atmospheric conditions near the surface. This active vent is a persistent radiator of energetic infrasound, enabling us to probe atmospheric winds and temperatures. This research builds upon a previous experiment that recorded infrasound on a three-node network to determine relative phase delays and invert for atmospheric wind. The technique developed for that analysis assumed the intrinsic sound speed, was able to track the evolution of the average wind field over a large area (around 10 km²), and was largely insensitive to local meteorological effects caused by topography and vegetation. The results of that experiment showed the potential of this technique for atmospheric studies and called for a follow-up experiment with a denser sensor network over a larger area. During summer 2010, we returned to Kilauea and deployed a 47-sensor network in three different configurations around the Kilauea summit and down the volcano's flanks. Persistent infrasonic tremor was 'loud', with excess pressures up to 10 Pa (when scaled to 1 km) and periods of high acoustic emission that lasted from hours to days. The instrumentation for this experiment was composed of single-channel RefTek RT125A Texan digitizers and InfraNMT infrasound sensors. The Texan digitizers provide high-resolution 24-bit analog-to-digital conversion and can operate continuously for approximately five days on two D-cell batteries. The InfraNMT sensor is based on a piezo-electric transducer and was developed at the Infrasound Laboratory at New Mexico Tech. This sensor features low power (< 3 mA at 9 V) and a flat response between 0.02 and 50 Hz. Three different network topologies were tested during this two-week experiment. For the first and second topologies, the sensors were deployed along established roads on two almost perpendicular sensor lines centered at the Halema'uma'u crater.
The furthest sensors were located at ~24 km and ~10 km from the vent respectively. Numerical analysis indicates that these two configurations will be able to probe the atmospheric conditions up to 2 km above the ground. The third topology featured most of the sensors on the summit crater at similar radial distances (2-4 km) and different azimuths. The data collected with the third topology is expected to provide detailed information of the very-local infrasonic field. Each configuration was on the ground and operational for around 84 hours. This full dataset will provide an opportunity to investigate source phenomenology and/or propagation effects of the infrasonic field. Tomographic studies of the atmosphere are expected to provide meteorological data that will be of value for ash and gas propagation models.
Retinal fundus imaging with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos
2018-02-01
Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, where an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image where everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
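Digital refocusing from a light-field capture can be sketched with a shift-and-add sum over sub-aperture views. The integer-pixel shifts and the hand-picked offsets are simplifying assumptions; real plenoptic pipelines extract the views from the micro-image array and interpolate fractional shifts.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocusing: translate each sub-aperture view in
    proportion to its baseline offset (scaled by alpha, which selects
    the rendered focal plane) and average. alpha = 0 reproduces the
    sensor's native focal plane."""
    acc = np.zeros_like(np.asarray(views[0], dtype=float))
    for view, (dy, dx) in zip(views, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(np.asarray(view, float), shift, axis=(0, 1))
    return acc / len(views)
```

A point source imaged by two views with a one-pixel baseline stays sharp at alpha = 0 and splits into two half-intensity copies at alpha = 1, which is exactly the defocus blur a different focal plane would show.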
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, J.; Cejnarova, A.; Platkevic, M.
Single quantum counting pixel detectors of Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by late trigger signal. Demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.
Field-portable lensfree tomographic microscope.
Isikman, Serhan O; Bishara, Waheb; Sikora, Uzair; Yaglidere, Oguzhan; Yeah, John; Ozcan, Aydogan
2011-07-07
We present a field-portable lensfree tomographic microscope, which can achieve sectional imaging of a large volume (∼20 mm³) on a chip with an axial resolution of <7 μm. In this compact tomographic imaging platform (weighing only ∼110 grams), 24 light-emitting diodes (LEDs) that are each butt-coupled to a fibre-optic waveguide are controlled through a cost-effective micro-processor to sequentially illuminate the sample from different angles to record lensfree holograms of the sample that is placed on the top of a digital sensor array. In order to generate pixel super-resolved (SR) lensfree holograms and hence digitally improve the achievable lateral resolution, multiple sub-pixel shifted holograms are recorded at each illumination angle by electromagnetically actuating the fibre-optic waveguides using compact coils and magnets. These SR projection holograms obtained over an angular range of ±50° are rapidly reconstructed to yield projection images of the sample, which can then be back-projected to compute tomograms of the objects on the sensor-chip. The performance of this compact and light-weight lensfree tomographic microscope is validated by imaging micro-beads of different dimensions as well as a Hymenolepis nana egg, which is an infectious parasitic flatworm. Achieving a decent three-dimensional spatial resolution, this field-portable on-chip optical tomographic microscope might provide a useful toolset for telemedicine and high-throughput imaging applications in resource-poor settings. This journal is © The Royal Society of Chemistry 2011
Cameras for digital microscopy.
Spring, Kenneth R
2013-01-01
This chapter reviews the fundamental characteristics of charge-coupled devices (CCDs) and related detectors, outlines the relevant parameters for their use in microscopy, and considers promising recent developments in the technology of detectors. Electronic imaging with a CCD involves three stages: interaction of a photon with the photosensitive surface, storage of the liberated charge, and readout or measurement of the stored charge. The most demanding applications in fluorescence microscopy may require as much as four orders of magnitude greater sensitivity. The image in the present-day light microscope is usually acquired with a CCD camera. The CCD is composed of a large matrix of photosensitive elements (often referred to as "pixels," shorthand for picture elements) which simultaneously capture an image over the entire detector surface. The light-intensity information for each pixel is stored as electronic charge and is converted to an analog voltage by a readout amplifier. This analog voltage is subsequently converted to a numerical value by a digitizer situated on the CCD chip, or very close to it. In complementary metal oxide semiconductor (CMOS) sensors, several (three to six) amplifiers are required for each pixel, and to date, uniform images with a homogeneous background have been a problem because of the inherent difficulties of balancing the gain in all of the amplifiers. CMOS sensors also exhibit relatively high noise associated with the requisite high-speed switching. Both of these deficiencies are being addressed, and sensor performance is nearing that required for scientific imaging. Copyright © 1998 Elsevier Inc. All rights reserved.
Applications of TIMS data in agricultural areas and related atmospheric considerations
NASA Technical Reports Server (NTRS)
Pelletier, R. E.; Ochoa, M. C.
1986-01-01
While much of traditional remote sensing in agricultural research has been limited to the visible and reflective infrared, advances in thermal infrared remote sensing technology are adding a new dimension to digital image analysis of agricultural areas. The Thermal Infrared Multispectral Scanner (TIMS), an airborne sensor having six bands over the nominal 8.2 to 12.2 μm range, offers the ability to calculate land surface emissivities, unlike most previous single broadband sensors. Preliminary findings on the utility of the TIMS for several agricultural applications and related atmospheric considerations are discussed.
Toward Optical Sensors: Review and Applications
NASA Astrophysics Data System (ADS)
Sabri, Naseer; Aljunid, S. A.; Salim, M. S.; Ahmad, R. B.; Kamaruddin, R.
2013-04-01
Recent advances in fiber optics (FOs) and the numerous advantages of light over electronic systems have boosted the utility of, and demand for, optical sensors in various military, industrial and social fields. Environmental and atmospheric monitoring, earth and space sciences, industrial chemical processing and biotechnology, law enforcement, digital imaging, scanning, and printing are examples. The growing ubiquity of photonic technologies has driven down prices, reducing the cost of optical fibers and lasers. Fiber optic sensors (FOSs) offer a wide spectrum of advantages over traditional sensing systems, such as small size and longer lifetime. Immunity to electromagnetic interference, amenability to multiplexing, and high sensitivity make FOs the sensor technology of choice in several fields, including the healthcare and aerospace sectors. FOSs perform reliable and robust sensing tasks compared with conventional electrical and electronic sensors. This paper presents an executive review of optical fiber sensors and their most beneficial applications.
Undersampled digital holographic interferometry
NASA Astrophysics Data System (ADS)
Halaq, H.; Demoli, N.; Sović, I.; Šariri, K.; Torzynski, M.; Vukičević, D.
2008-04-01
In digital holography, primary holographic fringes are recorded using a matrix CCD sensor. Because of the low spatial resolution of currently available CCD arrays, the angle between the reference and object beams must be limited to a few degrees. Namely, due to the digitization involved, the Shannon criterion imposes that the sampling frequency be at least twice the highest signal frequency. This means that, when recording an interference fringe pattern with a CCD sensor, the inter-fringe distance must be larger than twice the pixel period. This in turn limits the angle between the object and the reference beams. If this angle cannot be limited to the required value in a practical holographic interferometry measuring setup, aliasing will occur in the reconstructed image. In this work, we demonstrate that the low spatial frequency metrology data can nevertheless be efficiently extracted by careful choice of twofold, and even threefold, undersampling of the object field. By combining time-averaged recording with the subtraction digital holography method, we present results for a loudspeaker membrane interferometric study obtained under strong aliasing conditions. High-contrast fringes, corresponding to the vibration modes of the membrane, are obtained.
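The sampling constraint described in this abstract (inter-fringe distance larger than twice the pixel period) directly fixes the maximum usable object-reference angle. A minimal numerical sketch, with an assumed wavelength and pixel pitch not taken from the paper:

```python
import math

def max_beam_angle_deg(wavelength_m: float, pixel_pitch_m: float) -> float:
    """Largest object-reference beam angle (degrees) for which the
    holographic fringe spacing stays above twice the pixel pitch,
    per the Nyquist condition described in the abstract."""
    # fringe spacing d = lambda / (2 sin(theta/2)); require d >= 2 * pitch
    return 2.0 * math.degrees(math.asin(wavelength_m / (4.0 * pixel_pitch_m)))

# Assumed values: 632.8 nm HeNe laser, 6.45 um camera pixel pitch
theta = max_beam_angle_deg(632.8e-9, 6.45e-6)
print(f"max angle ~ {theta:.2f} deg")  # a few degrees, as the text states
```

For typical megapixel CCDs this works out to roughly 2-3 degrees, which is why undersampled recording is attractive when the geometry cannot be constrained that tightly.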
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel. The empty color values are therefore filled in by an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming a known color filter array pattern. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between original and interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
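The core statistical idea, that measured samples carry more high-frequency energy than interpolated ones, can be sketched in a toy form. This is a simplified illustration, not the authors' algorithm: the Bayer offset convention and the synthetic data are assumptions, and a single neighbour-difference high-pass stands in for their local-mean removal and color-difference construction.

```python
import numpy as np

# (row, col) offset of the red sample within each 2x2 Bayer cell
# (an assumed convention; the paper's notation may differ)
BAYER = {"RGGB": (0, 0), "GRBG": (0, 1), "GBRG": (1, 0), "BGGR": (1, 1)}

def highpass(x):
    """Difference from the 4-neighbour mean (wrap-around borders)."""
    x = x.astype(float)
    nbr = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
           np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    return x - nbr

def identify_pattern(rgb):
    """Pick the Bayer candidate whose assumed measured lattice carries the
    most high-frequency energy: interpolated pixels are smoother than
    measured ones, so the correct lattice maximises the variance."""
    h, w, _ = rgb.shape
    hp = [highpass(rgb[..., c]) for c in range(3)]
    best, best_score = None, -1.0
    for name, (r0, c0) in BAYER.items():
        masks = [np.zeros((h, w), bool) for _ in range(3)]
        masks[0][r0::2, c0::2] = True              # red sample sites
        masks[1][r0::2, 1 - c0::2] = True          # green sample sites
        masks[1][1 - r0::2, c0::2] = True
        masks[2][1 - r0::2, 1 - c0::2] = True      # blue sample sites
        score = sum(hp[c][masks[c]].var() for c in range(3))
        if score > best_score:
            best, best_score = name, score
    return best

# Synthetic check: noise at RGGB sample sites, neighbour averages elsewhere
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
measured = [np.zeros((64, 64), bool) for _ in range(3)]
measured[0][0::2, 0::2] = True                     # R of an RGGB mosaic
measured[1][0::2, 1::2] = True
measured[1][1::2, 0::2] = True
measured[2][1::2, 1::2] = True
for c in range(3):
    nbr = (np.roll(img[..., c], 1, 0) + np.roll(img[..., c], -1, 0) +
           np.roll(img[..., c], 1, 1) + np.roll(img[..., c], -1, 1)) / 4.0
    img[~measured[c], c] = nbr[~measured[c]]       # fake bilinear demosaic
detected = identify_pattern(img)
```

On this synthetic mosaic the RGGB lattice scores highest on all three channels, while each wrong candidate lands mostly on smoothed, interpolated sites.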
Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors
NASA Astrophysics Data System (ADS)
de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.
2008-02-01
This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures scalability, so it can be used for sensors with different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that allows high-level image preprocessing routines, such as faulty-pixel detection and sensor calibration, to be included in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable prototyping tool when choosing the best A/D converter type for the application. Development time was significantly reduced by the use of this technology.
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Various papers on control paradigms and data structures in sensor fusion are presented. The general topics addressed include: decision models and computational methods, sensor modeling and data representation, active sensing strategies, geometric planning and visualization, task-driven sensing, motion analysis, models motivated by biology and psychology, decentralized detection and distributed decision, data fusion architectures, robust estimation of shapes and features, and application and implementation. Some of the individual subjects considered are: the Firefly experiment on neural networks for distributed sensor data fusion, manifold traversing as a model for learning control of autonomous robots, choice of coordinate systems for multiple sensor fusion, continuous motion using task-directed stereo vision, interactive and cooperative sensing and control for advanced teleoperation, knowledge-based imaging for terrain analysis, and physical and digital simulations for IVA robotics.
Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation
NASA Astrophysics Data System (ADS)
Nakata, Robert
Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-image, infrared or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high resolution motion measurements, even when obscured by weather, clouds and smoke, and can penetrate walls and collapsed structures constructed of non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that does not require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average 5 dB improvement in Signal to Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
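The phase-based compensation idea behind such DSP algorithms can be illustrated briefly: a round-trip path change of d metres shifts a CW radar return's phase by 4πd/λ, so multiplying by the conjugate phasor of the measured platform displacement restores the target signal. This is a sketch of the general principle with made-up numbers, not the dissertation's implementation.

```python
import numpy as np

def motion_compensate(sig, platform_disp_m, wavelength_m):
    """Remove platform-motion phase from a CW Doppler radar return by
    multiplying with the conjugate of the motion-induced phasor."""
    phase = 4.0 * np.pi * platform_disp_m / wavelength_m
    return sig * np.exp(-1j * phase)

# Demo with assumed numbers: 0.3 Hz respiratory target, jittering platform
fs, lam = 100.0, 0.0125                           # 100 Hz sampling, ~24 GHz radar
t = np.arange(0, 10, 1 / fs)
chest = 0.002 * np.sin(2 * np.pi * 0.3 * t)       # 2 mm chest motion
platform = 0.004 * np.sin(2 * np.pi * 2.0 * t)    # 4 mm platform wobble
rx = np.exp(1j * 4 * np.pi * (chest + platform) / lam)
clean = motion_compensate(rx, platform, lam)
# After compensation only the chest-motion phase remains
residual = np.angle(clean) - 4 * np.pi * chest / lam
```

With a perfect motion measurement the residual phase is zero; in practice the achievable SIR gain is bounded by the accuracy of the secondary motion sensors.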
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and laser scanner. Taking test data collected by Applanix's LandMark system as an example, a camera calibration method based on tying together three consecutive stereo images and an OTF-calibration method using ground control points are introduced. For laser boresight-angle calibration, a combined manual and automatic method with ground control points is proposed. Integrated calibration between the digital camera and laser scanner is introduced to improve the systemic precision of the two sensors. By analyzing the measured differences between ground control points and their corresponding image points in sequence images, it is concluded that object positions derived from the camera and images agree to within about 15 cm in relative error and 20 cm in absolute error. By comparing ground control points with their corresponding laser point clouds, the error is less than 20 cm. From the results of these experiments, the mobile mapping system is an efficient and reliable system for generating high-accuracy, high-density road spatial data rapidly.
Method for auto-alignment of digital optical phase conjugation systems based on digital propagation
Jang, Mooseok; Ruan, Haowen; Zhou, Haojiang; Judkewitz, Benjamin; Yang, Changhuei
2014-01-01
Optical phase conjugation (OPC) has enabled many optical applications such as aberration correction and image transmission through fiber. In recent years, implementation of digital optical phase conjugation (DOPC) has opened up the possibility of its use in biomedical optics (e.g. deep-tissue optical focusing) due to its ability to provide greater-than-unity OPC reflectivity (the power ratio of the phase conjugated beam and input beam to the OPC system) and its flexibility to accommodate additional wavefront manipulations. However, the requirement for precise (pixel-to-pixel matching) alignment of the wavefront sensor and the spatial light modulator (SLM) limits the practical usability of DOPC systems. Here, we report a method for auto-alignment of a DOPC system by which the misalignment between the sensor and the SLM is auto-corrected through digital light propagation. With this method, we were able to accomplish OPC playback with a DOPC system with gross sensor-SLM misalignment by an axial displacement of up to ~1.5 cm, rotation and tip/tilt of ~5°, and in-plane displacement of ~5 mm (dependent on the physical size of the sensor and the SLM). Our auto-alignment method robustly achieved a DOPC playback peak-to-background ratio (PBR) corresponding to more than ~30% of the theoretical maximum. As an additional advantage, the auto-alignment procedure can be easily performed at will and, as such, allows us to correct for small mechanical drifts within the DOPC systems, thus overcoming a previously major DOPC system vulnerability. We believe that this reported method for implementing robust DOPC systems will broaden the practical utility of DOPC systems. PMID:24977504
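Digital propagation of the kind used to bridge a sensor-SLM axial misalignment is commonly done with the angular-spectrum method. Below is an illustrative sketch (not the authors' implementation; grid size, pitch and wavelength are assumed values) showing that the operation is exactly invertible for propagating field components, which is what makes a purely digital correction possible.

```python
import numpy as np

def angular_spectrum(field, wavelength_m, pitch_m, z_m):
    """Propagate a sampled complex field by z metres with the
    angular-spectrum method: FFT, multiply by the free-space transfer
    function exp(i*kz*z), inverse FFT. Evanescent components are dropped."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch_m)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = (1.0 / wavelength_m) ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(kz_sq, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z_m))

# Round trip: propagating forward then backward recovers the input field
n, pitch, lam = 128, 8e-6, 532e-9
x = (np.arange(n) - n / 2) * pitch
xx, yy = np.meshgrid(x, x)
gauss = np.exp(-(xx ** 2 + yy ** 2) / (2 * (60e-6) ** 2)).astype(complex)
back = angular_spectrum(angular_spectrum(gauss, lam, pitch, 0.015),
                        lam, pitch, -0.015)
err = np.max(np.abs(back - gauss))
```

The forward and backward transfer functions are complex conjugates, so the round trip is the identity up to floating-point error; a misaligned plane can therefore be "moved" to the correct one entirely in software.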
NASA Astrophysics Data System (ADS)
Berdalovic, I.; Bates, R.; Buttar, C.; Cardella, R.; Egidos Plaja, N.; Hemperek, T.; Hiti, B.; van Hoorne, J. W.; Kugathasan, T.; Mandic, I.; Maneuski, D.; Marin Tobon, C. A.; Moustakas, K.; Musa, L.; Pernegger, H.; Riedler, P.; Riegel, C.; Schaefer, D.; Schioppa, E. J.; Sharma, A.; Snoeys, W.; Solans Sanchez, C.; Wang, T.; Wermes, N.
2018-01-01
The upgrade of the ATLAS tracking detector (ITk) for the High-Luminosity Large Hadron Collider at CERN requires the development of novel radiation hard silicon sensor technologies. Latest developments in CMOS sensor processing offer the possibility of combining high-resistivity substrates with on-chip high-voltage biasing to achieve a large depleted active sensor volume. We have characterised depleted monolithic active pixel sensors (DMAPS), which were produced in a novel modified imaging process implemented in the TowerJazz 180 nm CMOS process in the framework of the monolithic sensor development for the ALICE experiment. Sensors fabricated in this modified process feature full depletion of the sensitive layer, a sensor capacitance of only a few fF and radiation tolerance up to 10^15 neq/cm^2. This paper summarises the measurements of charge collection properties in beam tests and in the laboratory using radioactive sources and edge TCT. The results of these measurements show significantly improved radiation hardness obtained for sensors manufactured using the modified process. This has opened the way to the design of two large scale demonstrators for the ATLAS ITk. To achieve a design compatible with the requirements of the outer pixel layers of the tracker, a charge sensitive front-end taking 500 nA from a 1.8 V supply is combined with a fast digital readout architecture. The low-power front-end with a 25 ns time resolution exploits the low sensor capacitance to reduce noise and analogue power, while the implemented readout architectures minimise power by reducing the digital activity.
3D digital image correlation methods for full-field vibration measurement
NASA Astrophysics Data System (ADS)
Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter; Schmidt, Timothy
2011-04-01
In the area of modal test/analysis/correlation, significant effort has been expended over the past twenty years in order to make reduced models and to expand test data for correlation and eventual updating of the finite element models. This has been restricted by vibration measurements which are traditionally limited to the location of relatively few applied sensors. Advances in computers and digital imaging technology have allowed 3D digital image correlation (DIC) methods to measure the shape and deformation of a vibrating structure. This technique allows for full-field measurement of structural response, thus providing a wealth of simultaneous test data. This paper presents some preliminary results for the test/analysis/correlation of data measured using the DIC approach along with traditional accelerometers and a scanning laser vibrometer for comparison to a finite element model. The results indicate that all three approaches correlated well with the finite element model and provide validation for the DIC approach for full-field vibration measurement. Some of the advantages and limitations of the technique are presented and discussed.
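The core matching step behind DIC can be sketched compactly: a speckle subset from the reference image is slid over the deformed image, and the shift maximising the zero-normalised cross-correlation (ZNCC) is taken as the local displacement. This integer-pixel toy version only illustrates the principle; real 3D-DIC adds subpixel shape functions and a calibrated stereo camera pair.

```python
import numpy as np

def ncc_shift(ref, cur, search=3):
    """Integer-pixel DIC sketch: return the (dy, dx) shift of the
    reference subset inside the current image that maximises ZNCC."""
    h, w = ref.shape
    r = (ref - ref.mean()) / ref.std()
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sub = cur[search + dy:search + dy + h,
                      search + dx:search + dx + w]
            s = (sub - sub.mean()) / sub.std()
            score = float((r * s).mean())      # ZNCC, in [-1, 1]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic speckle pattern displaced by (2, -1) pixels
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
cur = rng.random((22, 22))
cur[3 + 2:3 + 2 + 16, 3 - 1:3 - 1 + 16] = ref
shift = ncc_shift(ref, cur)
```

Repeating this over a grid of subsets yields the full-field displacement map that is then correlated against the finite element model.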
Wongniramaikul, Worawit; Limsakul, Wadcharawadee; Choodum, Aree
2018-05-30
A biodegradable colorimetric film was fabricated on the lid of a portable tube for in-tube formaldehyde detection. Based on the entrapment of colorimetric reagents within a thin film of tapioca starch, a yellow reaction product was observed with formaldehyde. The intensity of the blue channel from the digital image of the yellow product showed a linear relationship over the range of 0-25 mg L⁻¹ with a low detection limit of 0.7 ± 0.1 mg L⁻¹. Inter-day precisions of 0.61-3.10% RSD were obtained, with less than 4.2% relative error from control samples. The developed method was applied to various food samples in Phuket; formaldehyde concentrations ranged from non-detectable to 1.413 mg kg⁻¹. The quantified concentrations of formaldehyde in fish and squid samples showed relative errors of -7.7% and +10.8% compared to spectrophotometry. This low-cost sensor (∼0.04 USD/test) with digital image colorimetry is thus an effective alternative for formaldehyde detection in food samples. Copyright © 2018 Elsevier Ltd. All rights reserved.
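Digital image colorimetry of this kind reduces to an ordinary linear calibration between a channel intensity and concentration. A minimal sketch of that step; the intensity values below are made up for illustration and are not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares y = a*x + b, as used to calibrate a
    colour-channel intensity against standard concentrations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration: blue intensity drops as concentration rises
conc = [0, 5, 10, 15, 20, 25]            # mg/L standards
blue = [200, 180, 160, 140, 120, 100]    # mean blue-channel values
a, b = fit_line(blue, conc)
unknown = a * 150 + b                     # read back an unknown sample
```

With these toy values the fit is conc = -0.25 x blue + 50, so a sample with mean blue intensity 150 maps to 12.5 mg/L; in practice the detection limit and precision figures quoted in the abstract bound how far such a readout can be trusted.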
Transparent Fingerprint Sensor System for Large Flat Panel Display.
Seo, Wonkuk; Pi, Jae-Eun; Cho, Sung Haeung; Kang, Seung-Youl; Ahn, Seong-Deok; Hwang, Chi-Sun; Jeon, Ho-Sik; Kim, Jong-Uk; Lee, Myunghee
2018-01-19
In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To obtain the fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass between a human finger and the electrode of each pixel of the sensor array. Three methods for estimating the self-capacitance are reviewed. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate a human finger's ridges and valleys through the fingerprint sensor array.
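Why ridges and valleys are distinguishable can be seen from a simple series-capacitor model of one pixel: a ridge in contact with the cover glass sees the glass capacitance alone, while a valley adds an air gap in series, lowering the measured capacitance. The numbers below are assumed toy values (only the 50 μm pixel size comes from the abstract), not the paper's measurements:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_cap(eps_r, area_m2, gap_m):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def pixel_capacitance(air_gap_m, glass_thickness_m=0.5e-3, eps_glass=7.0,
                      pixel_area_m2=(50e-6) ** 2):
    """Series model of one sensing pixel: cover glass plus any air gap
    under a fingerprint valley (assumed geometry and permittivities)."""
    c_glass = plate_cap(eps_glass, pixel_area_m2, glass_thickness_m)
    if air_gap_m <= 0:
        return c_glass                      # ridge touching the glass
    c_air = plate_cap(1.0, pixel_area_m2, air_gap_m)
    return c_glass * c_air / (c_glass + c_air)   # series combination

ridge = pixel_capacitance(0.0)
valley = pixel_capacitance(50e-6)           # ~50 um air gap under a valley
```

Both values land in the sub-femtofarad range for a 50 μm pixel, which is why a dedicated low-noise AFE and SAR ADC chain is needed to resolve the ridge-valley contrast.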
Smart Sensors for Launch Vehicles
NASA Astrophysics Data System (ADS)
Ray, Sabooj; Mathews, Sheeja; Abraham, Sheena; Pradeep, N.; Vinod, P.
2017-12-01
Smart sensors bring a paradigm shift in the data acquisition mechanism adopted for launch vehicle telemetry systems. The sensors integrate signal conditioners, digitizers and communication systems to give digital output from the measurement location. Multiple sensors communicate with a centralized node over a common digital data bus. An in-built microcontroller gives the sensor embedded intelligence to carry out corrective action for sensor inaccuracies. A smart pressure sensor has been realized and flight-proven, increasing reliability and simplifying integration while improving data output. Miniaturization is achieved by innovative packaging. This work discusses the construction, working and flight performance of such a sensor.
BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry
NASA Technical Reports Server (NTRS)
Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)
1994-01-01
Until recent times, optical remote sensing of ecosystem properties from space has been limited to broad-band multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays, coupled with a patented design for a two-beam interferometer, have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production, and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).
In-situ characterization of wildland fire behavior
Bret Butler; D. Jimenez; J. Forthofer; Paul Sopko; K. Shannon; Jim Reardon
2010-01-01
A system consisting of two enclosures has been developed to characterize wildland fire behavior. The first enclosure is a sensor/data logger combination that measures and records convective/radiant energy released by the fire. The second is a digital video camera housed in a fireproof enclosure that records visual images of fire behavior. Together this system provides...
Meteor Film Recording with Digital Film Cameras with large CMOS Sensors
NASA Astrophysics Data System (ADS)
Slansky, P. C.
2016-12-01
In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their costs of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras that are very interesting for meteor observation have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.
Knowledge-based imaging-sensor fusion system
NASA Technical Reports Server (NTRS)
Westrom, George
1989-01-01
An imaging system which applies knowledge-based technology to supervise and control both the sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with knowledge-based processing and also include neural net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on board Space Station Freedom with a high-frame-rate, high-resolution camera. All the data cannot possibly be acquired from a laboratory on Earth; in fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.
Fourier transform digital holographic adaptive optics imaging system
Liu, Changgeng; Yu, Xiao; Kim, Myung K.
2013-01-01
A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens. The spherical curvature introduced by the optics, except the eye lens itself, is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. Also, the light signal will be stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture no longer limits the resolution, and the possibility of using low-coherence or incoherent illumination is opened. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541
End-to-end learning for digital hologram reconstruction
NASA Astrophysics Data System (ADS)
Xu, Zhimin; Zuo, Si; Lam, Edmund Y.
2018-02-01
Digital holography is a well-known method of performing three-dimensional imaging by recording the light wavefront information originating from the object. Not only the intensity but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, reconstructions via traditional methods suffer from various artifacts caused by the twin image, the zero-order term, and noise from image sensors. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We experimentally show that the artifacts can be effectively suppressed. Meanwhile, our network does not need any preprocessing for initialization, and is comparably fast to train and test relative to a recently published learning-based method. In addition, we validate that a performance improvement can be achieved by introducing a prior on sparsity.
Infrared imagery acquisition process supporting simulation and real image training
NASA Astrophysics Data System (ADS)
O'Connor, John
2012-05-01
The increasing use of infrared sensors requires development of advanced infrared training and simulation tools to meet current Warfighter needs. In order to prepare the force, a challenge exists for training and simulation images to be both realistic and consistent with each other to be effective and avoid negative training. The US Army Night Vision and Electronic Sensors Directorate has corrected this deficiency by developing and implementing infrared image collection methods that meet the needs of both real image trainers and real-time simulations. The author presents innovative methods for collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for US Army, and USMC Recognition of Combat Vehicles (ROC-V) real image combat ID training and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.
A novel camera localization system for extending three-dimensional digital image correlation
NASA Astrophysics Data System (ADS)
Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher
2018-03-01
The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. That is, the position of the cameras relative to each other (i.e. separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between them. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitoring large structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' positions in space for performing accurate 3D-DIC calibration and measurements.
Circuitry, systems and methods for detecting magnetic fields
Kotter, Dale K [Shelley, ID; Spencer, David F [Idaho Falls, ID; Roybal, Lyle G [Idaho Falls, ID; Rohrbaugh, David T [Idaho Falls, ID
2010-09-14
Circuitry for detecting magnetic fields includes a first magnetoresistive sensor and a second magnetoresistive sensor configured to form a gradiometer. The circuitry includes a digital signal processor and a first feedback loop coupled between the first magnetoresistive sensor and the digital signal processor. A second feedback loop which is discrete from the first feedback loop is coupled between the second magnetoresistive sensor and the digital signal processor.
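The gradiometer principle described above — two matched magnetoresistive sensors whose outputs are differenced so a uniform background field cancels — can be illustrated with a minimal numeric sketch. The function name, the linear sensor model, and the field values are illustrative assumptions; the patent's feedback loops and DSP stages are not modeled:

```python
def gradiometer_output(b_field_at_s1, b_field_at_s2, sensitivity=1.0):
    """Difference of two magnetoresistive sensor readings: a uniform
    background field appears identically at both sensors and cancels,
    while a nearby source produces a gradient that survives."""
    v1 = sensitivity * b_field_at_s1
    v2 = sensitivity * b_field_at_s2
    return v1 - v2

# A uniform 50 uT background cancels; a 2 uT local anomaly at sensor 1 does not.
background = 50e-6
anomaly = 2e-6
signal = gradiometer_output(background + anomaly, background)
```

In the actual circuitry the per-sensor feedback loops keep each element in its linear operating region before the difference is taken, which is what makes the simple linear model above a reasonable approximation.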
Digital pyramid wavefront sensor with tunable modulation.
Akondi, Vyas; Castillo, Sara; Vohnsen, Brian
2013-07-29
The pyramid wavefront sensor is known for its high sensitivity and a dynamic range that can be tuned by mechanically altering its modulation amplitude. Here, a novel digital modulation scheme employing a reflective phase-only spatial light modulator is demonstrated. The modulator permits an easily reconfigurable pyramid with digital control of the apex angle and modulation geometry, without the need for mechanically moving parts. Aberrations introduced by a 140-actuator deformable mirror were simultaneously sensed with a commercial Hartmann-Shack wavefront sensor. The wavefronts reconstructed using the digital pyramid wavefront sensor matched very closely those sensed by the Hartmann-Shack. It is noted that tunable modulation is necessary to operate the wavefront sensor in the linear regime and to sense aberrations accurately. Through simulations, it is shown that the wavefront sensor can be extended to astronomical applications as well. This novel digital pyramid wavefront sensor has the potential to become an attractive option in both open- and closed-loop adaptive optics systems.
Solid-state Image Sensor with Focal-plane Digital Photon-counting Pixel Array
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Pain, Bedabrata
1997-01-01
A solid-state focal-plane imaging system comprises an NxN array of high-gain, low-noise unit cells, each unit cell being connected to a different one of the photovoltaic detector diodes (one for each unit cell) interspersed in the array for ultralow-level image detection, and a plurality of digital counters coupled to the outputs of the unit cells by a multiplexer (either a separate counter for each unit cell, or a row of N counters time-shared with N rows of unit cells). Each unit cell includes two self-biasing cascode amplifiers in cascade for a high charge-to-voltage conversion gain (greater than 1 mV/e-) and an electronic switch to reset the input capacitance to a reference potential, so that detection of an incident photon can be discriminated from the photoelectron (e-) generated in the detector diode at the input of the first cascode amplifier, and incident photons can be counted individually in a digital counter connected to the output of the second cascode amplifier. Resetting the input capacitance and initiating self-biasing of the amplifiers occurs every clock cycle of an integrating period, enabling ultralow-light-level image detection by the array of photovoltaic detector diodes under conditions where the photon flux statistically provides only a single photon at a time incident on any one detector diode during any clock cycle.
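The per-cycle reset-and-discriminate logic described above can be sketched in a few lines. The gain and threshold values below are illustrative assumptions (the patent only specifies gain greater than 1 mV/e-), and the function models one diode's counter over an integration period:

```python
def count_photons(photo_electrons_per_cycle, gain_mV_per_e=1.5, threshold_mV=0.5):
    """Each clock cycle the input node is reset, so the cycle's output
    voltage reflects only that cycle's photoelectrons; a comparator
    threshold below one electron's worth of signal discriminates
    single-photon events, and a digital counter accumulates the hits."""
    counts = 0
    for n_e in photo_electrons_per_cycle:
        v_out = n_e * gain_mV_per_e      # high charge-to-voltage conversion gain
        if v_out > threshold_mV:         # single-photon discrimination
            counts += 1
    return counts

# Under ultralow flux, at most one photon arrives per cycle per diode,
# so the count equals the number of photon events.
cycles = [0, 1, 0, 0, 1, 1, 0]
photons = count_photons(cycles)
```

The scheme works only because the ultralow-flux assumption holds: if two photons could land in one cycle, a simple one-bit discriminator would undercount.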
Griffiths, J A; Chen, D; Turchetta, R; Royle, G J
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast light-decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK-based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor
NASA Astrophysics Data System (ADS)
Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast light-decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK-based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10³ to 1.6 at a gain of 3.93 × 10¹. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10³, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
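The mean-variance analysis used above is the standard photon-transfer technique: in the shot-noise-limited regime the variance of a uniformly illuminated frame grows linearly with its mean, the slope giving the system gain in DU per detected event and the intercept the read-noise variance. A minimal sketch with synthetic data (the function name and the example gain values are illustrative, not taken from the paper):

```python
import numpy as np

def photon_transfer_gain(means, variances):
    """Fit variance = gain * mean + read_noise_variance by linear
    least squares over a set of uniformly illuminated exposures."""
    slope, intercept = np.polyfit(means, variances, 1)
    return slope, intercept

# Synthetic shot-noise-limited data: gain 4 DU/event, read-noise variance 25 DU^2.
means = np.array([100.0, 500.0, 1000.0, 2000.0, 3000.0])
variances = 4.0 * means + 25.0
gain, read_var = photon_transfer_gain(means, variances)
```

In practice the saturation region (around 3050 DU here) must be excluded from the fit, since variance collapses rather than growing once pixels clip.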
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadowski, F.G.; Covington, S.J.
1987-01-01
Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The results of the data processing and analysis illustrate the spectral and spatial capabilities of the two sensor systems and provide information about the severity and duration of the events occurring at the power plant site.
HYMOSS signal processing for pushbroom spectral imaging
NASA Technical Reports Server (NTRS)
Ludwig, David E.
1991-01-01
The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
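The two-point calibration the program implemented in hardware has a simple software analogue: a dark frame supplies the per-channel offsets and a flat-field frame supplies the per-channel gains. The sketch below is that software analogue under the usual linear-detector assumption; the function name and the flat-field target level are illustrative:

```python
import numpy as np

def two_point_calibrate(raw, dark_frame, flat_frame, flat_target=1.0):
    """Two-point non-uniformity correction per detector channel:
    subtract the stored per-channel offset (dark response), then apply
    a per-channel linear gain that maps the flat-field response of every
    channel to a common target level."""
    offset = dark_frame
    gain = flat_target / (flat_frame - dark_frame)
    return (raw - offset) * gain
```

On the focal plane itself the same two numbers are realized differently — the gain by trimming the TIA feedback capacitance, the offset by a digital subtraction from the offset register during multiplexing — but the corrected output is the same linear map.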
HYMOSS signal processing for pushbroom spectral imaging
NASA Astrophysics Data System (ADS)
Ludwig, David E.
1991-06-01
The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program, which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
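The "multiple capture" idea for dynamic range can be sketched simply: take several exposures of increasing duration within one output frame time, and for each pixel keep the longest exposure that has not yet saturated, rescaled to a common exposure. The function below is an illustrative reconstruction of that principle, not the paper's on-chip algorithm; the full-well value and exposure times are assumptions:

```python
import numpy as np

def multiple_capture_hdr(captures, exposure_times, full_well=255):
    """Combine captures taken at increasing exposure times: for each
    pixel, use the longest unsaturated capture (lowest relative shot
    noise), normalized by its exposure time to a common scale."""
    captures = np.asarray(captures, dtype=float)
    out = captures[0] / exposure_times[0]            # shortest capture: always valid
    for frame, t in zip(captures, exposure_times):
        unsaturated = frame < full_well
        out = np.where(unsaturated, frame / t, out)  # later (longer) frames win
    return out

# Two captures, 1 ms and 10 ms: the bright pixel saturates in the long
# exposure and falls back to the short one; the dark pixel uses the long one.
hdr = multiple_capture_hdr([np.array([10.0, 200.0]), np.array([100.0, 255.0])],
                           [1.0, 10.0])
```

Dark regions thus gain the SNR of the long integration while bright regions retain headroom from the short captures, which is exactly why the high-frame-rate readout matters.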
Study on key techniques for camera-based hydrological record image digitization
NASA Astrophysics Data System (ADS)
Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping
2015-10-01
With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine, and the mining industry, grid drawing sheets are commonly used to record the observations from sensors. However, these paper drawings may be destroyed or contaminated by improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization, and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point location is introduced which, with the help of grid distribution information, avoids point-by-point processing. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes the domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.
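The binarization step fuses color and gradient evidence. A toy version of that idea: score each pixel by its closeness to the expected trace color and by local gradient strength, then threshold the weighted sum. The weights, threshold, and fixed fusion rule below are illustrative stand-ins for the paper's adaptive scheme:

```python
import numpy as np

def fused_binarize(image_rgb, target_color, w_color=0.6, w_grad=0.4, thresh=0.5):
    """Toy fusion of color and gradient evidence for curve extraction:
    pixels score high when close to the expected trace color and when
    lying on strong intensity gradients; the weighted score is
    thresholded to a binary mask."""
    img = np.asarray(image_rgb, dtype=float)
    color_dist = np.linalg.norm(img - np.asarray(target_color, float), axis=-1)
    denom = color_dist.max() if color_dist.max() > 0 else 1.0
    color_score = 1.0 - color_dist / denom           # 1 at the target color
    gray = img.mean(axis=-1)
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)
    grad_score = grad / grad.max() if grad.max() > 0 else grad
    score = w_color * color_score + w_grad * grad_score
    return score >= thresh
```

Fusing the two cues is what makes the method robust on stained sheets: color alone fails where ink has faded, gradient alone fires on grid lines as well as traces.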
An objective protocol for comparing the noise performance of silver halide film and digital sensor
NASA Astrophysics Data System (ADS)
Cao, Frédéric; Guichard, Frédéric; Hornung, Hervé; Tessière, Régis
2012-01-01
Digital sensors have obviously invaded the photography mass market. However, some photographers with very high expectations still use silver halide film. Are they merely nostalgic and reluctant to adopt new technology, or is there more than meets the eye? The answer is not so easy if we remark that, at the end of the golden age, films were actually scanned before development. Nowadays film users have adopted digital technology and scan their film to take advantage of digital processing afterwards. Therefore, it is legitimate to evaluate silver halide film "with a digital eye", under the assumption that processing can be applied as for a digital camera. The article describes in detail the operations needed to treat the film as a RAW digital sensor. In particular, we have to account for the film characteristic curve, the autocorrelation of the noise (related to film grain), and the sampling of the digital sensor (related to the Bayer filter array). We also describe the protocol that was set up, from shooting to scanning. We then present and interpret the results for sensor response, signal-to-noise ratio, and dynamic range.
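Once a uniformly exposed patch has been linearized through the characteristic curve, the basic SNR measurement is the patch mean over its standard deviation. For scanned film the per-pixel variance alone understates the low-frequency noise power because grain correlates neighboring pixels; a crude check for that is the lag-1 autocorrelation of the residuals. The function below is a minimal sketch of those two measurements, not the article's full protocol:

```python
import numpy as np

def patch_snr_db(patch_linear):
    """SNR in dB of a uniformly exposed, linearized patch, plus the
    lag-1 autocorrelation of the residuals as a grain-correlation
    indicator (near zero for white sensor noise, positive for grain)."""
    p = np.asarray(patch_linear, dtype=float).ravel()
    snr = p.mean() / p.std(ddof=1)
    resid = p - p.mean()
    lag1 = (resid[:-1] * resid[1:]).sum() / (resid ** 2).sum()
    return 20.0 * np.log10(snr), lag1
```

A digital sensor's Bayer sampling and a scanner's grain correlation thus enter the comparison through the noise spectrum, not just its total power, which is why the article treats autocorrelation explicitly.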
Determining approximate age of digital images using sensor defects
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2011-02-01
The goal of temporal forensics is to establish the temporal relationship among two or more pieces of evidence. In this paper, we focus on digital images and describe a method with which an analyst can estimate the acquisition time of an image, given a set of other images from the same camera whose time ordering is known. This is achieved by first estimating the parameters of pixel defects, including their onsets, and then detecting their presence in the image under investigation. Both estimators are constructed using the maximum-likelihood principle. The accuracy and limitations of this approach are illustrated in experiments with three cameras. Forensic and law-enforcement analysts are expected to benefit from this technique in situations when the temporal data stored in the EXIF header is lost due to processing or editing images off-line, or when the header cannot be trusted. Reliable methods for establishing the temporal order between individual pieces of evidence can help reveal deception attempts of an adversary or a criminal. The causal relationship may also provide information about the whereabouts of the photographer.
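The onset-estimation step is a change-point problem: a pixel reads near its normal level until some acquisition index, then shifts when the defect appears. A toy maximum-likelihood version under a Gaussian model with shared variance reduces to minimizing the total residual sum of squares over candidate split points. This is a simplified stand-in for the paper's estimator, which models defect parameters jointly:

```python
import numpy as np

def estimate_onset(values):
    """Toy change-point estimate of a pixel defect's onset: for each
    candidate onset index k, model readings as Gaussian with one mean
    before k and another after, and pick the split minimizing the total
    residual sum of squares (ML estimate under shared variance)."""
    v = np.asarray(values, dtype=float)
    best_k, best_rss = None, np.inf
    for k in range(1, len(v)):
        rss = ((v[:k] - v[:k].mean()) ** 2).sum() \
            + ((v[k:] - v[k:].mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k
```

With onsets estimated for many defects across the reference set, dating a questioned image reduces to testing which defects are present in it.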
3D motion picture of transparent gas flow by parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Awatsuji, Yasuhiro; Fukuda, Takahito; Wang, Yexin; Xia, Peng; Kakue, Takashi; Nishio, Kenzo; Matoba, Osamu
2018-03-01
Parallel phase-shifting digital holography is a technique capable of quantitatively recording a three-dimensional (3D) motion picture of a dynamic object. The technique records a single hologram of the object with an image sensor having a phase-shift array device and reconstructs the instantaneous 3D image of the object with a computer. The multiple holograms required for phase-shifting digital holography are multiplexed into this single hologram, pixel by pixel, using a space-division multiplexing technique. We demonstrate a 3D motion picture of a dynamic, transparent gas flow recorded and reconstructed by the technique. A compressed-air duster was used to generate the gas flow. A motion picture of the hologram of the gas flow was recorded at 180,000 frames/s by parallel phase-shifting digital holography. The phase motion picture of the gas flow was reconstructed from the motion picture of the hologram. The Abel inversion was applied to the phase motion picture, and the 3D motion picture of the gas flow was thus obtained.
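The Abel inversion converts the line-of-sight phase projection into a radial profile for an axially symmetric flow. A minimal discrete version is "onion peeling": slice the flow into concentric rings, build the (upper-triangular) matrix of chord lengths through each ring, and solve. The sketch below illustrates that scheme; real implementations add smoothing because the inversion amplifies noise:

```python
import numpy as np

def onion_peeling_matrix(n, h=1.0):
    """Discrete Abel projection matrix: ring j spans radii [j*h, (j+1)*h];
    the chord at lateral offset y_i = i*h crosses ring j (j >= i) with
    path length 2*(sqrt(r_out^2 - y^2) - sqrt(r_in^2 - y^2))."""
    A = np.zeros((n, n))
    for i in range(n):
        y = i * h
        for j in range(i, n):
            r_in, r_out = j * h, (j + 1) * h
            A[i, j] = 2.0 * (np.sqrt(max(r_out ** 2 - y ** 2, 0.0))
                             - np.sqrt(max(r_in ** 2 - y ** 2, 0.0)))
    return A

def abel_invert(projection, h=1.0):
    """Recover the radial profile from its line-of-sight projection by
    solving the upper-triangular onion-peeling system."""
    A = onion_peeling_matrix(len(projection), h)
    return np.linalg.solve(A, projection)
```

Applied frame by frame to the reconstructed phase maps, this is what turns the 2D phase motion picture into the 3D refractive-index (and hence density) motion picture of the gas flow.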
Enhancement of low light level images using color-plus-mono dual camera.
Jung, Yong Ju
2017-05-15
In digital photography, improving image quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low-light-level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and a mono image pair of the same scene, could be useful for improving the quality of low-light-level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving image quality during low-light shooting.
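The selectivity principle — transfer mono detail only where the two views agree — can be sketched with a local dissimilarity mask. The box-filter detail extraction, the threshold, and the blend weight below are illustrative simplifications of the paper's guided-filter and BJND machinery:

```python
import numpy as np

def selective_detail_transfer(color_luma, mono, window=3, max_dissim=10.0, alpha=0.5):
    """Toy selective fusion: compute a local dissimilarity between the
    color image's luminance and the mono image, and add the mono image's
    high-frequency detail only at pixels where the two views agree
    (low dissimilarity), leaving suspect pixels untouched."""
    c = np.asarray(color_luma, float)
    m = np.asarray(mono, float)
    pad = window // 2

    def box_blur(x):
        out = np.zeros_like(x)
        xp = np.pad(x, pad, mode='edge')
        for dy in range(window):
            for dx in range(window):
                out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return out / (window * window)

    dissim = box_blur(np.abs(c - m))
    detail = m - box_blur(m)                 # mono high-frequency component
    reliable = dissim <= max_dissim
    return np.where(reliable, c + alpha * detail, c)
```

Pixels that fail the agreement test (occlusions, parallax mismatches) keep the original color value, which is exactly the mechanism that suppresses fusion artifacts.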
Automation in photogrammetry: Recent developments and applications (1972-1976)
Thompson, M.M.; Mikhail, E.M.
1976-01-01
An overview of recent developments in the automation of photogrammetry in various countries is presented. Conclusions regarding automated photogrammetry reached at the 1972 Congress in Ottawa are reviewed first as a background for examining the developments of 1972-1976. Applications are described for each country reporting significant developments. Among fifteen conclusions listed are statements concerning: the widespread practice of equipping existing stereoplotters with simple digitizers; the growing tendency to use minicomputers on-line with stereoplotters; the optimization of production of digital terrain models by progressive sampling in stereomodels; the potential of digitization of a photogrammetric model by density correlation on epipolar lines; the capabilities and economic aspects of advanced systems which permit simultaneous production of orthophotos, contours, and digital terrain models; the economy of off-line orthophoto systems; applications of digital image processing; automation by optical techniques; applications of sensors other than photographic imagery, and the role of photogrammetric phases in a completely automated cartographic system. © 1976.
Seamless positioning and navigation by using geo-referenced images and multi-sensor data.
Li, Xun; Wang, Jinling; Li, Tao
2013-07-12
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as urban canyons), and it also provides excellent position accuracy for indoor environments.
NASA Astrophysics Data System (ADS)
Kusterer, M. B.; Mitchell, D. G.; Krimigis, S. M.; Vandegriff, J. D.
2014-12-01
Having been at Saturn for over a decade, the MIMI instrument on Cassini has created a rich dataset containing many details about Saturn's magnetosphere. In particular, the images of energetic neutral atoms (ENAs) taken by the Ion and Neutral Camera (INCA) offer a global perspective on Saturn's plasma environment. The MIMI team is now regularly making movies (in MP4 format) consisting of consecutive ENA images. The movies correct for spacecraft attitude changes by projecting the images (whose viewing angles can substantially vary from one image to the next) into a fixed inertial frame that makes it easy to view spatial features evolving in time. These movies are now being delivered to the PDS and are also available at the MIMI team web site. Several other higher order products are now also available, including 20-day energy-time spectrograms for the Charge-Energy-Mass Spectrometer (CHEMS) sensor, and daily energy-time spectrograms for the Low Energy Magnetospheric Measurements system (LEMMS) sensor. All spectrograms are available as plots or digital data in ASCII format. For all MIMI sensors, a Data User Guide is also available. This paper presents details and examples covering the specifics of MIMI higher order data products. URL: http://cassini-mimi.jhuapl.edu/
Seamless Positioning and Navigation by Using Geo-Referenced Images and Multi-Sensor Data
Li, Xun; Wang, Jinling; Li, Tao
2013-01-01
Ubiquitous positioning is considered to be a highly demanding application for today's Location-Based Services (LBS). While satellite-based navigation has achieved great advances in the past few decades, positioning and navigation in indoor scenarios and deep urban areas has remained a challenging topic of substantial research interest. Various strategies have been adopted to fill this gap, within which vision-based methods have attracted growing attention due to the widespread use of cameras on mobile devices. However, current vision-based methods using image processing have yet to reveal their full potential for navigation applications and are insufficient in many aspects. Therefore, in this paper we present a hybrid image-based positioning system that is intended to provide a seamless position solution in six degrees of freedom (6DoF) for location-based services in both outdoor and indoor environments. It mainly uses visual sensor input matched against geo-referenced images for image-based position resolution, and also takes advantage of multiple onboard sensors, including the built-in GPS receiver and digital compass, to assist the visual methods. Experiments demonstrate that such a system can greatly improve the position accuracy in areas where the GPS signal is negatively affected (such as urban canyons), and it also provides excellent position accuracy for indoor environments. PMID:23857267
NASA's Earth Science Use of Commercially Available Remote Sensing Datasets: Cover Image
NASA Technical Reports Server (NTRS)
Underwood, Lauren W.; Goward, Samuel N.; Fearon, Matthew G.; Fletcher, Rose; Garvin, Jim; Hurtt, George
2008-01-01
The cover image incorporates high-resolution stereo pairs acquired from the DigitalGlobe® QuickBird sensor. It shows a digital elevation model of Meteor Crater, Arizona, at approximately 1.3 meter point spacing. Image analysts used the Leica Photogrammetry Suite to produce the DEM. The outside portion was computed from two QuickBird panchromatic scenes acquired in October 2006, while an Optech laser scan dataset was used for the crater's interior elevations. The crater's terrain model and image drape were created in a NASA Constellation Program project focused on simulating lunar surface environments for prototyping and testing lunar surface mission analysis and planning tools. This work exemplifies NASA's Scientific Data Purchase legacy and commercial high-resolution imagery applications, as scientists use commercial high-resolution data to examine lunar-analog Earth landscapes for advanced planning and trade studies for future lunar surface activities. Other applications include landscape dynamics related to volcanism, hydrologic events, climate change, and ice movement.
Update on the biological effects of ionizing radiation, relative dose factors and radiation hygiene.
White, Stuart C; Mallya, S M
2012-03-01
Diagnostic imaging is an indispensable part of contemporary medical and dental practice. Over the last few decades there has been a dramatic increase in the use of ionizing radiation for diagnostic imaging. The carcinogenic effects of high-dose exposure are well known. Does diagnostic radiation rarely cause cancer? We don't know, but we should act as if it does. Accordingly, dentists should select patients wisely: only make radiographs when there is a patient-specific reason to expect that the radiograph will offer unique information influencing diagnosis or treatment. Low-dose examinations should be made: for intraoral imaging, use fast film or digital sensors, thyroid collars, and rectangular collimation; for panoramic and lateral cephalometric imaging, use digital systems or rare-earth film-screen combinations; and for cone beam computed tomography, use low-dose machines, restrict the field size to the region of interest, and reduce the mA and length of the exposure arc as appropriate. © 2012 Australian Dental Association.
NASA Astrophysics Data System (ADS)
Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark
2013-05-01
The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost real-time precision metadata annotator with embedded INS/GPS designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.
Saucedo-A, Tonatiuh; De la Torre-Ibarra, M H; Santoyo, F Mendoza; Moreno, Ivan
2010-09-13
The use of digital holographic interferometry for 3D measurements with three simultaneous illumination directions was demonstrated by Saucedo et al. (Optics Express 14(4), 2006). The technique records two consecutive images, each containing three holograms, i.e., one image before the deformation and one after it. A short-coherence-length laser must be used to obtain the simultaneous 3D information from the same laser source. In this manuscript we present an extension of this technique, now illuminating simultaneously with three different lasers at 458, 532 and 633 nm and using only one high-resolution monochrome CMOS sensor. This new configuration makes it possible to use long-coherence-length lasers, allowing the measurement of large object areas. A series of digital holographic interferograms are recorded, and the information corresponding to each laser is isolated in the Fourier spectral domain, where the corresponding phase difference is calculated. Experimental results render the orthogonal displacement components u, v and w during a simple load deformation.
SEM contour based metrology for microlens process studies in CMOS image sensor technologies
NASA Astrophysics Data System (ADS)
Lakcher, Amine; Ostrovsky, Alain; Le-Gratiet, Bertrand; Berthier, Ludovic; Bidault, Laurent; Ducoté, Julien; Jamin-Mornet, Clémence; Mortini, Etienne; Besacier, Maxime
2018-03-01
From the first digital cameras, which appeared during the 70s, to the cameras of current smartphones, image sensors have undergone significant technological development over the last decades. The development of CMOS image sensor technologies in the 90s has been the main driver of recent progress. The main component of an image sensor is the pixel. A pixel contains a photodiode connected to transistors, but only the photodiode area is light sensitive, which results in a significant loss of efficiency. To solve this issue, microlenses are used to focus the incident light on the photodiode. A microlens array is made out of a transparent material and has a spherical-cap shape. To obtain this spherical shape, a lithography process is performed to generate resist blocks which are then annealed above their glass transition temperature (reflow). Even if the dimensions to consider are larger than in advanced IC nodes, microlenses are sensitive to process variability during lithography and reflow. Good control of the microlens dimensions is key to optimizing the process and thus the performance of the final product. The purpose of this paper is to apply SEM contour metrology [1, 2, 3, 4] to microlenses in order to develop a relevant monitoring methodology and to propose new metrics for engineers to evaluate their process or optimize the design of the microlens arrays.
Imbe, Masatoshi
2018-03-20
The optical configuration proposed in this paper consists of a 4-f optical setup with a wavefront modulation device, such as a concave mirror or a spatial light modulator, on the Fourier plane. The transverse magnification of images reconstructed with the proposed configuration is independent of the locations of the object and the image sensor; therefore, reconstructed images of objects at different distances can be scaled with a fixed transverse magnification. This property follows from Fourier optics and is verified mathematically with the optical matrix method. Numerical simulation results and experimental results are also given to confirm the fixity of the reconstructed images.
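The fixed-magnification claim can be sketched with the ray-transfer (ABCD) matrix method the abstract invokes. The derivation below, for a general 4-f pair of focal lengths $f_1$ and $f_2$, is the standard afocal-system result rather than the paper's exact algebra:

```latex
% 4-f system: lens f_1, translation f_1 + f_2, lens f_2.
M_{4f} =
\begin{pmatrix} 1 & 0 \\ -1/f_2 & 1 \end{pmatrix}
\begin{pmatrix} 1 & f_1 + f_2 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ -1/f_1 & 1 \end{pmatrix}
=
\begin{pmatrix} -f_2/f_1 & f_1 + f_2 \\ 0 & -f_1/f_2 \end{pmatrix}.
% The lower-left element C = 0: the system is afocal. Adding an object
% distance d_o in front and a sensor distance d_i behind,
M' =
\begin{pmatrix} 1 & d_i \\ 0 & 1 \end{pmatrix}
M_{4f}
\begin{pmatrix} 1 & d_o \\ 0 & 1 \end{pmatrix}
=
\begin{pmatrix} -f_2/f_1 & B'(d_o, d_i) \\ 0 & -f_1/f_2 \end{pmatrix}.
% The imaging condition B' = 0 fixes d_i for a given d_o, but the
% transverse magnification m = A' = -f_2/f_1 is independent of both,
% i.e., objects at different distances are reconstructed at the same scale.
```

Because $C = 0$ survives the added translations, the upper-left element — the transverse magnification — never depends on where the object or the sensor sits, which is the fixity the paper demonstrates.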
An Efficient Image Recovery Algorithm for Diffraction Tomography Systems
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1993-01-01
A diffraction tomography system has potential application in the ultrasonic medical imaging area. It is capable of achieving imagery with the ultimate resolution of one quarter of the wavelength by collecting ultrasonic backscattering data from a circular array of sensors and reconstructing the object reflectivity using a digital image recovery algorithm performed by a computer. One advantage of such a system is that it allows a relatively lower-frequency wave to penetrate more deeply into the object and still achieve imagery with a reasonable resolution. An efficient image recovery algorithm for the diffraction tomography system was originally developed for processing wide-beam spaceborne SAR data...
Application of Optical Imaging Techniques for Quantification of pH and O2 Dynamicsin Porous Media
NASA Astrophysics Data System (ADS)
Li, B.; Seliman, A. F.; Pales, A. R.; Liang, W.; Sams, A.; Darnault, C. J. G.; DeVol, T. A.
2016-12-01
Understanding the spatial and temporal distribution of physical and chemical parameters (e.g., pH, O2) is imperative to characterize the behavior of contaminants in a natural environment. The objectives of this research are to calibrate pH and O2 sensor foils, to develop a dual pH/O2 sensor foil, and to apply them in flow and transport experiments, in order to understand the physical and chemical parameters that control contaminant fate and transport in an unsaturated sandy porous medium. In addition, a demonstration of a sensor foil that quantifies aqueous uranium concentration will be presented. Optical imaging techniques will be conducted with 2D tanks to investigate the influence of microbial exudates and plant roots on pH and O2 parameters and radionuclide transport. As a non-invasive method, the optical imaging technique utilizes optical chemical sensor films and either a digital camera or a spectrometer to capture the changes with high temporal and spatial resolution. Sensor foils are made for different parameters by applying dyes that generate fluorescence proportional to the parameter of interest. Preliminary results suggested that this method can detect pH ranging from 4.5 to 7.5. The uranium foil test with concentrations in the range of 2 to 8 ppm indicated that a higher concentration of uranium results in a greater color intensity.
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car microprocessor uses the KL25ZV128VLK4 from the Freescale series of single-chip microcomputers. The image sampling sensor is the CMOS digital camera OV7725. The acquired track data are processed by the corresponding algorithm to obtain track sideline information. At the same time, pulse width modulation (PWM) is used to control the motor and servo, and, based on the digital incremental PID algorithm, motor speed control and servo steering control are realized. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, and motor drive and servo control module, completing the design of the intelligent car control system.
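A digital incremental PID computes a change in actuator output each sample from the last three errors, which maps naturally onto PWM duty-cycle updates. A behavioural sketch with a toy first-order motor model (gains and plant constants are illustrative, not from this design):

```python
class IncrementalPID:
    """Digital incremental PID: each sample yields a *change* in output,
    so the accumulated value maps directly onto a PWM duty cycle."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0       # e[k-1], e[k-2]
        self.u = 0.0                  # accumulated duty cycle, %
        self.out_min, self.out_max = out_min, out_max

    def update(self, setpoint, measured):
        e = setpoint - measured
        du = (self.kp * (e - self.e1)             # proportional on delta-e
              + self.ki * e                       # integral action
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.u = min(self.out_max, max(self.out_min, self.u + du))
        return self.u

# Toy first-order motor model: speed lags behind the applied duty cycle.
pid = IncrementalPID(kp=0.8, ki=0.3, kd=0.05)
speed = 0.0
for _ in range(200):
    duty = pid.update(60.0, speed)    # target: 60 speed units
    speed += 0.1 * (duty - speed)     # plant response per control period
print(f"steady-state speed: {speed:.2f}")
```

The incremental form is popular on small MCUs because output clamping needs no separate anti-windup logic.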
An Evaluation of ALOS Data in Disaster Applications
NASA Astrophysics Data System (ADS)
Igarashi, Tamotsu; Furuta, Ryoich; Ono, Makoto
ALOS is the Advanced Land Observing Satellite, providing image data from three onboard sensors: PRISM, AVNIR-2 and PALSAR. PRISM is a panchromatic, high-resolution three-line stereo scanner for characterizing the earth surface. The positional accuracy in the image and the height accuracy of the Digital Surface Model (DSM) are high; therefore, geographic information extraction is improved in the field of disaster applications by providing images of disaster areas. In particular, pan-sharpened 3D imagery composed from PRISM and the four-band visible/near-infrared radiometer AVNIR-2 data is expected to provide information for understanding geographic and topographic features. PALSAR is an advanced multi-functional synthetic aperture radar (SAR) operated in L-band, appropriate for land surface feature characterization. PALSAR has many improvements over JERS-1/SAR, such as higher sensitivity, higher resolution, and polarimetric and ScanSAR observation modes. PALSAR is also applicable to SAR interferometry processing. This paper describes the evaluation of ALOS data characteristics from the viewpoint of disaster applications, through several example applications.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving the desired color reproduction in digital cameras. Failure to estimate the illumination chromaticity correctly results in an incorrect overall color cast in the image that is easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and a low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only the illumination chromaticity is characterized, instead of the full color gamut for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used to demonstrate the good performance of the algorithm in comparison to state-of-the-art color constancy algorithms.
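The idea of constraining the illuminant estimate to a characterized chromaticity range can be illustrated with a gray-world estimator clipped to a feasible (r, g) region. This is a simplified stand-in for the paper's sensor characterization, with hypothetical gamut numbers:

```python
import numpy as np

def estimate_illuminant(rgb, chroma_range):
    """Gray-world illuminant estimate, clipped to a characterised range of
    feasible (r, g) sensor chromaticities. A simplified stand-in for the
    characterisation-based algorithm described in the abstract."""
    mean = rgb.reshape(-1, 3).mean(axis=0)
    r, g = mean[0] / mean.sum(), mean[1] / mean.sum()
    (r_lo, r_hi), (g_lo, g_hi) = chroma_range
    return min(max(r, r_lo), r_hi), min(max(g, g_lo), g_hi)

# Scene with a bluish cast; the characterised illuminant range (hypothetical
# numbers) keeps the estimate inside the physically plausible region.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, (64, 64, 3)) * np.array([0.7, 0.9, 1.4])
r, g = estimate_illuminant(img, ((0.20, 0.45), (0.25, 0.45)))
print(f"estimated illuminant chromaticity: r={r:.3f}, g={g:.3f}")
```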
A digital pixel cell for address event representation image convolution processing
NASA Astrophysics Data System (ADS)
Camunas-Mesa, Luis; Acosta-Jimenez, Antonio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe
2005-06-01
Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate events according to their information levels. Neurons with more information (activity, derivative of activity, contrast, motion, edges, ...) generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. AER technology has been used and reported for the implementation of various types of image sensors or retinae: luminance with local AGC, contrast retinae, motion retinae, ... There has also been a proposal for realizing programmable-kernel image convolution chips. Such convolution chips would contain an array of pixels that perform weighted addition of events. Once a pixel has added sufficient event contributions to reach a fixed threshold, the pixel fires an event, which is then routed out of the chip for further processing. Such convolution chips have been proposed to be implemented using pulsed current-mode mixed analog and digital circuit techniques. In this paper we present a fully digital pixel implementation to perform the weighted additions and fire the events. This way, for a given technology, there is a fully digital reference implementation against which to compare the mixed-signal implementations. We have designed, implemented and tested a fully digital AER convolution pixel. This pixel will be used to implement a full AER convolution chip for programmable-kernel image convolution processing.
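The digital pixel behaviour described above, weighted event accumulation with threshold-and-fire, can be modelled in a few lines. A behavioural sketch of a small pixel array (array size, kernel and threshold are illustrative, not from the chip):

```python
def convolution_pixel_events(input_events, kernel, threshold, size=8):
    """Behavioural model of a digital AER convolution array: every input
    event adds the kernel, centred on the event address, to per-pixel
    accumulators; a pixel reaching the threshold fires and resets."""
    acc = [[0] * size for _ in range(size)]
    fired = []
    kh, kw = len(kernel), len(kernel[0])
    for (x, y) in input_events:
        for i in range(kh):
            for j in range(kw):
                px, py = x + i - kh // 2, y + j - kw // 2
                if 0 <= px < size and 0 <= py < size:
                    acc[px][py] += kernel[i][j]
                    if acc[px][py] >= threshold:
                        fired.append((px, py))   # output event on the AER bus
                        acc[px][py] = 0
    return fired

# A 3x3 smoothing kernel; five repeated events at (4, 4) make the centre
# accumulator cross the threshold exactly once (4 + 4 + 4 + 4 = 16).
kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
print(convolution_pixel_events([(4, 4)] * 5, kernel, threshold=16))
```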
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
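The vignetting correction amounts to dividing each image by a radial fall-off model estimated from a flat-field calibration, after which ratio-based vegetation indices normalize out residual illumination differences. A minimal numpy sketch with an assumed polynomial fall-off model (the coefficients are illustrative, not the paper's):

```python
import numpy as np

def correct_vignetting(img, k1, k2):
    """Divide out a radial fall-off model V(r) = 1 + k1*r^2 + k2*r^4, with
    r the normalised distance to the image centre. The polynomial form and
    coefficients are assumptions for this sketch; in practice they would
    be fitted to a flat-field calibration image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    return img / (1.0 + k1 * r2 + k2 * r2 ** 2)

def ndvi(nir, red):
    """Normalised difference vegetation index; the ratio form also cancels
    residual multiplicative illumination differences between dates."""
    return (nir - red) / (nir + red + 1e-9)

# Synthetic flat field darkened toward the corners, then corrected back.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
observed = 100.0 * (1.0 - 0.3 * r2 - 0.1 * r2 ** 2)
corrected = correct_vignetting(observed, -0.3, -0.1)
print(f"corrected flat field: mean={corrected.mean():.2f}, std={corrected.std():.2e}")
```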
Natural Crack Sizing Based on Eddy Current Image and Electromagnetic Field Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endo, H.; Uchimoto, T.; Takagi, T.
2006-03-06
An eddy current testing (ECT) system with multi-coil probes is applied to size cracks fabricated on austenitic stainless steel plates. We have developed a multi-channel ECT system that produces data as digital images. The probes consist of transmit-receive sensors as elements to classify crack directions, working in two scan-direction modes simultaneously. Template matching applied to the ECT images determines regions of interest for sizing the cracks. Finite-element-based inversion estimates the crack depth from the measured ECT signal. The present paper demonstrates this approach for fatigue cracks and stress corrosion cracking.
NASA Technical Reports Server (NTRS)
Albus, James S.
1961-01-01
The solar aspect sensor described herein performs the analog-to-digital conversion of data optically. To accomplish this, it uses a binary "Gray code" light mask to produce a digital indication, in vehicle-fixed coordinates, of the elevation and azimuth angles of incident light from the sun. This digital solar aspect sensor system, in Explorer X, provided measurements of both elevation and azimuth angles to +/- 2 degrees at a distance of over 140,000 statute miles.
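The Gray-code mask works because adjacent code words differ in a single bit, so a slit straddling a track boundary can be off by at most one count; decoding back to a plain binary angle is a cascade of XORs. A small illustrative sketch (not flight code):

```python
def gray_to_binary(g):
    """Decode a reflected-binary (Gray) code word into a plain integer.
    Because adjacent angular positions differ in exactly one bit, a slit
    straddling a mask-track boundary can be off by at most one count."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Every increment of the encoded angle flips exactly one mask track:
for n in range(255):
    g_now, g_next = n ^ (n >> 1), (n + 1) ^ ((n + 1) >> 1)
    assert bin(g_now ^ g_next).count("1") == 1
    assert gray_to_binary(g_now) == n
print("Gray decode verified for 8-bit words")
```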
Correcting spacecraft jitter in HiRISE images
Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).
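The Fourier-analysis step can be illustrated on synthetic data: locate the dominant oscillation in a measured jitter signal and recover its amplitude from the windowed spectrum. A simplified numpy sketch (line rate, frequency and amplitude are illustrative values, not actual HiRISE parameters):

```python
import numpy as np

# Synthetic line-of-sight jitter sampled at the sensor line rate.
line_rate = 10000.0                          # lines per second
t = np.arange(4096) / line_rate
rng = np.random.default_rng(0)
measured = (0.6 * np.sin(2 * np.pi * 80.0 * t + 0.3)   # jitter, pixels
            + 0.05 * rng.standard_normal(t.size))      # measurement noise

# Fourier analysis: pick the dominant spectral peak and estimate its
# frequency and amplitude from the windowed spectrum.
window = np.hanning(t.size)
spec = np.fft.rfft(measured * window)
freqs = np.fft.rfftfreq(t.size, 1.0 / line_rate)
k = int(np.argmax(np.abs(spec[1:]))) + 1     # skip the DC bin
amp = 2.0 * np.abs(spec[k]) / window.sum()
print(f"dominant jitter: {freqs[k]:.1f} Hz, amplitude ~{amp:.2f} px")
```

The recovered sinusoid would then be subtracted from the pointing model; the real pipeline also resolves phase and handles multiple frequencies.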
International Space Station from Space Shuttle Endeavour
NASA Technical Reports Server (NTRS)
2007-01-01
The crew of the Space Shuttle Endeavour took this spectacular image of the International Space Station during the STS-118 mission, August 8-21, 2007. The image was acquired by an astronaut through one of the crew cabin windows, looking back over the length of the Shuttle. This oblique (looking at an angle from vertical, rather than straight down towards the Earth) image was acquired almost one hour after late inspection activities had begun. The sensor head of the Orbiter Boom Sensor System is visible at image top left. The entire Space Station is visible at image bottom center, set against the backdrop of the Ionian Sea approximately 330 kilometers below it. Other visible features of the southeastern Mediterranean region include the toe and heel of Italy's 'boot' at image lower left, and the western coastlines of Albania and Greece, which extend across image center. Farther towards the horizon, the Aegean and Black Seas are also visible. Featured astronaut photograph STS118-E-9469 was acquired by the STS-118 crew on August 19, 2007, with a Kodak 760C digital camera using a 28 mm lens, and is provided by the ISS Crew Earth Observations experiment and Image Science and Analysis Laboratory at Johnson Space Center.
NASA Astrophysics Data System (ADS)
Streicher, Michael W.
A nuclear weapon detonation remains one of the gravest threats to the global community. Although the likelihood of a nuclear event remains small, the economic and political ramifications of an event are vast. The surest way to reduce the probability of an incident is to account for the special nuclear materials (SNM) which can be used to produce a nuclear weapon. Materials which can be used to manufacture a radiological dispersion device ("dirty bomb") must also be monitored. Rapidly-deployable, commercially-available, room-temperature imaging gamma-ray spectrometers are improving the ability of authorities to intelligently and quickly respond to threats. New electronics which digitally sample the radiation-induced signals in CdZnTe detectors have expanded the capabilities of these sensors. This thesis explores national security applications where digital readout of CdZnTe detectors significantly enhances capabilities. Radioactive sources can be detected more quickly using digitally-sampled CdZnTe detectors due to the improved energy resolution. The excellent energy resolution also improves the accuracy of measurements of uranium enrichment and allows users to measure plutonium grade. Small differences in the recorded gamma-ray energy spectrum can be used to estimate the effective atomic number and mass thickness of materials shielding SNM sources. Improved position resolution of gamma-ray interactions through digital readout allows high resolution gamma-ray images of SNM revealing information about the source configuration. CdZnTe sensors can detect the presence of neutrons, indirectly, through measurement of gamma rays released during capture of thermal neutrons by Cd-113 or inelastic scattering with any constituent nuclei. Fast neutrons, such as those released following fission, can be directly detected through elastic scattering interactions in the detector.
Neutrons are a strong indicator of fissile material, and the background neutron rate is much lower than the gamma-ray background rate. Neutrons can more easily penetrate shielding materials as well which can greatly aid in the detection of shielded SNM. Digital CdZnTe readout enables the sensors to maintain excellent energy resolution at high count rates. Pulse pile-up and preamplifier decay can be monitored and corrected for on an event-by-event basis limiting energy resolution degradation in dose rates higher than 100 mR/hr. Finally, new iterations of the digital electronics have enhanced gamma-ray detection capabilities at high photon energies. Currently, gamma rays with energy up to 4.4 MeV have been detected. High-energy photon detection is critical for many proposed active interrogation systems.
Single-Photon Detectors for Time-of-Flight Range Imaging
NASA Astrophysics Data System (ADS)
Stoppa, David; Simoni, Andrea
We live in a three-dimensional (3D) world, and thanks to the stereoscopic vision provided by our two eyes, in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance measuring techniques and detection systems available, this chapter treats only the emerging niche of solid-state, scannerless systems based on the TOF principle and using SPAD-based pixel detectors. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurements are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective view of near-future developments of SPAD-TOF sensors is given.
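The underlying direct-TOF arithmetic is simple: range is half the round-trip path, and the timing bin width of the TDC sets the depth resolution. A quick illustrative computation (generic figures, not taken from the chapter):

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def tof_range(round_trip_seconds):
    """Direct time-of-flight: the pulse travels out and back, so the
    range is half the round-trip optical path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A SPAD timestamping a return 20 ns after the laser pulse:
d = tof_range(20e-9)
# Depth increment corresponding to one 100 ps TDC time bin:
dz = SPEED_OF_LIGHT * 100e-12 / 2.0
print(f"range = {d:.3f} m, depth per 100 ps bin = {dz * 1000:.1f} mm")
```

The same arithmetic explains why picosecond-class timing electronics are needed for millimetre depth precision.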
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner
2014-01-01
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255
Data simulation for the Lightning Imaging Sensor (LIS)
NASA Technical Reports Server (NTRS)
Boeck, William L.
1991-01-01
This project aims to build a data analysis system that will utilize existing video tape scenes of lightning as viewed from space. The resultant data will be used for the design and development of the Lightning Imaging Sensor (LIS) software and algorithm analysis. The desire for statistically significant metrics implies that a large data set needs to be analyzed. Before 1990 the quality and quantity of video was insufficient to build a usable data set. At this point in time, there is usable data from missions STS-34, STS-32, STS-31, STS-41, STS-37, and STS-39. During the summer of 1990, a manual analysis system was developed to demonstrate that the video analysis is feasible and to identify techniques to deduce information that was not directly available. Because the closed circuit television system used on the space shuttle was intended for documentary TV, the current value of the camera focal length and pointing orientation, which are needed for photoanalysis, are not included in the system data. A large effort was needed to discover ancillary data sources as well as develop indirect methods to estimate the necessary parameters. Any data system coping with full motion video faces an enormous bottleneck produced by the large data production rate and the need to move and store the digitized images. The manual system bypassed the video digitizing bottleneck by using a genlock to superimpose pixel coordinates on full motion video. Because the data set had to be obtained point by point by a human operating a computer mouse, the data output rate was small. The loan and subsequent acquisition of an Abekas digital frame store with a real time digitizer moved the bottleneck from data acquisition to a problem of data transfer and storage. The semi-automated analysis procedure was developed using existing equipment and is described. A fully automated system is described in the hope that the components may come on the market at reasonable prices in the next few years.
Optical digitizing and strategies to combine different views of an optical sensor
NASA Astrophysics Data System (ADS)
Duwe, Hans P.
1997-09-01
Non-contact digitization of objects and surfaces with optical sensors based on fringe or pattern projection, in combination with a CCD camera, allows a representation of surfaces as point clouds, i.e., x, y, z data points. To digitize the total surface of an object, it is necessary to combine the measurement data obtained by the optical sensor from different views. Depending on the size of the object and the required accuracy of the measured data, different sensor set-ups with a handling system or a combination of linear and rotation axes are described. Furthermore, strategies to match the overlapping point clouds of a digitized object are introduced. This is especially important for the digitization of large objects like 1:1 car models. With different sensor sizes, it is possible to digitize small objects like teeth, crowns and inlays with an overall accuracy of 20 micrometers, as well as large objects like car models with a total accuracy of 0.5 mm. Various applications in the field of optical digitization are described.
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision-based mobile mapping systems enable the efficient capture of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3D mapping applications and image-based geo web services. Georeferenced stereo images are ideally suited for the 3D mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient, or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions during prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.
High dynamic spectroscopy using a digital micromirror device and periodic shadowing.
Kristensson, Elias; Ehn, Andreas; Berrocal, Edouard
2017-01-09
We present an optical solution called DMD-PS to boost the dynamic range of 2D imaging spectroscopic measurements up to 22 bits by incorporating a digital micromirror device (DMD) prior to detection in combination with the periodic shadowing (PS) approach. In contrast to high dynamic range (HDR), where the dynamic range is increased by recording several images at different exposure times, the current approach has the potential of improving the dynamic range from a single exposure and without saturation of the CCD sensor. In the procedure, the spectrum is imaged onto the DMD that selectively reduces the reflection from the intense spectral lines, allowing the signal from the weaker lines to be increased by a factor of 28 via longer exposure times, higher camera gains or increased laser power. This manipulation of the spectrum can either be based on a priori knowledge of the spectrum or by first performing a calibration measurement to sense the intensity distribution. The resulting benefits in detection sensitivity come, however, at the cost of strong generation of interfering stray light. To solve this issue the Periodic Shadowing technique, which is based on spatial light modulation, is also employed. In this proof-of-concept article we describe the full methodology of DMD-PS and demonstrate - using the calibration-based concept - an improvement in dynamic range by a factor of ~100 over conventional imaging spectroscopy. The dynamic range of the presented approach will directly benefit from future technological development of DMDs and camera sensors.
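The headline numbers can be sanity-checked with simple arithmetic: attenuating the strong lines so the weak ones can be boosted multiplies the usable intensity ratio of a single exposure by the boost factor. An illustrative calculation (the 16-bit sensor figure is an assumption here, not taken from the paper):

```python
import math

def effective_dynamic_range_bits(sensor_levels, boost_factor):
    """Bits of dynamic range when the strongest lines are attenuated so
    the weakest can be boosted by boost_factor within one exposure."""
    return math.log2(sensor_levels * boost_factor)

# Illustrative: a 16-bit-class detector combined with the 28x signal
# boost reported in the abstract.
bits = effective_dynamic_range_bits(2 ** 16, 28)
print(f"effective dynamic range: {bits:.1f} bits")
```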
Decomposed Photo Response Non-Uniformity for Digital Forensic Analysis
NASA Astrophysics Data System (ADS)
Li, Yue; Li, Chang-Tsun
The last few years have seen the application of Photo Response Non-Uniformity noise (PRNU) - a unique stochastic fingerprint of image sensors - to various types of digital forensic investigations, such as source device identification and integrity verification. In this work we propose a new way of extracting the PRNU noise pattern, called Decomposed PRNU (DPRNU), by exploiting the difference between the physical and artificial color components of photos taken by digital cameras that use a Color Filter Array to interpolate artificial components from physical ones. Experimental results presented in this work show the superiority of the proposed DPRNU over the commonly used version. We also propose a new performance metric, Corrected Positive Rate (CPR), to evaluate the performance of the common PRNU and the proposed DPRNU.
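The standard PRNU pipeline that DPRNU builds on can be sketched as follows: average the noise residuals of many images from one camera to estimate its fingerprint, then correlate a probe image's residual against it. A toy numpy sketch with a box blur standing in for the wavelet denoisers used in practice (all data synthetic):

```python
import numpy as np

def prnu_fingerprint(images, denoise):
    """Average the noise residuals W = I - denoise(I) of many images from
    one camera to estimate its PRNU fingerprint."""
    return np.mean([img - denoise(img) for img in images], axis=0)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def box_blur(img):
    # crude denoiser standing in for the wavelet filters used in practice
    out = img.copy()
    for axis in (0, 1):
        out = (np.roll(out, 1, axis) + out + np.roll(out, -1, axis)) / 3.0
    return out

rng = np.random.default_rng(1)
pattern = 0.02 * rng.standard_normal((32, 32))      # per-pixel gain noise
shots = [100.0 * (1 + pattern) + rng.standard_normal((32, 32))
         for _ in range(40)]
fp = prnu_fingerprint(shots, box_blur)
probe = 100.0 * (1 + pattern) + rng.standard_normal((32, 32))
rho = correlation(fp, probe - box_blur(probe))
print(f"probe-to-fingerprint correlation: {rho:.2f}")
```

A probe from a different camera would carry an independent pattern and correlate near zero, which is the basis of source device identification.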
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer readable data. Such movement information may be obtained though active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing de-couples the position measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
Applying Digital Sensor Technology: A Problem-Solving Approach
ERIC Educational Resources Information Center
Seedhouse, Paul; Knight, Dawn
2016-01-01
There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…
Aldaz, Gabriel; Shluzas, Lauren Aquino; Pickham, David; Eris, Ozgur; Sadler, Joel; Joshi, Shantanu; Leifer, Larry
2015-01-01
Chronic wounds, including pressure ulcers, compromise the health of 6.5 million Americans and pose an annual estimated burden of $25 billion to the U.S. health care system. When treating chronic wounds, clinicians must use meticulous documentation to determine wound severity and to monitor healing progress over time. Yet, current wound documentation practices using digital photography are often cumbersome and labor intensive. The process of transferring photos into Electronic Medical Records (EMRs) requires many steps and can take several days. Newer smartphone and tablet-based solutions, such as Epic Haiku, have reduced EMR upload time. However, issues still exist involving patient positioning, image-capture technique, and patient identification. In this paper, we present the development and assessment of the SnapCap System for chronic wound photography. Through leveraging the sensor capabilities of Google Glass, SnapCap enables hands-free digital image capture, and the tagging and transfer of images to a patient’s EMR. In a pilot study with wound care nurses at Stanford Hospital (n=16), we (i) examined feature preferences for hands-free digital image capture and documentation, and (ii) compared SnapCap to the state of the art in digital wound care photography, the Epic Haiku application. We used the Wilcoxon Signed-ranks test to evaluate differences in mean ranks between preference options. Preferred hands-free navigation features include barcode scanning for patient identification, Z(15) = -3.873, p < 0.001, r = 0.71, and double-blinking to take photographs, Z(13) = -3.606, p < 0.001, r = 0.71. In the comparison between SnapCap and Epic Haiku, the SnapCap System was preferred for sterile image-capture technique, Z(16) = -3.873, p < 0.001, r = 0.68. Responses were divided with respect to image quality and overall ease of use. 
The study’s results have contributed to the future implementation of new features aimed at enhancing mobile hands-free digital photography for chronic wound care. PMID:25902061
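The reported effect sizes are consistent with the common convention r = |Z| / sqrt(N), where N counts the paired observations (two per participant). A quick check of that convention (an assumption on our part, since the abstract does not state the formula used):

```python
import math

def effect_size_r(z: float, n_observations: int) -> float:
    """Effect size for a Wilcoxon signed-rank result: r = |Z| / sqrt(N)."""
    return abs(z) / math.sqrt(n_observations)

# Treating each of n pairs as two observations (N = 2n) reproduces the
# reported values, e.g. Z(15) = -3.873 -> r = 0.71 and Z(16) = -3.873 -> r = 0.68.
print(round(effect_size_r(-3.873, 2 * 15), 2))  # 0.71
print(round(effect_size_r(-3.873, 2 * 16), 2))  # 0.68
```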
Estimation of saturated pixel values in digital color imaging
Zhang, Xuemei; Brainard, David H.
2007-01-01
Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
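The core of the algorithm is the mean of a conditional Normal truncated at the saturation level: condition the saturated channel on the observed channels under the multivariate Normal prior, then take the expectation given that the true response exceeds the saturation level. A minimal sketch (the prior mean and covariance below are illustrative, not estimated from a real image):

```python
import math
import numpy as np

def estimate_saturated(x, sat_idx, level, mu, cov):
    """Posterior-mean estimate of a saturated channel, given the other
    channels and a multivariate Normal prior N(mu, cov) over channel responses."""
    idx = [i for i in range(len(mu)) if i != sat_idx]
    x_o = np.asarray([x[i] for i in idx])
    mu_o, mu_s = mu[idx], mu[sat_idx]
    S_oo = cov[np.ix_(idx, idx)]
    S_so = cov[sat_idx, idx]
    m = mu_s + S_so @ np.linalg.solve(S_oo, x_o - mu_o)   # conditional mean
    v = cov[sat_idx, sat_idx] - S_so @ np.linalg.solve(S_oo, S_so)
    sd = math.sqrt(max(v, 1e-12))
    # Mean of a Normal(m, sd^2) truncated below at `level`.
    a = (level - m) / sd
    phi = math.exp(-0.5 * a * a) / math.sqrt(2 * math.pi)
    tail = 0.5 * math.erfc(a / math.sqrt(2))              # P(X > level)
    return m + sd * phi / max(tail, 1e-300)

# Illustrative prior with strongly correlated channels.
mu = np.array([100.0, 100.0, 100.0])
cov = np.array([[400.0, 380.0, 380.0],
                [380.0, 400.0, 380.0],
                [380.0, 380.0, 400.0]])
est = estimate_saturated([255.0, 180.0, 180.0], sat_idx=0, level=255.0, mu=mu, cov=cov)
```

By construction the estimate always lies above the saturation level, and it approaches the plain conditional mean when the conditional mean itself is far above the clipping point.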
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and equipped on a Single Lens Reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial camera and underwater camera. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation into many applications.
NASA Astrophysics Data System (ADS)
Pan, Feng; Deng, Yating; Ma, Xichao; Xiao, Wen
2017-11-01
Digital holographic microtomography is improved and applied to the measurements of three-dimensional refractive index distributions of fusion spliced optical fibers. Tomographic images are reconstructed from full-angle phase projection images obtained with a setup-rotation approach, in which the laser source, the optical system and the image sensor are arranged on an optical breadboard and synchronously rotated around the fixed object. For retrieving high-quality tomographic images, a numerical method is proposed to compensate for the unwanted movements of the object in the lateral, axial and vertical directions during rotation. The compensation is implemented on the two-dimensional phase images instead of the sinogram. The experimental results distinctly reveal the internal structures of fusion splices between a single-mode fiber and other fibers, including a multi-mode fiber, a panda polarization maintaining fiber, a bow-tie polarization maintaining fiber and a photonic crystal fiber. In particular, the internal structure distortion in the fusion areas can be intuitively observed, such as the expansion of the stress zones of polarization maintaining fibers, the collapse of the air holes of photonic crystal fibers, etc.
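The lateral part of such movement compensation can be illustrated with FFT-based cross-correlation registration of two projection images (a generic sketch, not the paper's full method, which also handles axial and vertical motion):

```python
import numpy as np

def lateral_shift(ref, img):
    """Estimate the integer-pixel circular shift of `img` relative to `ref`
    by locating the peak of their cross-correlation, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Peaks past the half-size correspond to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
img = np.roll(ref, (-3, 5), axis=(0, 1))     # simulate an unwanted drift
dy, dx = lateral_shift(ref, img)             # peak sits at the negated shift
img_aligned = np.roll(img, (dy, dx), axis=(0, 1))
```

Rolling the drifted image by the estimated offset realigns it with the reference before reconstruction.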
Characterization of a smartphone camera's response to ultraviolet A radiation.
Igoe, Damien; Parisi, Alfio; Carter, Brad
2013-01-01
As part of a wider study into the use of smartphones as solar ultraviolet radiation monitors, this article characterizes the ultraviolet A (UVA; 320-400 nm) response of a consumer complementary metal oxide semiconductor (CMOS)-based smartphone image sensor in a controlled laboratory environment. The CMOS image sensor in the camera possesses inherent sensitivity to UVA, and despite the attenuation due to the lens and neutral density and wavelength-specific bandpass filters, the measured UVA irradiances relative to the incident irradiances range from 0.0065% at 380 nm to 0.0051% at 340 nm. In addition, the sensor demonstrates a predictable response to low-intensity discrete UVA stimuli that can be modelled using the ratio of recorded digital values to the incident UVA irradiance for a given automatic exposure time, resulting in measurement errors that are typically less than 5%. Our results support the idea that smartphones can be used for scientific monitoring of UVA radiation. © 2012 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2012 The American Society of Photobiology.
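The described response model amounts to a linear relation DN ≈ k · E · t between digital number, irradiance, and exposure time, which inverts trivially once k is calibrated (the constant below is a hypothetical placeholder, not a value from the paper):

```python
def uva_irradiance(digital_value, exposure_time_s, k):
    """Invert the assumed linear sensor model DN = k * E * t for irradiance E."""
    return digital_value / (k * exposure_time_s)

# Hypothetical calibration constant k in DN per (W/m^2 * s).
print(uva_irradiance(500, 0.1, 1000.0))  # 5.0 (W/m^2)
```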
A novel dynamic sensing of wearable digital textile sensor with body motion analysis.
Yang, Chang-Ming; Lin, Zhan-Sheng; Hu, Chang-Lin; Chen, Yu-Shih; Ke, Ling-Yi; Chen, Yin-Rui
2010-01-01
This work proposes an innovative textile sensor system to monitor dynamic body movement and human posture by attaching wearable digital sensors that analyze body motion. The proposed system can display and analyze signals when individuals are walking, running, veering around, walking up and down stairs, or falling down, using a wearable monitoring system that reacts to the coordination between the body and feet. Several digital sensor designs are embedded in clothing and wear apparel, and any pressure point can indicate which activity is underway. Importantly, the wearable digital sensors and monitoring system support adaptive real-time estimation of posture, velocity and acceleration, enabling non-invasive transmission of healthcare data and point-of-care (POC) use in home and other non-clinical environments.
NASA Astrophysics Data System (ADS)
Han, Ling; Miller, Brian W.; Barrett, Harrison H.; Barber, H. Bradford; Furenlid, Lars R.
2017-09-01
iQID is an intensified quantum imaging detector developed in the Center for Gamma-Ray Imaging (CGRI). Originally called BazookaSPECT, iQID was designed for high-resolution gamma-ray imaging and preclinical gamma-ray single-photon emission computed tomography (SPECT). With the use of a columnar scintillator, an image intensifier and modern CCD/CMOS sensors, iQID cameras feature outstanding intrinsic spatial resolution. In recent years, many advances have been achieved that greatly boost the performance of iQID, broadening its applications to cover nuclear and particle imaging for preclinical, clinical and homeland security settings. This paper presents an overview of the recent advances of iQID technology and its applications in preclinical and clinical scintigraphy, preclinical SPECT, particle imaging (alpha, neutron, beta, and fission fragment), and digital autoradiography.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision of this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of displays and show that it achieves color accuracy of ΔE<0.01.
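In its simplest (CIE76) form, ΔE is just the Euclidean distance between two colors in CIELAB space, which puts the ΔE<0.01 figure in context: it is far below the ~1.0 threshold usually taken as a just-noticeable difference. A minimal sketch (CIE76 only; the paper does not state which ΔE formula it uses):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

# Two near-identical mid-grey colors, 0.01 apart in CIELAB.
print(delta_e_ab((50.0, 0.0, 0.0), (50.0, 0.006, 0.008)))  # 0.01
```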
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a time interval of a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. The wavefront is reportedly decomposed into modes of the optical system under test, though it has not been reported whether this decomposition is postprocessing of the solution or part of the solution process itself.
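A modal decomposition of this kind can be sketched as a least-squares projection of the sampled wavefront onto mode shapes (a generic illustration, not the patented direct-solve method; the 1D polynomial basis below is a made-up stand-in for the system's actual modes):

```python
import numpy as np

def modal_coefficients(wavefront, basis):
    """Least-squares modal decomposition: `wavefront` is a vector of N samples,
    `basis` an (N, M) matrix whose columns are M mode shapes at the same points."""
    coeffs, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
    return coeffs

# Toy check on a 1D pupil cut: piston, tilt, and a defocus-like quadratic.
x = np.linspace(-1.0, 1.0, 101)
basis = np.column_stack([np.ones_like(x), x, 2.0 * x**2 - 1.0])
true_coeffs = np.array([0.1, -0.3, 0.05])
wavefront = basis @ true_coeffs
recovered = modal_coefficients(wavefront, basis)
```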
NASA Astrophysics Data System (ADS)
Khan, Muazzam A.; Ahmad, Jawad; Javaid, Qaisar; Saqib, Nazar A.
2017-03-01
Wireless Sensor Networks (WSNs) are widely deployed for monitoring physical activity and/or environmental conditions. Data gathered from a WSN are transmitted via the network to a central location for further processing. Numerous applications of WSNs can be found in smart homes, intelligent buildings, health care, energy-efficient smart grids and industrial control systems. In recent years, computer scientists have focused on finding more applications of WSNs in multimedia technologies, i.e. audio, video and digital images. Due to the bulky nature of multimedia data, WSNs process a large volume of multimedia data, which significantly increases computational complexity and hence reduces battery time. With respect to battery-life constraints, image compression combined with secure transmission over a wide-ranged sensor network is an emerging and challenging task in Wireless Multimedia Sensor Networks. Due to the open nature of the Internet, transmitted data must be secured through a process known as encryption. As a result, there has long been intense demand for schemes that are both energy efficient and highly secure. In this paper, a discrete wavelet-based partial image encryption scheme using a hashing algorithm, chaotic maps and Hussain's S-box is reported. The plaintext image is compressed via the discrete wavelet transform, and the image is then shuffled column-wise and row-wise via a Piece-wise Linear Chaotic Map (PWLCM) and a Nonlinear Chaotic Algorithm, respectively. To achieve higher security, the initial conditions for the PWLCM are made dependent on a hash function. The permuted image is bitwise XORed with a random matrix generated from an Intertwining Logistic map. To enhance the security further, the final ciphertext is obtained by substituting all elements with Hussain's substitution box. Experimental and statistical results confirm the strength of the anticipated scheme.
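The PWLCM at the heart of the shuffling stage is simple to state. The sketch below iterates the map and uses it as a keystream for a symmetric XOR cipher; the seed and control parameter are illustrative placeholders (the paper derives its initial conditions from a hash of the image), and the XOR stage here merely stands in for the Intertwining-Logistic masking step:

```python
def pwlcm(x: float, p: float) -> float:
    """One iteration of the piecewise linear chaotic map on [0, 1)."""
    if x >= 0.5:
        x = 1.0 - x            # the map is symmetric about 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def keystream(n: int, x0: float = 0.123456, p: float = 0.29) -> bytes:
    """Quantize successive map states to one byte each."""
    x, out = x0, []
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data: bytes, x0: float = 0.123456, p: float = 0.29) -> bytes:
    """XOR with the chaotic keystream; applying it twice decrypts."""
    ks = keystream(len(data), x0, p)
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because XOR is an involution, running `xor_cipher` on the ciphertext with the same seed recovers the plaintext exactly.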
Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping
2009-11-10
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transfers the distorted wavefront detection into the centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on the adaptive thresholding and dynamic windowing method by utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noises, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
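The thresholded-centroid idea can be sketched in a few lines (a minimal version without the paper's dynamic windowing; `threshold_frac` is an assumed parameter):

```python
import numpy as np

def spot_centroid(window, threshold_frac=0.2):
    """Intensity-weighted centroid of a focal spot, after zeroing pixels
    below threshold_frac * max to suppress background noise."""
    w = window.astype(float).copy()
    w[w < threshold_frac * w.max()] = 0.0
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total

# Synthetic Gaussian spot centred at (12, 20).
ys, xs = np.indices((32, 48))
spot = np.exp(-((ys - 12.0)**2 + (xs - 20.0)**2) / (2 * 2.0**2))
cy, cx = spot_centroid(spot)
```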
Sensors integration for smartphone navigation: performances and future challenges
NASA Astrophysics Data System (ADS)
Aicardi, I.; Dabove, P.; Lingua, A.; Piras, M.
2014-08-01
Nowadays modern smartphones include several sensors commonly used in geomatics applications, such as a digital camera, GNSS (Global Navigation Satellite System) receiver, inertial platform, RFID and Wi-Fi systems. In this paper the authors test the performance of the internal sensors (Inertial Measurement Unit, IMU) of three modern smartphones (Samsung Galaxy S4, Samsung Galaxy S5 and iPhone 4), compared to an external mass-market IMU platform, in order to verify their accuracy in terms of positioning. Moreover, the Image Based Navigation (IBN) approach is also investigated: this approach can be very useful in dense urban environments or for indoor positioning, as an alternative to GNSS positioning. IBN can achieve sub-metre accuracy, but a special database of georeferenced images (Image DataBase, IDB) is needed. Dedicated algorithms are also necessary to resize the images collected by the smartphone so that they can be shared with the server hosting the IDB, and the smartphone camera lens must be characterized in terms of focal length and lens distortions. The authors have developed an innovative method with respect to those available today, which was tested in a covered area using a special support on which all sensors under test were installed. Geomatic instruments were used to define the reference trajectory, against which the IBN solution was compared. First results show horizontal and vertical accuracies better than 60 cm with respect to the reference trajectories. The IBN method, sensors, tests and results are described in the paper.
A digital sedimentator for measuring erythrocyte sedimentation rate using a linear image sensor
NASA Astrophysics Data System (ADS)
Yoshikoshi, Akio; Sakanishi, Akio; Toyama, Yoshiharu
2004-11-01
A digital apparatus was fabricated to accurately determine the erythrocyte sedimentation rate (ESR) using a linear image sensor. ESR is currently used in clinical diagnosis, and in the laboratory as one of the many rheological properties of blood, measured through the settling of red blood cells (RBCs). In this work, we aimed to measure ESR automatically using a small sample volume and without moving parts. The linear image sensor was placed behind a microhematocrit tube containing 36 μl of RBC suspension on a holder plate; the holder plate was fixed on an optical bench together with a tungsten lamp and an opal glass placed in front. RBC suspensions were prepared in autologous plasma with hematocrit H from 25% to 44%. The intensity profiles of transmitted light in 36 μl of RBC suspension were detected using the linear image sensor and sent to a personal computer every minute. ESR was observed at the settling interface between the plasma and RBC suspension in the profile in 1024 pixels (25 μm/pixel) along a microhematocrit tube of 25.6 mm total length for 1 h at a temperature of 37.0±0.1 °C. First, we determined the initial pixel position of the sample at the boundary with air. The boundary and the interface were defined by inflection points in the profile with 25 μm resolution. We obtained sedimentation curves that were determined by the RBC settling distance l(t) at the time t from the difference between pixel locations at the boundary and the interface. The sedimentation curves were well fitted to an empirical equation [Puccini et al., Biorheol. 14, 43 (1977)] from which we calculated the maximum sedimentation velocity smax at the time tmax. We reached tmax within 30 min at any H, and smax was linearly related to the settling distance l(60) at 60 min after the start of sedimentation from 30% to 44% H, with the correlation coefficient r=0.993. Thus, we may estimate conventional ESR at 1 h from smax more quickly and accurately with less effort.
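Locating the interface as an inflection point of the intensity profile can be sketched as follows (here the inflection is taken as the extremum of the first derivative, demonstrated on a synthetic sigmoidal profile; the actual apparatus works on measured transmitted-light profiles):

```python
import numpy as np

def interface_pixel(profile):
    """Interface location as the inflection point of the intensity profile,
    i.e. the pixel where the first derivative is steepest."""
    grad = np.gradient(profile.astype(float))
    return int(np.argmax(np.abs(grad)))

# Synthetic profile over 1024 pixels: bright plasma above a dark RBC
# suspension, with a smooth transition centred at pixel 400.
x = np.arange(1024)
profile = 1.0 / (1.0 + np.exp((x - 400) / 5.0))
pixel = interface_pixel(profile)
distance_um = pixel * 25  # 25 um per pixel, as in the apparatus
```

Tracking this pixel over time, relative to the air boundary, yields the settling distance l(t) that the sedimentation curve is fitted to.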
NASA Astrophysics Data System (ADS)
Walczykowski, P.; Orych, A.
2013-12-01
The Treaty on Open Skies, to which Poland has been a signatory from the very beginning, was signed in 1992 in Helsinki. The main principle of the Treaty is increasing the openness of military activities conducted by the States-Parties and control over respecting disarmament agreements. Responsibilities given by the Treaty are fulfilled by conducting and receiving a given number of observation flights over the territories of the Treaty signatories. Among the 34 countries currently taking an active part in this Treaty, only some own certified airplanes and observation sensors. Poland is among the countries that do not own their own platform and therefore fulfills Treaty requirements using the Ukrainian An-30b. Initially, the Treaty only permitted the use of analogue sensors for the acquisition of imagery data. Together with the development of digital techniques, a rise in the need for digital imagery products has been noted. Currently digital photography is used in almost all fields of study and everyday life. This has led to very rapid developments in digital sensor technologies, employing the newest and most innovative solutions. Digital imagery products have many advantages and have now almost fully replaced traditional film sensors. Digital technologies have given rise to a new era in Open Skies. The Open Skies Consultative Commission, having conducted many series of tests, signed a new Decision to the Treaty, which allows digital aerial sensors to be used during observation flights. The main aim of this article is to present a concept for choosing digital sensors and selecting an airplane, that is, a complete digital aerial platform, which could be used by Poland for Open Skies purposes. A thorough analysis of airplanes currently used by the Polish Air Force was conducted in terms of their specifications and the possibility of their employment for Open Skies Treaty missions. 
Next, an analysis was conducted of the latest aerial digital sensors offered by leading commercial manufacturers. The sensors were analyzed in terms of the accordance of their specifications with the technical requirements of the Treaty.
New technologies for HWIL testing of WFOV, large-format FPA sensor systems
NASA Astrophysics Data System (ADS)
Fink, Christopher
2016-05-01
Advancements in FPA density and associated wide-field-of-view infrared sensors (>=4000x4000 detectors) have outpaced the current-art HWIL technology. Whether testing in optical projection or digital signal injection modes, current-art technologies for infrared scene projection, digital injection interfaces, and scene generation systems simply lack the required resolution and bandwidth. For example, the L3 Cincinnati Electronics ultra-high resolution MWIR Camera deployed in some UAV reconnaissance systems features 16MP resolution at 60Hz, while the current upper limit of IR emitter arrays is ~1MP, and single-channel dual-link DVI throughput of COTS graphics cards is limited to 2560x1600 pixels at 60Hz. Moreover, there are significant challenges in real-time, closed-loop, physics-based IR scene generation for large format FPAs, including the size and spatial detail required for very large area terrains, and multi-channel low-latency synchronization to achieve the required bandwidth. In this paper, the author's team presents some of their ongoing research and technical approaches toward HWIL testing of large-format FPAs with wide-FOV optics. One approach presented is a hybrid projection/injection design, where digital signal injection is used to augment the resolution of current-art IRSPs, utilizing a multi-channel, high-fidelity physics-based IR scene simulator in conjunction with a novel image composition hardware unit, to allow projection in the foveal region of the sensor, while non-foveal regions of the sensor array are simultaneously stimulated via direct injection into the post-detector electronics.
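The bandwidth gap can be made concrete with a quick pixel-rate budget (figures assumed from the text; real injection interfaces carry per-pixel overheads not modeled here):

```python
import math

# A 4000x4000 FPA read out at 60 Hz versus one dual-link DVI channel at
# its nominal 2560x1600 @ 60 Hz limit.
sensor_rate = 4000 * 4000 * 60        # detector samples per second (~9.6e8)
dvi_rate = 2560 * 1600 * 60           # DVI pixels per second (~2.5e8)

channels = math.ceil(sensor_rate / dvi_rate)
print(channels)  # 4 synchronized injection channels needed
```

This back-of-the-envelope result is what drives the multi-channel, low-latency synchronization requirement mentioned above.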
Accuracy of Conventional and Digital Radiography in Detecting External Root Resorption
Mesgarani, Abbas; Haghanifar, Sina; Ehsani, Maryam; Yaghub, Samereh Dokhte; Bijani, Ali
2014-01-01
Introduction: External root resorption (ERR) is associated with physiological and pathological dissolution of mineralized tissues by clastic cells, and radiography is one of the most important methods in its diagnosis. The aim of this experimental study was to evaluate the accuracy of conventional intraoral radiography (CR) in comparison with digital radiographic techniques, i.e. charge-coupled device (CCD) and photo-stimulable phosphor (PSP) sensors, in detection of ERR. Methods and Materials: This study was performed on 80 extracted human mandibular premolars. After taking separate initial periapical radiographs with the CR technique, CCD and PSP sensors, artificial defects resembling ERR with variable sizes were created in the apical half of the mesial, distal and buccal surfaces of the teeth. Ten teeth were used as control samples without any resorption. The radiographs were then repeated with 2 different exposure times and the images were observed by 3 observers. Data were analyzed using SPSS version 17 and chi-squared and Cohen’s kappa tests with 95% confidence interval (CI=95%). Results: The CCD had the highest percentage of correct assessment compared to the CR and PSP sensors, although the difference was not significant (P=0.39). A higher dose of radiation was shown to increase the accuracy of diagnosis; however, this was only significant for the CCD sensor (P=0.02). Also, the accuracy of diagnosis increased with the size of the lesion (P=0.001). Conclusion: No statistically significant difference was observed between conventional and digital radiographic techniques in the accurate detection of ERR. PMID:25386202
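Cohen's kappa, used here for inter-observer agreement, is straightforward to compute from a confusion matrix (a generic sketch with made-up counts, not the study's data):

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix (rows: observer 1,
    columns: observer 2): kappa = (p_o - p_e) / (1 - p_e), where p_o is the
    observed agreement and p_e the agreement expected by chance."""
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / n
    p_e = sum(
        sum(matrix[i]) * sum(row[i] for row in matrix)
        for i in range(len(matrix))
    ) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical two-category (resorption present/absent) ratings by two observers.
print(cohens_kappa([[20, 5], [10, 15]]))  # 0.4
```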
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has been drastically improved in response to the demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet, capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
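The pixel-transfer-rate argument can be checked with the prototype's own numbers: both capture streams sit near the same readout budget, while the fused output would demand far more from a single sensor:

```python
# Pixel rates of the two capture streams and of the fused output video.
high_res_rate = 2588 * 1958 * 3.75    # high-resolution stream, pixels/s (~1.9e7)
high_fps_rate = 500 * 500 * 90        # high-frame-rate stream, pixels/s (~2.3e7)
fused_rate = 2128 * 1952 * 90         # enhanced output video, pixels/s (~3.7e8)

# Each sensor stays under ~2.5e7 pixels/s; the fused video needs more than
# ten times that, which is why a single sensor cannot deliver it directly.
```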
Wide-field computational imaging of pathology slides using lens-free on-chip microscopy.
Greenbaum, Alon; Zhang, Yibo; Feizi, Alborz; Chung, Ping-Luen; Luo, Wei; Kandukuri, Shivani R; Ozcan, Aydogan
2014-12-17
Optical examination of microscale features in pathology slides is one of the gold standards to diagnose disease. However, the use of conventional light microscopes is partially limited owing to their relatively high cost, bulkiness of lens-based optics, small field of view (FOV), and requirements for lateral scanning and three-dimensional (3D) focus adjustment. We illustrate the performance of a computational lens-free, holographic on-chip microscope that uses the transport-of-intensity equation, multi-height iterative phase retrieval, and rotational field transformations to perform wide-FOV imaging of pathology samples with comparable image quality to a traditional transmission lens-based microscope. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for mechanical focus adjustment and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. Using this lens-free on-chip microscope, we successfully imaged invasive carcinoma cells within human breast sections, Papanicolaou smears revealing a high-grade squamous intraepithelial lesion, and sickle cell anemia blood smears over a FOV of 20.5 mm(2). The resulting wide-field lens-free images had sufficient image resolution and contrast for clinical evaluation, as demonstrated by a pathologist's blinded diagnosis of breast cancer tissue samples, achieving an overall accuracy of ~99%. By providing high-resolution images of large-area pathology samples with 3D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings. Copyright © 2014, American Association for the Advancement of Science.
NV-CMOS HD camera for day/night imaging
NASA Astrophysics Data System (ADS)
Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.
2014-06-01
SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.
Wireless sensor platform for harsh environments
NASA Technical Reports Server (NTRS)
Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor); Toygur, Lemi (Inventor); He, Yunli (Inventor)
2009-01-01
Reliable and efficient sensing becomes increasingly difficult in harsher environments. A sensing module for high-temperature conditions utilizes a digital, rather than analog, implementation on a wireless platform to achieve good quality data transmission. The module comprises a sensor, integrated circuit, and antenna. The integrated circuit includes an amplifier, A/D converter, decimation filter, and digital transmitter. To operate, an analog signal is received by the sensor, amplified by the amplifier, converted into a digital signal by the A/D converter, filtered by the decimation filter to address the quantization error, and output in digital format by the digital transmitter and antenna.
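The decimation filter in this chain can be sketched as a boxcar average followed by downsampling (real oversampling ADC chains typically use higher-order sinc filters; this first-order version just illustrates the rate reduction and noise averaging):

```python
def decimate(samples, factor):
    """First-order decimation: average each block of `factor` consecutive
    samples into one output word, reducing the data rate by `factor`."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        block = samples[i:i + factor]
        out.append(sum(block) / factor)
    return out

# Eight oversampled readings collapse to two transmitted words.
print(decimate([1, 1, 1, 1, 2, 2, 2, 2], 4))  # [1.0, 2.0]
```

Averaging before downsampling is what lets the module trade the ADC's excess sample rate for lower quantization noise in the transmitted digital stream.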
Iterative current mode per pixel ADC for 3D SoftChip implementation in CMOS
NASA Astrophysics Data System (ADS)
Lachowicz, Stefan W.; Rassau, Alexander; Lee, Seung-Minh; Eshraghian, Kamran; Lee, Mike M.
2003-04-01
Mobile multimedia communication has rapidly become a significant area of research and development constantly challenging boundaries on a variety of technological fronts. The processing requirements for the capture, conversion, compression, decompression, enhancement, display, etc. of increasingly higher quality multimedia content place heavy demands even on current ULSI (ultra large scale integration) systems, particularly for mobile applications where area and power are primary considerations. The ADC presented in this paper is designed for a vertically integrated (3D) system comprising two distinct layers bonded together using Indium bump technology. The top layer is a CMOS imaging array containing analogue-to-digital converters and a buffer memory. The bottom layer takes the form of a configurable array processor (CAP), a highly parallel array of soft programmable processors capable of carrying out complex processing tasks directly on data stored in the top plane. This paper presents an ADC scheme for the image capture plane. The analogue photocurrent or sampled voltage is transferred to the ADC via a column or a column/row bus. In the proposed system, an array of analogue-to-digital converters is distributed, so that a one-bit cell is associated with one sensor. The analogue-to-digital converters are algorithmic current-mode converters. Eight such cells are cascaded to form an 8-bit converter. Additionally, each photo-sensor is equipped with a current memory cell, and multiple conversions are performed with scaled values of the photocurrent for colour processing.
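An algorithmic (cyclic) converter resolves one bit per cycle by doubling the residue and comparing it against the reference. A behavioral sketch of the conversion algorithm (idealized: current-mirror gain errors and comparator offsets in the actual current-mode circuit are ignored):

```python
def algorithmic_adc(v_in, v_ref=1.0, bits=8):
    """Cyclic (algorithmic) A/D conversion: each cycle doubles the residue,
    compares it with the reference, and emits one bit, MSB first."""
    code, residue = 0, v_in
    for _ in range(bits):
        residue *= 2.0
        bit = 1 if residue >= v_ref else 0
        residue -= bit * v_ref           # subtract the reference when the bit fires
        code = (code << 1) | bit
    return code

print(algorithmic_adc(0.5))    # 128
print(algorithmic_adc(0.25))   # 64
```

Cascading eight one-bit cells, as in the paper, simply unrolls this loop in hardware, one cell per iteration.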
Digital tripwire: a small automated human detection system
NASA Astrophysics Data System (ADS)
Fischer, Amber D.; Redd, Emmett; Younger, A. Steven
2009-05-01
A low cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert the distant duty officer by transmitting a visual confirmation of the threat would provide a valuable asset to modern defense. Current solutions suffer from a multitude of deficiencies - size, cost, power endurance, but most notably, an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as a partially unsupervised innovative background-modeling algorithm, which employ several data reduction strategies to operate in real-time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert including images to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system. In addition, we provide detection and false alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories - dismounted human, vehicle, non-human, or unknown.
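A background-modeling detector of the general kind mentioned can be illustrated with an exponential running average plus thresholding (a textbook sketch, not the DigitalTripwire algorithm; `alpha` and `threshold` are assumed values):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average: slow adaptation absorbs gradual
    lighting changes without erasing brief foreground activity."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Flag pixels deviating from the model by more than `threshold` grey levels."""
    return np.abs(frame.astype(float) - background) > threshold

bg = np.full((48, 64), 100.0)      # learned static scene
frame = bg.copy()
frame[20:30, 10:18] = 200.0        # a bright 10x8 intruder region
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)  # model drifts slowly toward the new frame
```

Downstream logic would then group the masked pixels into blobs and classify them (human, vehicle, non-human, unknown) before waking the radio.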
Holographic fluorescence microscopy with incoherent digital holographic adaptive optics
NASA Astrophysics Data System (ADS)
Jang, Changwon; Kim, Jonghyun; Clark, David C.; Lee, Seungjae; Lee, Byoungho; Kim, Myung K.
2015-11-01
Introduction of adaptive optics technology into astronomy and ophthalmology has made great contributions in these fields, allowing one to recover images blurred by atmospheric turbulence or aberrations of the eye. Similar adaptive optics improvement in microscopic imaging is also of interest to researchers using various techniques. Current adaptive optics technology typically contains three key elements: a wavefront sensor, a wavefront corrector, and a controller. These hardware elements tend to be bulky, expensive, and limited in resolution, involving, for example, lenslet arrays for sensing or multiactuator deformable mirrors for correcting. We have previously introduced an alternate approach based on unique capabilities of digital holography, namely direct access to the phase profile of an optical field and the ability to numerically manipulate the phase profile. We have also demonstrated that direct access and compensation of the phase profile are possible not only with conventional coherent digital holography, but also with a new type of digital holography using incoherent light: self-interference incoherent digital holography (SIDH). SIDH generates a complex (i.e., amplitude plus phase) hologram from one or several interferograms acquired with incoherent light, such as LEDs, lamps, sunlight, or fluorescence. The complex point spread function can be measured using guide-star illumination, which allows deterministic deconvolution of the full-field image. We present an experimental demonstration of aberration compensation in holographic fluorescence microscopy using SIDH. Adaptive optics by SIDH provides new tools for improved cellular fluorescence microscopy through intact tissue layers or other types of aberrant media.
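The deterministic deconvolution step, given a complex PSF measured from a guide star, can be sketched with a regularized Wiener filter; the regularization constant `k` and the FFT-based form are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-3):
    """Deconvolve a full-field image with a measured PSF (e.g. from a
    guide star) using a regularized Wiener filter in Fourier space."""
    H = np.fft.fft2(psf, s=image.shape)   # PSF transfer function
    G = np.fft.fft2(image)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)  # regularized inverse
    return np.real(np.fft.ifft2(F))
```

With an ideal (delta-function) PSF the filter returns the input image scaled by 1/(1 + k), confirming the pass-through behaviour.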
Superconductor Digital-RF Receiver Systems
NASA Astrophysics Data System (ADS)
Mukhanov, Oleg A.; Kirichenko, Dmitri; Vernik, Igor V.; Filippov, Timur V.; Kirichenko, Alexander; Webber, Robert; Dotsenko, Vladimir; Talalaevskii, Andrei; Tang, Jia Cao; Sahu, Anubhav; Shevchenko, Pavel; Miller, Robert; Kaplan, Steven B.; Sarwana, Saad; Gupta, Deepnarayan
Digital superconductor electronics has been experiencing rapid maturation with the emergence of smaller-scale, lower-cost communications applications which became the major technology drivers. These applications are primarily in the area of wireless communications, radar, and surveillance as well as in imaging and sensor systems. In these areas, the fundamental advantages of superconductivity translate into system benefits through novel Digital-RF architectures with direct digitization of wideband, high-frequency radio frequency (RF) signals. At the same time the availability of relatively small 4 K cryocoolers has lowered the foremost market barrier for cryogenically-cooled digital electronic systems. Recently, we have achieved a major breakthrough in the development, demonstration, and successful delivery of cryocooled superconductor digital-RF receivers directly digitizing signals in a broad range from kilohertz to gigahertz. These essentially hybrid-technology systems combine a variety of superconductor and semiconductor technologies packaged with two-stage commercial cryocoolers: cryogenic Nb mixed-signal and digital circuits based on Rapid Single Flux Quantum (RSFQ) technology, room-temperature amplifiers, FPGA processing and control circuitry. The demonstrated cryocooled digital-RF systems are the world's first and fastest directly digitizing receivers operating with live satellite signals in X-band and performing signal acquisition in HF to L-band at ~30 GHz clock frequencies.
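Direct digitization followed by digital downconversion is the core of a Digital-RF receiver chain and can be sketched numerically; the mix-and-decimate scheme below is a generic illustration (the boxcar low-pass is a crude stand-in for the receiver's actual decimation filters):

```python
import numpy as np

def digital_downconvert(samples, fs, f_lo, decimate=4):
    """Mix a directly digitized RF stream to baseband with a numerical
    oscillator, then low-pass (boxcar average) and decimate."""
    n = np.arange(len(samples))
    baseband = samples * np.exp(-2j * np.pi * f_lo / fs * n)
    # crude low-pass: average over each decimation window
    trimmed = baseband[: len(baseband) // decimate * decimate]
    return trimmed.reshape(-1, decimate).mean(axis=1)
```

Mixing a pure tone at the oscillator frequency yields a DC baseband level of one half the tone amplitude, as expected from the product-to-sum identity.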
Multi-ion detection by one-shot optical sensors using a colour digital photographic camera.
Lapresta-Fernández, Alejandro; Capitán-Vallvey, Luis Fermín
2011-10-07
The feasibility and performance of a procedure to evaluate previously developed one-shot optical sensors as single and selective analyte sensors for potassium, magnesium and hardness are presented. The procedure uses a conventional colour digital photographic camera as the detection system for simultaneous multianalyte detection. A 6.0 megapixel camera was used, and the procedure describes how it is possible to quantify potassium, magnesium and hardness simultaneously from the images captured, using multianalyte one-shot sensors based on ionophore-chromoionophore chemistry, employing the colour information computed from a defined region of interest on the sensing membrane. One of the colour channels in the red, green, blue (RGB) colour space is used to build the analytical parameter, the effective degree of protonation (1-α_eff), in good agreement with the theoretical model. The linearization of the sigmoidal response function improves the limit of detection (LOD) and widens the analytical range in all cases studied: the LOD improved from 5.4 × 10⁻⁶ to 2.7 × 10⁻⁷ M for potassium, from 1.4 × 10⁻⁴ to 2.0 × 10⁻⁶ M for magnesium, and from 1.7 to 2.0 × 10⁻² mg L⁻¹ of CaCO₃ for hardness. The method's precision was determined in terms of the relative standard deviation (RSD%), which was from 2.4 to 7.6 for potassium, from 6.8 to 7.8 for magnesium and from 4.3 to 7.8 for hardness. The procedure was applied to the simultaneous determination of potassium, magnesium and hardness using multianalyte one-shot sensors in different types of waters and beverages in order to cover the entire application range, statistically validating the results against atomic absorption spectrometry as the reference procedure. Accordingly, this paper is an attempt to demonstrate the possibility of using a conventional digital camera as an analytical device to measure this type of one-shot sensor based on ionophore-chromoionophore chemistry instead of using conventional lab instrumentation.
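The analytical parameter is built by normalising a colour-channel value between the responses of the fully protonated and fully deprotonated membrane; the exact normalisation used by the authors is not given in the abstract, so the linear form below is an assumption for illustration:

```python
def effective_protonation(channel, channel_prot, channel_deprot):
    """Effective degree of protonation (1 - alpha_eff) from a single
    colour-channel value, normalised between the fully protonated and
    fully deprotonated membrane responses (hypothetical normalisation)."""
    return (channel - channel_deprot) / (channel_prot - channel_deprot)
```

The value runs from 0 (fully deprotonated) to 1 (fully protonated) and is then related to analyte activity through the sigmoidal response function.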
NASA Technical Reports Server (NTRS)
Ebert, D. H.; Eppes, T. A.; Thomas, D. J.
1973-01-01
The impact of a conical scan versus a linear scan multispectral scanner (MSS) instrument was studied in terms of: (1) design modifications required in framing and continuous image recording devices; and (2) changes in configurations of an all-digital precision image processor. A baseline system was defined to provide the framework for comparison, and included pertinent spacecraft parameters, a conical MSS, a linear MSS, an image recording system, and an all-digital precision processor. Lateral offset pointing of the sensors over a range of plus or minus 20 deg was considered. The study addressed the conical scan impact on geometric, radiometric, and aperture correction of MSS data in terms of hardware and software considerations, system complexity, quality of corrections, throughput, and cost of implementation. It was concluded that: (1) if the MSS data are to be only film recorded, then there is only a nominal conical scan impact on the ground data processing system; and (2) if digital data are to be provided to users on computer compatible tapes in rectilinear format, then there is a significant conical scan impact on the ground data processing system.
Estimating plant area index for monitoring crop growth dynamics using Landsat-8 and RapidEye images
NASA Astrophysics Data System (ADS)
Shang, Jiali; Liu, Jiangui; Huffman, Ted; Qian, Budong; Pattey, Elizabeth; Wang, Jinfei; Zhao, Ting; Geng, Xiaoyuan; Kroetsch, David; Dong, Taifeng; Lantz, Nicholas
2014-01-01
This study investigates the use of two different optical sensors, the multispectral imager (MSI) onboard the RapidEye satellites and the operational land imager (OLI) onboard Landsat-8, for mapping within-field variability of crop growth conditions and tracking seasonal growth dynamics. The study was carried out in southern Ontario, Canada, during the 2013 growing season for three annual crops: corn, soybeans, and winter wheat. Plant area index (PAI) was measured at different growth stages using digital hemispherical photography at two corn fields, two winter wheat fields, and two soybean fields. Comparison between several conventional vegetation indices derived from concurrently acquired image data from the two sensors showed good agreement. The two-band enhanced vegetation index (EVI2) and the normalized difference vegetation index (NDVI) were derived from the surface reflectance of the two sensors. The study showed that EVI2 was more resistant to saturation at high biomass than NDVI. A linear relationship could be used for crop green effective PAI estimation from EVI2, with a coefficient of determination (R²) of 0.85 and a root-mean-square error of 0.53. The estimated multitemporal green PAI product was found to capture the seasonal dynamics of the three crops.
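Both indices have standard closed forms (NDVI, and EVI2 with the usual two-band coefficients); the linear PAI model's slope and intercept are not reported in the abstract, so they are left as placeholder parameters:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band enhanced vegetation index with the standard coefficients."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def green_pai(evi2_value, slope=1.0, intercept=0.0):
    """Linear green effective-PAI estimate from EVI2. The slope and
    intercept are placeholders; the paper's fitted coefficients are
    not given in the abstract."""
    return slope * evi2_value + intercept
```

Because EVI2's denominator grows with red reflectance, the index saturates later than NDVI at high biomass, which is what the study observed.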
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
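Locating the shock shadowgraph reduces to finding the deepest dip in a 1-D pixel intensity profile; a minimal sketch (a real implementation would add thresholding against the laser-sheet baseline and sub-pixel interpolation):

```python
def find_shock_dip(profile):
    """Return the pixel index of the deepest negative peak (dip) in a
    1-D intensity profile of the projected laser sheet."""
    min_idx = 0
    for i, value in enumerate(profile):
        if value < profile[min_idx]:
            min_idx = i
    return min_idx
```

In the hardware version this scan runs in the digital circuit at line rate, so the camera can report an edge location for every captured frame.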
Self-calibration for lensless color microscopy.
Flasseur, Olivier; Fournier, Corinne; Verrier, Nicolas; Denis, Loïc; Jolivet, Frédéric; Cazier, Anthony; Lépine, Thierry
2017-05-01
Lensless color microscopy (also called in-line digital color holography) is a recent quantitative 3D imaging method used in several areas including biomedical imaging and microfluidics. By targeting cost-effective and compact designs, the wavelength of the low-end sources used is known only imprecisely, in particular because of their dependence on temperature and power supply voltage. This imprecision is the source of biases during the reconstruction step. An additional source of error is the crosstalk phenomenon, i.e., the mixture in color sensors of signals originating from different color channels. We propose to use a parametric inverse problem approach to achieve self-calibration of a digital color holographic setup. This process provides an estimation of the central wavelengths and crosstalk. We show that taking the crosstalk phenomenon into account in the reconstruction step improves its accuracy.
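If the crosstalk is modelled as a linear mixing of the true colour channels, correcting it amounts to solving a small linear system per pixel; the mixing-matrix form below is a common model, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def correct_crosstalk(measured_rgb, mixing_matrix):
    """Undo colour-channel crosstalk under the linear model
    measured = M @ true, i.e. true = inv(M) @ measured, where M is
    the (estimated) crosstalk matrix."""
    return np.linalg.solve(np.asarray(mixing_matrix),
                           np.asarray(measured_rgb, dtype=float))
```

In the paper's inverse-problem approach, both this mixing matrix and the central wavelengths are estimated jointly during reconstruction rather than fixed in advance.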
Design and realization of an active SAR calibrator for TerraSAR-X
NASA Astrophysics Data System (ADS)
Dummer, Georg; Lenz, Rainer; Lutz, Benjamin; Kühl, Markus; Müller-Glaser, Klaus D.; Wiesbeck, Werner
2005-10-01
TerraSAR-X is a new earth observing satellite which will be launched in spring 2006. It carries a high resolution X-band SAR sensor. For high image data quality, accurate ground calibration targets are necessary. This paper describes a novel system concept for an active and highly integrated, digitally controlled SAR system calibrator. A total of 16 active transponder and receiver systems and 17 receiver only systems will be fabricated for a calibration campaign. The calibration units serve for absolute radiometric calibration of the SAR image data. Additionally, they are equipped with an extra receiver path for two dimensional satellite antenna pattern recognition. The calibrator is controlled by a dedicated digital Electronic Control Unit (ECU). The different voltages needed by the calibrator and the ECU are provided by the third main unit called Power Management Unit (PMU).
NASA Astrophysics Data System (ADS)
Kuester, M. A.
2015-12-01
Remote sensing is a powerful tool for monitoring changes on the surface of the Earth at a local or global scale. The use of data sets from different sensors across many platforms, or even a single sensor over time, can bring a wealth of information when exploring anthropogenic changes to the environment. For example, variations in crop yield and health for a specific region can be detected by observing changes in the spectral signature of the particular species under study. However, changes in the atmosphere, sun illumination and viewing geometries during image capture can result in inconsistent image data, hindering automated information extraction. Additionally, an incorrect spectral radiometric calibration will lead to false or misleading results. It is therefore critical that the data being used are normalized and calibrated on a regular basis to ensure that physically derived variables are as close to truth as is possible. Although most earth observing sensors are well-calibrated in a laboratory prior to launch, a change in the radiometric response of the system is inevitable due to thermal, mechanical or electrical effects caused during the rigors of launch or by the space environment itself. Outgassing and exposure to ultra-violet radiation will also have an effect on the sensor's filter responses. Pre-launch lamps and other laboratory calibration systems can also fall short in representing the actual output of the Sun. A presentation of the differences in the results of some example cases (e.g. geology, agriculture) derived for science variables using pre- and post-launch calibration will be presented using DigitalGlobe's WorldView-3 super spectral sensor, with bands in the visible and near infrared, as well as in the shortwave infrared. Important defects caused by an incomplete (i.e. pre-launch only) calibration will be discussed using validation data where available. 
In addition, the benefits of using a well-validated surface reflectance product will be presented. DigitalGlobe is committed to providing ongoing assessment of the radiometric performance of our sensors, which allows customers to get the most out of our extensive multi-sensor constellation.
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms.
This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF modeling. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
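One stage of such a sensor chain can be sketched as shot noise, gain, read noise, and saturation; the parameter values and the omission of MTF blur and blooming are simplifications for illustration, not the LLL model itself:

```python
import numpy as np

def lll_sensor_stage(photons, gain=1.0, read_noise=2.0,
                     full_well=255.0, seed=0):
    """Sketch of one stage in a low-light-level sensor model:
    Poisson shot noise, amplification, additive Gaussian read noise,
    and hard saturation (blooming is not modelled here)."""
    rng = np.random.default_rng(seed)
    signal = rng.poisson(photons).astype(float)            # shot noise
    signal = gain * signal + rng.normal(0.0, read_noise,
                                        photons.shape)     # read noise
    return np.clip(signal, 0.0, full_well)                 # saturation
```

A full multi-stage model would chain several such stages, applying the stage-appropriate MTF between them.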
Neural networks: Application to medical imaging
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.
1994-01-01
The research mission is the development of computer assisted diagnostic (CAD) methods for improved diagnosis of medical images including digital x-ray sensors and tomographic imaging modalities. The CAD algorithms include advanced methods for adaptive nonlinear filters for image noise suppression, hybrid wavelet methods for feature segmentation and enhancement, and high convergence neural networks for feature detection and VLSI implementation of neural networks for real time analysis. Other missions include (1) implementation of CAD methods on hospital based picture archiving computer systems (PACS) and information networks for central and remote diagnosis and (2) collaboration with defense and medical industry, NASA, and federal laboratories in the area of dual use technology conversion from defense or aerospace to medicine.
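A basic instance of nonlinear filtering for image noise suppression is the median filter, which removes impulse noise while preserving edges; this sketch is illustrative, not one of the CAD algorithms themselves:

```python
import numpy as np

def median_filter(img, size=3):
    """Simple nonlinear (median) filter: replace each pixel with the
    median of its size-by-size neighbourhood, suppressing impulse
    noise while preserving edges better than a linear blur."""
    r = size // 2
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

The adaptive variants referenced above would additionally vary the window size or weighting with local image statistics.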
Design on the x-ray oral digital image display card
NASA Astrophysics Data System (ADS)
Wang, Liping; Gu, Guohua; Chen, Qian
2009-10-01
Based on the main characteristics of X-ray imaging, an X-ray oral digital image display card was designed and debugged using the principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding procedures were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. Using a complex programmable logic device to provide gating and timing logic, the functions of counting, reading CPU control instructions, triggering exposure, and controlling the sample-and-hold stage were completed. The circuit components were adjusted according to image quality and noise analysis, and high-quality images were obtained.
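Correlated double sampling itself is a subtraction of two samples per pixel, the reset level and the signal level, so that offset and reset (kTC) noise common to both cancels. A minimal illustration:

```python
def correlated_double_sample(reset_level, signal_level):
    """CDS: subtract the reset-level sample from the signal-level
    sample; any offset common to both samples cancels out."""
    return signal_level - reset_level
```

Adding the same offset to both samples leaves the CDS output unchanged, which is exactly the noise-cancellation property the technique relies on.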
LANDSAT-4 multispectral scanner (MSS) subsystem radiometric characterization
NASA Technical Reports Server (NTRS)
Alford, W. (Editor); Barker, J. (Editor); Clark, B. P.; Dasgupta, R.
1983-01-01
The multispectral scanner (MSS) subsystem and its spectral characteristics are described, and methods are given for relating video digital levels on computer-compatible tapes to radiance into the sensor. Topics covered include prelaunch calibration procedures and postlaunch radiometric processing. Examples of current data resident on the MSS image processing system are included. The MSS on LANDSAT 4 is compared with the scanners on earlier LANDSAT satellites.
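Relating digital levels to at-sensor radiance is a linear rescaling between band-specific minimum and maximum radiances; the 7-bit range below reflects the radiometrically corrected MSS products, and the `l_min`/`l_max` values would come from the calibration tables:

```python
def dn_to_radiance(dn, l_min, l_max, dn_max=127):
    """Linear rescaling of quantized counts (DN) to at-sensor radiance.
    l_min and l_max are the band-specific calibration radiances; the
    default dn_max assumes a 7-bit radiometrically corrected product."""
    return l_min + (l_max - l_min) * dn / dn_max
```

Postlaunch radiometric processing amounts to keeping these calibration coefficients current as the sensor response drifts on orbit.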
2017-08-01
In this report, approximate transforms that closely follow the DFT have been studied and found. FFTs are used extensively in communications, data networks, sensor networks, cognitive radio, radar and beamforming, imaging, filtering, correlation, and radio astronomy. (Approved for public release; distribution is unlimited.)
Mapping urban land cover from space: Some observations for future progress
NASA Technical Reports Server (NTRS)
Gaydos, L.
1982-01-01
The multilevel classification system adopted by the USGS for operational mapping of land use and land cover at levels 1 and 2 is discussed and the successes and failures of mapping land cover from LANDSAT digital data are reviewed. Techniques used for image interpretation and their relationships to sensor parameters are examined. The requirements for mapping levels 2 and 3 classes are considered.
Development of Thermal Infrared Sensor to Supplement Operational Land Imager
NASA Technical Reports Server (NTRS)
Shu, Peter; Waczynski, Augustyn; Kan, Emily; Wen, Yiting; Rosenberry, Robert
2012-01-01
The thermal infrared sensor (TIRS) is a quantum well infrared photodetector (QWIP)-based instrument intended to supplement the Operational Land Imager (OLI) for the Landsat Data Continuity Mission (LDCM). The TIRS instrument is a far-infrared imager operating in the pushbroom mode with two IR channels: 10.8 and 12 µm. The focal plane contains three 640 × 512 QWIP arrays mounted onto a silicon substrate. The readout integrated circuit (ROIC) addresses each pixel on the QWIP arrays and reads out the pixel value (signal). The ROIC is controlled by the focal plane electronics (FPE) by means of clock signals and bias voltage values. How the FPE is designed to control and interact with the TIRS focal plane assembly (FPA) is the basis for this work. The FPE must command and control the FPA, extract analog signals from the FPA, convert the analog signals to digital format, and send them via a serial link (USB) to a computer. The FPE accomplishes these functions by converting electrical power from generic power supplies to the bias power required by the FPA. The FPE also generates digital clocking signals and shifts the typical transistor-transistor logic (TTL) levels to the ±5 V required by the FPA. The FPE uses an application-specific integrated circuit (ASIC) named System Image, Digitizing, Enhancing, Controlling, And Retrieving (SIDECAR) from Teledyne Corp. to generate the clocking patterns commanded by the user. The uniqueness of the FPE for TIRS lies in the fact that the TIRS FPA has three QWIP detector arrays, and all three must operate in synchronization to avoid data skewing while observing Earth from orbit. The observing scenario may be customized by uploading new control software to the SIDECAR.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices for the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, and low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the utilization of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera - EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop) - are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne, collection platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influences the selection of the aperture and shutter speed, which, along with other variables, allows for estimation of the apparent image motion (AIM) blur in the resulting images. The importance of the image edges in the application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO.
The single most important camera capture variable is exposure bias (EV), with a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
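The apparent image motion (AIM) blur mentioned above is commonly estimated as the ground distance travelled during the exposure divided by the ground sample distance; the formula below is that standard estimate, not necessarily the exact expression used in the study:

```python
def apparent_image_motion(ground_speed, exposure_time, gsd):
    """Apparent image motion blur in pixels: platform ground speed (m/s)
    times exposure time (s), divided by the ground sample distance (m)."""
    return ground_speed * exposure_time / gsd
```

For example, a platform at 50 m/s with a 1 ms exposure and 5 cm GSD smears each point across about one pixel, which is usually taken as the upper limit for sharp imagery.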
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
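For comparison with the Wiener reconstruction used in the paper, a much simpler demosaicking baseline is bilinear interpolation over a Bayer (RGGB) mosaic; the normalized-convolution form below is an illustrative stand-in, not the authors' method or their optimized array pattern:

```python
import numpy as np

def conv2same(img, k):
    """Tiny 'same' 2-D convolution with zero padding (symmetric kernels)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic via normalized
    convolution: interpolate each colour from its sampled sites only."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask
    k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    k_g = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
    def interp(mask, k):
        return conv2same(raw * mask, k) / conv2same(mask, k)
    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])
```

Any candidate array pattern can be evaluated by demosaicking a test image and scoring the perceptual error against the fully sampled original, which is the role the Wiener reconstruction plays in the paper's design loop.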
de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba
2011-01-01
The development of a simple, rapid, and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (red, green and blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
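Extracting the RGB signal reduces to averaging the colour channels over a region of interest in each frame; a minimal sketch (the function name and ROI convention are illustrative, not from the paper):

```python
import numpy as np

def roi_mean_rgb(image, top, left, height, width):
    """Mean R, G, B values over a rectangular region of interest of an
    H x W x 3 frame, as one might use to track precipitate turbidity
    in successive webcam images."""
    roi = image[top:top + height, left:left + width]
    return roi.reshape(-1, 3).mean(axis=0)
```

A calibration curve would then relate one of these channel means to the precipitated-salt concentration.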